Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations
International Nuclear Information System (INIS)
Carter, L.L.; Hendricks, J.S.
1983-01-01
The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation while leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward-peaked Klein-Nishina scattering of gamma rays.
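For readers unfamiliar with the technique, the distance-to-collision biasing described above can be sketched in a few lines. This is a minimal illustration under the usual textbook conventions, not the authors' code; the transform parameter `p` and the direction-cosine argument `mu` are generic names introduced here.

```python
import math
import random

def sample_biased_distance(sigma_t, p, mu, rng=random.random):
    """Exponential-transform sampling of the distance to collision.

    Instead of the analog density sigma_t * exp(-sigma_t * s), sample from a
    stretched density with sigma_star = sigma_t * (1 - p * mu), where p is the
    transform parameter and mu the direction cosine toward the region of
    interest.  The statistical weight is corrected so the estimator stays
    unbiased: w = analog pdf / biased pdf, evaluated at the sampled distance.
    """
    sigma_star = sigma_t * (1.0 - p * mu)
    s = -math.log(rng()) / sigma_star
    w = (sigma_t / sigma_star) * math.exp(-(sigma_t - sigma_star) * s)
    return s, w
```

For a particle flying toward the region of interest (mu near 1), sigma_star is smaller than sigma_t, so flights are stretched deeper into the shield and the weight compensates.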
Cold moderator scattering kernels
International Nuclear Information System (INIS)
MacFarlane, R.E.
1989-01-01
New thermal-scattering-law files in ENDF format have been developed for solid methane, liquid methane, liquid ortho- and para-hydrogen, and liquid ortho- and para-deuterium, using up-to-date models that include such effects as incoherent elastic scattering in the solid, diffusion and hindered vibrations and rotations in the liquids, and spin correlations for the hydrogen and deuterium. These files were generated with the new LEAPR module of the NJOY Nuclear Data Processing System. Other modules of this system were used to produce cross sections for these moderators in the correct format for the continuous-energy Monte Carlo code (MCNP) being used for cold-moderator-design calculations at the Los Alamos Neutron Scattering Center (LANSCE). 20 refs., 14 figs
Point kernels and superposition methods for scatter dose calculations in brachytherapy
International Nuclear Information System (INIS)
Carlsson, A.K.
2000-01-01
Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized as biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
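The biexponential parametrization mentioned at the end of the abstract can be illustrated with a small sketch. The functional form below is the generic two-exponential shape; the coefficients are hypothetical placeholders, not the author's fitted values.

```python
import math

def biexp_scatter_kernel(r, A1, mu1, A2, mu2):
    """Biexponential parametrization of a scatter dose point kernel,
    k(r) = A1*exp(-mu1*r) + A2*exp(-mu2*r).

    A compact form like this is convenient for collapsed-cone style
    superposition, since each exponential can be accumulated recursively
    along a transport line.  All parameters here are illustrative.
    """
    return A1 * math.exp(-mu1 * r) + A2 * math.exp(-mu2 * r)
```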
Scattering kernels and cross sections working group
International Nuclear Information System (INIS)
Russell, G.; MacFarlane, B.; Brun, T.
1998-01-01
Topics addressed by this working group are: (1) immediate needs of the cold-moderator community and how to fill them; (2) synthetic scattering kernels; (3) very simple synthetic scattering functions; (4) measurements of interest; and (5) general issues. Brief summaries are given for each of these topics
Analytic scattering kernels for neutron thermalization studies
International Nuclear Information System (INIS)
Sears, V.F.
1990-01-01
Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H2/D2 mixtures. The total scattering cross sections are also calculated and compared with available experimental results
SCAP-82, Single Scattering, Albedo Scattering, Point-Kernel Analysis in Complex Geometry
International Nuclear Information System (INIS)
Disney, R.K.; Vogtman, S.E.
1987-01-01
1 - Description of problem or function: SCAP solves for radiation transport in complex geometries using the single-scatter or albedo-scatter point kernel method. The program is designed to calculate the neutron or gamma-ray radiation level at detector points located within or outside a complex radiation-scatter source geometry or a user-specified discrete scattering volume. Geometry is described by zones bounded by intersecting quadratic surfaces, with an arbitrary maximum number of boundary surfaces per zone. Anisotropic point sources are described as pointwise energy-dependent distributions of polar angles on a meridian; isotropic point sources may also be specified. The attenuation function for gamma rays is an exponential function on the primary source leg and the scatter leg, with a buildup-factor approximation to account for multiple scatter on the scatter leg. The neutron attenuation function is an exponential function using neutron removal cross sections on the primary source leg and the scatter leg. Line or volumetric sources can be represented as a distribution of isotropic point sources, with uncollided line-of-sight attenuation and buildup calculated between each source point and the detector point. 2 - Method of solution: A point kernel method using an anisotropic or isotropic point source representation is used; line-of-sight material attenuation and inverse-square spatial attenuation are applied between the source point and the scatter points, and between the scatter points and the detector point. A direct summation of individual point source results is obtained. 3 - Restrictions on the complexity of the problem: The SCAP program is written with completely flexible dimensioning, so no restrictions are imposed on the number of energy groups or geometric zones. The geometric zone description is restricted to zones defined by boundary surfaces given by the general quadratic equation or one of its degenerate forms. The only restriction in the program is that the total
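A minimal sketch of the point-kernel summation described in the method of solution: exponential line-of-sight attenuation, inverse-square geometric spreading, and direct summation over isotropic point sources representing a line or volume source. This illustrates the general technique, not SCAP itself; buildup factors and the scatter leg are omitted.

```python
import math

def uncollided_flux(source_strength, sigma_removal, distance):
    """Line-of-sight point-kernel flux: exponential material attenuation
    along the path and inverse-square geometric attenuation (no buildup)."""
    return (source_strength * math.exp(-sigma_removal * distance)
            / (4.0 * math.pi * distance ** 2))

def detector_response(points, detector, sigma_removal):
    """Direct summation over isotropic point sources, each given as
    (x, y, z, strength), for a detector at (x, y, z).  Single uniform
    material is assumed for simplicity."""
    total = 0.0
    for (x, y, z, q) in points:
        dx, dy, dz = detector[0] - x, detector[1] - y, detector[2] - z
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        total += uncollided_flux(q, sigma_removal, r)
    return total
```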
Ideal gas scattering kernel for energy dependent cross-sections
International Nuclear Information System (INIS)
Rothenstein, W.; Dagan, R.
1998-01-01
A third, and final, paper on the calculation of the joint kernel for neutron scattering by an ideal gas in thermal agitation is presented, when the scattering cross-section is energy dependent. The kernel is a function of the neutron energy after scattering, and of the cosine of the scattering angle, as in the case of the ideal gas kernel for a constant bound atom scattering cross-section. The final expression is suitable for numerical calculations
International Nuclear Information System (INIS)
Takahashi, Akito; Yamamoto, Junji; Ebisuya, Mituo; Sumita, Kenji
1979-01-01
A new method for calculating anisotropic neutron transport is proposed for the angular spectral analysis of D-T fusion reactor neutronics. The method is based on the transport equation with a new type of anisotropic scattering kernel formulated as a single function I_i(μ', μ) instead of a polynomial expansion, for instance Legendre polynomials. In calculations of angular flux spectra using scattering kernels with the Legendre polynomial expansion, oscillations with negative flux are often observed; in principle, these oscillations disappear with the new method. In this work, anisotropic scattering kernels were treated for elastic scattering and for the inelastic scatterings that excite discrete energy levels; the other scattering processes were included in isotropic scattering kernels. An approximation method, using a first-collision source written with the I_i(μ', μ) function, was introduced to attenuate the oscillations when one is obliged to use scattering kernels with the Legendre polynomial expansion. Results calculated with this approximation showed remarkable improvement in the analysis of angular flux spectra in a slab system of lithium metal with a D-T neutron source. (author)
Reconstruction of atomic effective potentials from isotropic scattering factors
International Nuclear Information System (INIS)
Romera, E.; Angulo, J.C.; Torres, J.J.
2002-01-01
We present a method for the approximate determination of one-electron effective potentials of many-electron systems from a finite number of values of the isotropic scattering factor. The method is based on the minimum cross-entropy technique. An application to some neutral ground-state atomic systems has been done within a Hartree-Fock framework
Electromagnetic illusion with isotropic and homogeneous materials through scattering manipulation
International Nuclear Information System (INIS)
Yang, Fan; Mei, Zhong Lei; Jiang, Wei Xiang; Cui, Tie Jun
2015-01-01
A new isotropic and homogeneous illusion device for electromagnetic waves is proposed. This single-shelled device can change the scattering fingerprint of the covered object into another one by manipulating the scattering of the composite structure. We show that an electrically small sphere can be disguised as another small sphere with different electromagnetic parameters. The device can even make an electrically small dielectric sphere behave like a conducting one. Full-wave simulations confirm the performance of the proposed illusion device. (paper)
Graphical analyses of connected-kernel scattering equations
International Nuclear Information System (INIS)
Picklesimer, A.
1982-10-01
Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The graphical method also leads to a new, simplified form for some members of the class and elucidates the general structural features of the entire class
Ideal Gas Resonance Scattering Kernel Routine for the NJOY Code
International Nuclear Information System (INIS)
Rothenstein, W.
1999-01-01
In a recent publication an expression for the temperature-dependent double-differential ideal gas scattering kernel was derived for the case of scattering cross sections that are energy dependent. Some tabulations and graphical representations of the characteristics of these kernels are presented in Ref. 2. They demonstrate the increased probability that neutron scattering by a heavy nuclide near one of its pronounced resonances will bring the neutron energy nearer to the resonance peak. This enhances upscattering when a neutron with energy just below the resonance peak collides with such a nuclide. A routine for using the new kernel has now been introduced into the NJOY code. Here, its principal features are described, followed by comparisons between scattering data obtained with the new kernel and with the standard ideal gas kernel, when such comparisons are meaningful (i.e., for constant values of the scattering cross section at 0 K). The new ideal gas kernel for a variable σ_s^0(E) at 0 K leads to the correct Doppler-broadened σ_s^T(E) at temperature T
Kernel integration scatter model for parallel beam gamma camera and SPECT point source response
International Nuclear Information System (INIS)
Marinkovic, P.M.
2001-01-01
Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper a kernel integration scatter model for the parallel-beam gamma camera and SPECT point source response, based on the Klein-Nishina formula, is proposed. This method models the primary photon distribution as well as first-order Compton scattering. It also includes a correction for multiple scattering, applying a point isotropic single-medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma-ray attenuation in the imaged object, based on a known μ-map distribution, is considered as well. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply using acceptance angles derived from its physical dimensions: any gamma ray satisfying this angle is passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with Monte Carlo MCNP-4a numerical phantom simulations, and excellent results were obtained. Physical phantom experiments to confirm the method are planned. (author)
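For reference, the Klein-Nishina formula on which the scatter model above is based can be evaluated directly. The sketch below computes the free-electron differential cross section and is illustrative only; the camera, collimator, and buildup modeling of the paper are not reproduced.

```python
import math

R_E = 2.8179403262e-13  # classical electron radius, cm
MEC2 = 0.51099895       # electron rest energy, MeV

def klein_nishina_dcs(E_MeV, theta):
    """Klein-Nishina differential cross section dsigma/dOmega (cm^2/sr)
    for a photon of energy E_MeV Compton-scattering through angle theta.

    ratio = E'/E = 1 / (1 + alpha*(1 - cos(theta))), alpha = E / (m_e c^2).
    """
    alpha = E_MeV / MEC2
    ratio = 1.0 / (1.0 + alpha * (1.0 - math.cos(theta)))
    return 0.5 * R_E ** 2 * ratio ** 2 * (ratio + 1.0 / ratio
                                          - math.sin(theta) ** 2)
```

At 140 keV (Tc-99m) and above, the distribution is strongly forward-peaked, which is why first-order scatter dominates near the photopeak window.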
Graphical analyses of connected-kernel scattering equations
International Nuclear Information System (INIS)
Picklesimer, A.
1983-01-01
Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The basic result is the application of graphical methods to the derivation of interaction-set equations. This yields a new, simplified form for some members of the class and elucidates the general structural features of the entire class
Study on the scattering law and scattering kernel of hydrogen in zirconium hydride
International Nuclear Information System (INIS)
Jiang Xinbiao; Chen Wei; Chen Da; Yin Banghua; Xie Zhongsheng
1999-01-01
The nuclear analytical model for calculating the scattering law and scattering kernel for the uranium zirconium hydride reactor is described. In light of the acoustic and optical modes of zirconium hydride, its frequency distribution function f(ω) is given, and the scattering law of hydrogen in zirconium hydride is obtained with GASKET. The scattering kernel σ_l(E_0→E) of hydrogen bound in zirconium hydride is provided by the SMP code in the standard WIMS cross-section library. With this library, WIMS is used to calculate the thermal neutron energy spectrum of the fuel cell. The results are satisfactory
International Nuclear Information System (INIS)
Drozdowicz, K.
1995-01-01
A comprehensive unified description of the application of Granada's Synthetic Model to slow-neutron scattering by molecular systems is continued. Detailed formulae for the zero-order energy-transfer kernel are presented, based on the general formalism of the model. An explicit analytical formula for the total scattering cross section as a function of the incident neutron energy is also obtained. Expressions of the free gas model for the zero-order scattering kernel and for the total scattering kernel are considered as a sub-case of the Synthetic Model. (author). 10 refs
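The free-gas sub-case mentioned above has a well-known closed form for the effective scattering cross section of a target gas with a constant free-atom cross section. The sketch below evaluates that standard textbook expression; it is the generic free-gas result, not the Synthetic Model formula of the paper.

```python
import math

def free_gas_sigma(E, sigma_free, A, kT):
    """Effective free-gas scattering cross section at incident energy E.

    Standard result for a Maxwellian target gas of mass ratio A at
    temperature kT (same units as E), assuming a constant free-atom
    cross section sigma_free:

        sigma(E) = sigma_free * [ (1 + 1/(2x^2)) erf(x) + exp(-x^2)/(x sqrt(pi)) ]

    with x = sqrt(A * E / kT).  It tends to sigma_free at high energy and
    grows as ~1/v at low energy.
    """
    x = math.sqrt(A * E / kT)
    return sigma_free * ((1.0 + 0.5 / x ** 2) * math.erf(x)
                         + math.exp(-x ** 2) / (x * math.sqrt(math.pi)))
```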
Development of Cold Neutron Scattering Kernels for Advanced Moderators
International Nuclear Information System (INIS)
Granada, J. R.; Cantargi, F.
2010-01-01
The development of scattering kernels for a number of molecular systems was performed, including a set of hydrogenous methylated aromatics such as toluene, mesitylene, and mixtures of those. In order to partially validate those new libraries, we compared predicted total cross sections with experimental data obtained in our laboratory. In addition, we have introduced a new model to describe the interaction of slow neutrons with solid methane in phase II (the stable phase below T = 20.4 K at atmospheric pressure). Very recently, a new scattering kernel to describe the interaction of slow neutrons with solid deuterium was also developed. The main dynamical characteristics of that system are contained in the formalism; the elastic processes, involving coherent and incoherent contributions, are fully described, as well as the spin-correlation effects.
Djebbi, Ramzi
2016-02-05
In anisotropic media, several parameters govern the propagation of compressional waves. To correctly invert surface-recorded seismic data in anisotropic media, a multi-parameter inversion is required. However, a tradeoff between parameters exists because several models can explain the same dataset. To understand these tradeoffs, diffraction/reflection and transmission-type sensitivity-kernel analyses are carried out. Such analyses can help us choose the appropriate parameterization for inversion. In tomography, the sensitivity kernels represent the effect of a parameter along the wave path between a source and a receiver. At a given illumination angle, similarities between sensitivity kernels highlight the tradeoff between the parameters. To discuss the parameterization choice in the context of finite-frequency tomography, we compute the sensitivity kernels of the instantaneous traveltimes derived from the seismic data traces. We consider the transmission case, with no encounter of an interface between a source and a receiver; with surface seismic data, this corresponds to a diving wave path. We also consider the diffraction/reflection case, where the wave path is formed by two parts: one from the source to a sub-surface point and the other from the sub-surface point to the receiver. We illustrate the different parameter sensitivities for an acoustic transversely isotropic medium with a vertical axis of symmetry. The sensitivity kernels depend on the parameterization choice. By comparing different parameterizations, we explain why the parameterization with the normal moveout velocity, the anelliptic parameter η, and the δ parameter is attractive when we invert diving and reflected events recorded in an active surface seismic experiment. © 2016 European Association of Geoscientists & Engineers.
Simulation of isotropic scattering of charged particles by composed potentials
Gerasimov, O Y
2003-01-01
The analytical model of scattering of charged particles by a multicentered adiabatic potential, consisting of a long-range Coulomb and short-range potentials, is used for the parametrization of elastic low-energy proton-deuteron scattering experiments. For energies of 2.26-13 MeV, analytical expressions for the scattering phase function are obtained in terms of identical parameters that depend on the scattering lengths and effective radii of proton-proton and proton-neutron scattering and on the effective size of the deuteron. The results are in good qualitative agreement with experiment.
Impact of the Improved Resonance Scattering Kernel on HTR Calculations
International Nuclear Information System (INIS)
Becker, B.; Dagan, R.; Broeders, C.H.M.; Lohnert, G.
2008-01-01
The importance of an advanced neutron scattering model for heavy isotopes with strongly energy-dependent cross sections, such as the pronounced resonances of U-238, has been discussed in various publications where the full double-differential scattering kernel was derived. In this study we quantify the effect of the new scattering model for specific innovative types of High Temperature Reactor (HTR) systems, which commonly exhibit a higher degree of heterogeneity and higher fuel temperatures, hence increasing the importance of the secondary neutron energy distribution. In particular, the impact on the multiplication factor (k∞) and the Doppler reactivity coefficient is presented in view of the packing factors and operating temperatures. A considerable reduction of k∞ (up to 600 pcm) and an increased Doppler reactivity (up to 10%) are observed. An increase of up to 2.3% in the Pu-239 inventory can be noticed at 90 MWd/tHM burnup, due to enhanced neutron absorption in U-238. These effects are more pronounced for design cases in which the neutron flux spectrum is hardened towards the resolved resonance range. (authors)
International Nuclear Information System (INIS)
Rothenstein, W.
2004-01-01
The current study is a sequel to a paper by Rothenstein and Dagan [Ann. Nucl. Energy 25 (1998) 209], where the ideal-gas-based kernel for scatterers with internal structure was introduced. This double-differential kernel includes the neutron energy after scattering as well as the cosine of the scattering angle for isotopes with strong scattering resonances. A new mathematical formalism enables the inclusion of the new kernel in NJOY [MacFarlane, R.E., Muir, D.W., 1994. The NJOY Nuclear Data Processing System Version 91 (LA-12740-m)]. Moreover, the computational time of the new kernel is reduced significantly, making it feasible for practical application. The completeness of the new kernel is proven mathematically and demonstrated numerically. Modifications necessary to remove the existing inconsistency of the secondary energy distribution in NJOY are presented
International Nuclear Information System (INIS)
Kornreich, D.E.; Ganapol, B.D.
1997-01-01
The linear Boltzmann equation for the transport of neutral particles is investigated with the objective of generating benchmark-quality evaluations of solutions for homogeneous infinite media. In all cases, the problems are stationary, of one energy group, and the scattering is isotropic. The solutions are generally obtained through the use of Fourier transform methods with the numerical inversions constructed from standard numerical techniques such as Gauss-Legendre quadrature, summation of infinite series, and convergence acceleration. Consideration of the suite of benchmarks in infinite homogeneous media begins with the standard one-dimensional problems: an isotropic point source, an isotropic planar source, and an isotropic infinite line source. The physical and mathematical relationships between these source configurations are investigated. The progression of complexity then leads to multidimensional problems with source configurations that also emit particles isotropically: the finite line source, the disk source, and the rectangular source. The scalar flux from the finite isotropic line and disk sources will have a two-dimensional spatial variation, whereas a finite rectangular source will have a three-dimensional variation in the scalar flux. Next, sources emitting particles anisotropically are considered. The most basic such source is the point beam giving rise to the Green's function, which is physically the most fundamental transport problem, yet may be constructed from the isotropic point source solution. Finally, the anisotropic plane and anisotropically emitting infinite line sources are considered. Thus, a firm theoretical and numerical base is established for the most fundamental neutral particle benchmarks in infinite homogeneous media
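The "physical and mathematical relationships between these source configurations" can be illustrated numerically: superposing uncollided point-source kernels over a plane reproduces the plane-source result (S/2)·E1(Σx). The sketch below does this by midpoint quadrature over rings of the plane; it treats only the uncollided component, not the full transport solutions benchmarked in the paper.

```python
import math

def point_kernel(S, sigma, r):
    """Uncollided scalar flux at distance r from an isotropic point source."""
    return S * math.exp(-sigma * r) / (4.0 * math.pi * r * r)

def plane_flux_from_points(S_per_area, sigma, x, n=20000, rho_max=30.0):
    """Uncollided flux at distance x from an isotropic plane source, built by
    superposing point-source kernels over concentric rings of the plane
    (midpoint rule in the ring radius rho).  Analytically this equals
    (S/2) * E1(sigma * x), the classic plane-source kernel."""
    total, drho = 0.0, rho_max / n
    for i in range(n):
        rho = (i + 0.5) * drho
        r = math.hypot(x, rho)
        total += point_kernel(S_per_area, sigma, r) * 2.0 * math.pi * rho * drho
    return total
```

With sigma = x = 1, the quadrature converges to (1/2)·E1(1) ≈ 0.10969, showing how the one-dimensional planar benchmark is a superposition of the point-source solution.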
Preliminary scattering kernels for ethane and triphenylmethane at cryogenic temperatures
Cantargi, F.; Granada, J. R.; Damián, J. I. Márquez
2017-09-01
Two potential cold moderator materials were studied: ethane and triphenylmethane. The first, ethane (C2H6), is an organic compound that is very interesting from the neutronic point of view, in some respects better than liquid methane for producing subthermal neutrons, not only because it remains in the liquid phase over a wider temperature range (Tf = 90.4 K, Tb = 184.6 K), but also because of its high protonic density together with a frequency spectrum with a low rotational energy band. The second, triphenylmethane, is a hydrocarbon with formula C19H16 that has already been proposed as a good candidate for a cold moderator. Following one of the main research topics of the Neutron Physics Department of Centro Atómico Bariloche, we present here two ways to estimate the frequency spectrum needed to feed the NJOY nuclear data processing system in order to generate the scattering law of each desired material. For ethane, molecular dynamics computer simulations were performed, while for triphenylmethane existing experimental and calculated data were used to produce a new scattering kernel. With these models, cross-section libraries were generated and applied to neutron spectra calculations.
International Nuclear Information System (INIS)
Sanchez G, J.
2015-09-01
The solution of the so-called canonical problems of neutron transport theory was given by Case, who developed a method akin to the classical eigenfunction expansion procedure, extended to admit singular eigenfunctions. The solution is given as a set consisting of a Fredholm integral equation coupled with a transcendental equation, which has to be solved for the expansion coefficients by iteration. Case's method makes extensive use of results from the theory of functions of a complex variable, and many successful approaches to solving the above-mentioned set approximately have been reported in the literature. We present here an entirely different approach, which deals with the canonical problems in a more direct and elementary manner. As far as we know, the original idea for the latter method is due to Carlvik, who devised the escape-probability approximation to the solution of the neutron transport equation in its integral form. In essence, the procedure consists in assuming a sectionally constant form of the neutron density, which in turn yields a set of linear algebraic equations obeyed by the assumed constant values of the density. Well-established techniques of numerical analysis for the solution of integral equations generalize the sectionally constant approach by assuming a sectionally low-degree polynomial for the unknown function. This procedure, also known as the arbitrary quadratures method, is especially suited to cases where the kernel of the integral equation is singular. The author presents the results obtained with the arbitrary quadratures method for the numerical calculation of the monoenergetic neutron density in a critical, homogeneous sphere of finite radius with isotropic scattering. The singular integral equation obeyed by the neutron density in the critical sphere is introduced, an outline of the method's main features is given, and tables and graphs of the density
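The sectionally constant procedure described above can be sketched generically: assume the unknown is constant on each of n cells, integrate the kernel over each cell, and solve the resulting linear algebraic system. The kernel and source used in the demonstration are toy choices, not the singular transport kernel of the critical-sphere problem.

```python
import numpy as np

def solve_sectionally_constant(kernel, source, a, n, m=20):
    """Solve phi(x) = source(x) + int_0^a K(x, y) phi(y) dy by assuming phi
    is sectionally (piecewise) constant on n equal cells of [0, a], in the
    spirit of Carlvik's escape-probability approach.

    kernel(x, ys) must accept a scalar x and an array ys and return an
    array of kernel values; source(xs) must accept an array of midpoints.
    Each cell integral of the kernel is done with an m-point midpoint rule.
    """
    h = a / n
    x = (np.arange(n) + 0.5) * h          # cell midpoints (collocation points)
    A = np.eye(n)
    for i in range(n):
        for j in range(n):
            ys = x[j] - h / 2 + (np.arange(m) + 0.5) * (h / m)
            A[i, j] -= np.sum(kernel(x[i], ys)) * (h / m)
    return x, np.linalg.solve(A, source(x))
```

For a constant kernel K = c and unit source on [0, 1], the exact solution is the constant 1/(1 − c), which the discretization reproduces exactly.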
Thermal neutron scattering kernels for sapphire and silicon single crystals
International Nuclear Information System (INIS)
Cantargi, F.; Granada, J.R.; Mayer, R.E.
2015-01-01
Highlights: • Thermal cross section libraries for sapphire and silicon single crystals were generated. • The Debye model was used to represent the vibrational frequency spectra fed to the NJOY code. • The sapphire total cross section was measured at Centro Atómico Bariloche. • The cross section libraries were validated with available experimental data. - Abstract: Sapphire and silicon are materials usually employed as filters in facilities with thermal neutron beams. Due to the lack of corresponding thermal cross section libraries for these materials, which are needed in calculations performed to optimize beams for specific applications, we present here the generation of new thermal neutron scattering kernels for them. The Debye model was used in both cases to represent the vibrational frequency spectra required to feed the NJOY nuclear data processing system in order to produce the corresponding libraries in ENDF and ACE format. These libraries were validated with available experimental data, some from the literature and others obtained at the pulsed neutron source at Centro Atómico Bariloche
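The Debye-model frequency spectrum mentioned in the highlights is simple enough to state in a few lines: ρ(ω) = 3ω²/ω_D³ up to the Debye cutoff ω_D and zero above, here normalized to unity. This is a generic sketch of the model input; NJOY's tabulated spectrum format is not reproduced.

```python
import numpy as np

def debye_spectrum(omega, omega_D):
    """Debye-model vibrational frequency spectrum,
    rho(omega) = 3 * omega**2 / omega_D**3 for omega <= omega_D, else 0,
    normalized so that it integrates to 1 over [0, omega_D]."""
    omega = np.asarray(omega, dtype=float)
    return np.where(omega <= omega_D, 3.0 * omega ** 2 / omega_D ** 3, 0.0)
```

In practice the single parameter ω_D (the Debye temperature in energy units) is all that is needed to generate the scattering law for a simple harmonic solid.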
The slab albedo problem for the triplet scattering kernel with modified F{sub N} method
Energy Technology Data Exchange (ETDEWEB)
Tuereci, Demet [Ministry of Education, 75th year Anatolia High School, Ankara (Turkey)
2016-12-15
The one-speed, time-independent neutron transport equation for slab geometry with a quadratically anisotropic scattering kernel is considered. The albedo and the transmission factor are calculated by the modified F{sub N} method. The numerical results obtained are listed for different scattering coefficients.
Energy Technology Data Exchange (ETDEWEB)
Tuereci, R. Goekhan [Kirikkale Univ. (Turkey). Kirikkale Vocational School]; Tuereci, D. [Ministry of Education, Ankara (Turkey). 75th year Anatolia High School]
2017-11-15
The one-speed, time-independent neutron transport equation for a homogeneous medium is solved with anisotropic scattering that includes both the linearly and the quadratically anisotropic scattering kernels. Having written Case's eigenfunctions and the orthogonality relations among these eigenfunctions, the slab albedo problem is investigated numerically by the modified F{sub N} method. Selected numerical results are presented in tables.
A Stochastic Proof of the Resonant Scattering Kernel and its Applications for Gen IV Reactor Types
International Nuclear Information System (INIS)
Becker, B.; Dagan, R.; Broeders, C.H.M.; Lohnert, G.
2008-01-01
Monte Carlo codes such as MCNP are widely accepted as near-reference tools for reactor analysis. A Monte Carlo code should therefore use as few approximations as possible in order to produce 'experimental-level' calculations. In this study we deal with one of the most problematic approximations made in MCNP, in which the resonances are ignored for the secondary neutron energy distribution, namely in the change of the energy and angular direction of the neutron after interaction with a heavy isotope with pronounced resonances. The endeavour of exploring the influence of the resonances on the scattering kernel goes back to 1944, when E. Wigner and J. Wilkins developed the first temperature-dependent scattering kernel. However, only in 1998 was the full analytical solution for the doubly differential, resonance-dependent scattering kernel suggested by W. Rothenstein and R. Dagan. An independent stochastic approach is presented for the first time to confirm the above analytical kernel with a completely different methodology. Moreover, by manipulating in a subtle manner the scattering subroutine COLIDN of MCNP, it is shown that this subroutine is, to some extent, inappropriate, as is the relevant explanation in the MCNP manual. The impact of this improved resonance-dependent scattering kernel on diverse types of reactors, in particular for the Generation IV innovative HTR core design, is shown to be significant. (authors)
International Nuclear Information System (INIS)
Williams, M.M.R.
1985-01-01
A multigroup formalism is developed for the backward-forward-isotropic scattering model of neutron transport. Some exact solutions are obtained in two-group theory for slab and spherical geometry. The results are useful for benchmark problems involving multigroup anisotropic scattering. (author)
ZZ THERMOS, Multigroup P0 to P5 Thermal Scattering Kernels from ENDF/B Scattering Law Data
International Nuclear Information System (INIS)
McCrosson, F.J.; Finch, D.R.
1975-01-01
1 - Description of problem or function: Number of groups: 30-group THERMOS thermal scattering kernels. Nuclides: Molecular H2O, Molecular D2O, Graphite, Polyethylene, Benzene, Zr bound in ZrHx, H bound in ZrHx, Beryllium-9, Beryllium Oxide, Uranium Dioxide. Origin: ENDF/B library. Weighting Spectrum: yes. These data are 30-group THERMOS thermal scattering kernels for P0 to P5 Legendre orders for every temperature of every material from s(alpha,beta) data stored in the ENDF/B library. These scattering kernels were generated using the FLANGE2 computer code (NESC Abstract 368). To test the kernels, the integral properties of each set of kernels were determined by a precision integration of the diffusion length equation and compared to experimental measurements of these properties. In general, the agreement was very good. Details of the methods used and results obtained are contained in the reference. The scattering kernels are organized into a two-volume magnetic tape library from which they may be retrieved easily for use in any 30-group THERMOS library. The contents of the tapes are as follows - (Material: ZA/Temperatures (degrees K)): Molecular H2O: 100.0/296, 350, 400, 450, 500, 600; Molecular D2O: 101.0/296, 350, 400, 450, 500, 600; Graphite: 6000.0/296, 400, 500, 600, 700, 800; Polyethylene: 205.0/296, 350; Benzene: 106.0/296, 350, 400, 450, 500, 600; Zr bound in ZrHx: 203.0/296, 400, 500, 600, 700, 800; H bound in ZrHx: 230.0/296, 400, 500, 600, 700, 800; Beryllium-9: 4009.0/296, 400, 500, 600, 700, 800; Beryllium Oxide: 200.0/296, 400, 500, 600, 700, 800; Uranium Dioxide: 207.0/296, 400, 500, 600, 700, 800. 2 - Method of solution: Kernel generation is performed by direct integration of the thermal scattering law data to obtain the differential scattering cross sections for each Legendre order. The integral parameter calculation is done by precision integration of the diffusion length equation for several moderator absorption cross sections followed by a
A Calculation of the Angular Moments of the Kernel for a Monatomic Gas Scatterer
Energy Technology Data Exchange (ETDEWEB)
Haakansson, Rune
1964-07-15
B. Davison has given in an unpublished paper a method of calculating the moments of the monatomic gas scattering kernel. We present here this method and apply it to calculate the first four moments. Numerical results for these moments for the masses M = 1 and 3.6 are also given.
International Nuclear Information System (INIS)
Duo, J. I.; Azmy, Y. Y.
2007-01-01
A new method, the Singular Characteristics Tracking algorithm, is developed to account for potential non-smoothness across the singular characteristics in the exact solution of the discrete ordinates approximation of the transport equation. Numerical results show improved rate of convergence of the solution to the discrete ordinates equations in two spatial dimensions with isotropic scattering using the proposed methodology. Unlike the standard Weighted Diamond Difference methods, the new algorithm achieves local convergence in the case of discontinuous angular flux along the singular characteristics. The method also significantly reduces the error for problems where the angular flux presents discontinuous spatial derivatives across these lines. For purposes of verifying the results, the Method of Manufactured Solutions is used to generate analytical reference solutions that permit estimating the local error in the numerical solution. (authors)
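The Method of Manufactured Solutions used above for verification can be illustrated on a much simpler problem. This sketch (our own toy 1-D diffusion example, not the authors' discrete ordinates solver) shows the workflow: pick an exact solution, derive the matching source, then confirm the scheme's convergence order:

```python
import numpy as np

# Minimal sketch of the Method of Manufactured Solutions: pick
# u(x) = sin(pi x), derive the source f = pi^2 sin(pi x) for -u'' = f on
# (0, 1) with u(0) = u(1) = 0, solve with second-order finite differences,
# and measure the error against the known exact solution.
def max_error(n):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)[1:-1]      # interior nodes
    f = np.pi**2 * np.sin(np.pi * x)            # manufactured source term
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

e1, e2 = max_error(20), max_error(40)
order = np.log2(e1 / e2)
print(order)  # near 2 for a second-order scheme
```

Because the exact solution is known by construction, the local error and the observed convergence rate can be measured directly, which is exactly the role MMS plays in the verification study above.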
International Nuclear Information System (INIS)
Adams, J.E.
1979-05-01
The difficulty of applying the WKB approximation to problems involving arbitrary potentials has been confronted. Recent work has produced a convenient expression for the potential correction term. However, this approach does not yield a unique correction term and hence cannot be used to construct the proper modification. An attempt is made to overcome the uniqueness difficulties by imposing a criterion which permits identification of the correct modification. Sections of this work are: semiclassical eigenvalues for potentials defined on a finite interval; reactive scattering exchange kernels; a unified model for elastic and inelastic scattering from a solid surface; and selective absorption on a solid surface
Calculation of the scattering kernel for thermal neutrons in H2O and D2O
International Nuclear Information System (INIS)
Leal, L.C.; Assis, J.T. de
1981-01-01
A computer code, using the Nelkin and Butler models for the calculation of the scattering kernel, was developed. Calculations of the thermal neutron flux in a homogeneous and infinite medium with a 1/v absorber in 30 energy groups were done and compared with experimental data. The reactor parameters calculated by the Hammer code (in the original version and with the new library generated by the authors' code) are presented. (E.G) [pt
International Nuclear Information System (INIS)
Li Heng; Mohan, Radhe; Zhu, X Ronald
2008-01-01
The clinical applications of kilovoltage x-ray cone-beam computed tomography (CBCT) have been compromised by the limited quality of CBCT images, which typically is due to a substantial scatter component in the projection data. In this paper, we describe an experimental method of deriving the scatter kernel of a CBCT imaging system. The estimated scatter kernel can be used to remove the scatter component from the CBCT projection images, thus improving the quality of the reconstructed image. The scattered radiation was approximated as depth-dependent, pencil-beam kernels, which were derived using an edge-spread function (ESF) method. The ESF geometry was achieved with a half-beam block created by a 3 mm thick lead sheet placed on a stack of slab solid-water phantoms. Measurements for ten water-equivalent thicknesses (WET) ranging from 0 cm to 41 cm were taken with (half-blocked) and without (unblocked) the lead sheet, and corresponding pencil-beam scatter kernels or point-spread functions (PSFs) were then derived without assuming any empirical trial function. The derived scatter kernels were verified with phantom studies. Scatter correction was then incorporated into the reconstruction process to improve image quality. For a 32 cm diameter cylinder phantom, the flatness of the reconstructed image was improved from 22% to 5%. When the method was applied to CBCT images for patients undergoing image-guided therapy of the pelvis and lung, the variation in selected regions of interest (ROIs) was reduced from >300 HU to <100 HU. We conclude that the scatter reduction technique utilizing the scatter kernel effectively suppresses the artifact caused by scatter in CBCT.
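Once a pencil-beam scatter kernel is in hand, the correction step amounts to estimating the scatter as a convolution of the (unknown) primary with the kernel and removing it iteratively. A 1-D sketch with toy numbers (our own, not the paper's measured kernels):

```python
import numpy as np

# Illustrative sketch of kernel-based scatter removal: the measured
# projection is modeled as primary + kernel * primary (convolution), and the
# primary is recovered by the fixed-point iteration P <- M - K * P, which
# converges because the kernel's total scatter-to-primary weight is below one.
n = 128
x = np.arange(n) - n // 2
kernel = np.exp(-np.abs(x) / 15.0)
kernel *= 0.3 / kernel.sum()               # scatter-to-primary ratio 0.3

primary_true = np.where(np.abs(x) < 40, 1.0, 0.1)
measured = primary_true + np.convolve(primary_true, kernel, mode='same')

primary = measured.copy()
for _ in range(10):
    primary = measured - np.convolve(primary, kernel, mode='same')

err = np.max(np.abs(primary - primary_true))
print(err)  # shrinks geometrically with the iteration count
```

In the real system the kernel additionally depends on the water-equivalent thickness, so the appropriate depth-dependent kernel is selected per ray before the convolution.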
Proof and implementation of the stochastic formula for ideal gas, energy dependent scattering kernel
International Nuclear Information System (INIS)
Becker, B.; Dagan, R.; Lohnert, G.
2009-01-01
The ideal gas scattering kernel for heavy nuclei with pronounced resonances was developed [Rothenstein, W., Dagan, R., 1998. Ann. Nucl. Energy 25, 209-222], proved and implemented [Rothenstein, W., 2004. Ann. Nucl. Energy 31, 9-23] in the data processing code NJOY [MacFarlane, R.E., Muir, D.W., 1994. The NJOY Nuclear Data Processing System Version 91, LA-12740-M], from which the scattering probability tables were prepared [Dagan, R., 2005. Ann. Nucl. Energy 32, 367-377]. Those tables were introduced to the well-known MCNP code [X-5 Monte Carlo Team. MCNP - A General Monte Carlo N-Particle Transport Code version 5, LA-UR-03-1987] via the 'mt' input cards, in the same manner as is done for light nuclei in the thermal energy range. In this study we present an alternative methodology for solving the doubly differential, energy-dependent scattering kernel, which is based solely on stochastic considerations as far as the scattering probabilities are concerned. The solution scheme is based on an alternative rejection scheme suggested by Rothenstein [Rothenstein, W., ENS conference 1994, Tel Aviv]. Based on comparison with the above-mentioned analytical (probability S(α,β)-table) approach, it is confirmed that the suggested rejection scheme provides accurate results. The uncertainty concerning the magnitude of the bias due to the enhanced multiple rejections during the sampling procedure is shown to lie within 1-2 standard deviations for all practical cases that were analysed.
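The flavor of such a rejection scheme can be shown in a deliberately simplified form. This 1-D sketch (our own toy resonance and units, not the NJOY/MCNP implementation) draws target velocities from a Maxwellian and adds a rejection step weighted by the 0-K cross section at the relative speed:

```python
import numpy as np

# Schematic rejection scheme: candidate target velocities are drawn from a
# Maxwellian, and an extra rejection step weights each candidate by
# sigma_0K(v_rel) / sigma_max, so a resonance in the 0-K cross section at the
# relative speed reshapes the effective scattering kernel.
rng = np.random.default_rng(0)

def sigma_0K(v_rel):
    # hypothetical resonance centered at v_rel = 2.0 (arbitrary units)
    return 1.0 + 10.0 / (1.0 + ((v_rel - 2.0) / 0.1)**2)

v_neutron = 2.0
sigma_max = 11.0                      # upper bound of sigma_0K
accepted = []
while len(accepted) < 5000:
    v_t = rng.normal(0.0, 0.5)        # 1-D Maxwellian target velocity component
    v_rel = abs(v_neutron - v_t)
    if rng.random() < sigma_0K(v_rel) / sigma_max:
        accepted.append(v_t)
accepted = np.array(accepted)

# Acceptance is strongly enhanced for targets that put v_rel on the resonance
# (here v_t near 0), which narrows the accepted velocity distribution.
print(accepted.mean(), accepted.std())
```

The bias discussed in the abstract arises because many candidates are rejected when the resonance peak is narrow; the sharper the resonance, the lower the acceptance efficiency.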
A scatter model for fast neutron beams using convolution of diffusion kernels
International Nuclear Information System (INIS)
Moyers, M.F.; Horton, J.L.; Boyer, A.L.
1988-01-01
A new model is proposed to calculate dose distributions in materials irradiated with fast neutron beams. Scattered neutrons are transported away from the point of production within the irradiated material in the forward, lateral and backward directions, while recoil protons are transported in the forward and lateral directions. The calculation of dose distributions, such as for radiotherapy planning, is accomplished by convolving a primary attenuation distribution with a diffusion kernel. The primary attenuation distribution may be quickly calculated for any given set of beam and material conditions as it describes only the magnitude and distribution of first interaction sites. The calculation of energy diffusion kernels is very time consuming but must be calculated only once for a given energy. Energy diffusion distributions shown in this paper have been calculated using a Monte Carlo type of program. To decrease beam calculation time, convolutions are performed using a Fast Fourier Transform technique. (author)
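The FFT shortcut mentioned above rests on the convolution theorem: a spatial convolution of the primary-attenuation map with the diffusion kernel becomes a pointwise product in frequency space. A 2-D sketch with toy numbers (our own, not the authors' Monte Carlo kernels):

```python
import numpy as np

# Minimal sketch: a 2-D dose distribution computed by convolving a
# primary-attenuation map with an energy diffusion kernel via FFT, turning an
# O(N^2)-per-point convolution into an O(N log N) operation overall.
n = 64
y, x = np.mgrid[0:n, 0:n]
primary = np.exp(-0.1 * y) * (np.abs(x - n // 2) < 10)   # attenuating beam
r2 = (x - n // 2)**2 + (y - n // 2)**2
kernel = np.exp(-r2 / 20.0)
kernel /= kernel.sum()

dose_fft = np.real(np.fft.ifft2(np.fft.fft2(primary) *
                                np.fft.fft2(np.fft.ifftshift(kernel))))

# Direct (slow) check at one voxel, with the same circular boundary
# conditions that the FFT implies:
i, j = 30, 30
direct = sum(primary[a, b] * kernel[(i - a + n // 2) % n, (j - b + n // 2) % n]
             for a in range(n) for b in range(n))
print(abs(dose_fft[i, j] - direct))  # agreement up to roundoff
```

The FFT version implies periodic boundaries, so in practice the grid is zero-padded to avoid wrap-around of the kernel tails.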
Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc
2015-07-07
The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient
International Nuclear Information System (INIS)
Valente, Mauro; Botta, Francesca; Pedroli, Guido
2012-01-01
Beta-emitters have proved to be appropriate for radioimmunotherapy. The dosimetric characterization of each radionuclide has to be carefully investigated. One usual and practical dosimetric approach is the calculation of the dose distribution from a unit point source emitting particles according to any radionuclide of interest, which is known as the dose point kernel. Absorbed dose distributions are due to primary and scattered radiation contributions. This work presents a method capable of computing dose distributions for nuclear medicine dosimetry by means of Monte Carlo methods. Dedicated subroutines have been developed in order to compute separately the primary and scattering contributions to the total absorbed dose, performing particle transport down to 1 keV or lower. The suitability of the calculation method was first verified for monoenergetic sources, with satisfactory results, and it was then applied to the characterization of different beta-minus radionuclides of interest for radioimmunotherapy in nuclear medicine. (author)
International Nuclear Information System (INIS)
Radhakrishnan, G.
2003-01-01
Full text: Around the PFBR (Prototype Fast Breeder Reactor) reactor assembly, special concretes of density 2.4 g/cm3 and 3.6 g/cm3 are to be used in complex geometrical shapes in the peripheral shields. A point-kernel computer code like QAD-CGGP, written for complex shield geometries, is well suited to the design optimization of the peripheral shields. QAD-CGGP requires a data base of buildup factor data, and it contains only ordinary concrete of density 2.3 g/cm3. In order to extend the data base to the PFBR special concretes, point isotropic source dose buildup factors have been generated by the Monte Carlo method using the computer code MCNP-4A. For the above-mentioned special concretes, buildup factor data have been generated in the energy range 0.5 MeV to 10.0 MeV, with thicknesses ranging from 1 mean free path (mfp) to 40 mfp. A fit of the buildup factor data to Capo's formula, compatible with QAD-CGGP, has been attempted
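Fitting Monte Carlo buildup data to a compact formula for use in a point-kernel code can be sketched as a least-squares problem. The data and coefficients below are synthetic placeholders (our own illustration of a Capo-type polynomial fit in penetration depth, not the PFBR concrete data):

```python
import numpy as np

# Schematic Capo-type fit: the dose buildup factor B is represented as a
# low-order polynomial in the penetration depth mu*x, expressed in mean free
# paths (mfp), obtained by least squares.
mfp = np.arange(1.0, 41.0)                         # 1 to 40 mfp
B_data = 1.0 + 0.9 * mfp + 0.02 * mfp**2           # synthetic "MCNP" buildup
B_data = B_data * (1.0 + 0.01 * np.sin(mfp))       # small statistical ripple

coeffs = np.polyfit(mfp, B_data, 3)                # cubic polynomial fit
B_fit = np.polyval(coeffs, mfp)
rel_err = np.max(np.abs(B_fit - B_data) / B_data)
print(rel_err)  # small over the fitted range
```

One such set of coefficients per source energy and material is then all the point-kernel code needs to store.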
Scattering of obliquely incident standing wave by a rotating transversely isotropic cylinder
CSIR Research Space (South Africa)
Shatalov, MY
2006-05-01
Full Text Available: 1. CSIR Material Science..., Tshwane University of Technology, South Africa. 2. CSIR Material Science and Manufacturing. Abstract: It is known that vibrating patterns of an isotropic cylinder, subjected to inertial rotation over the symmetry axis, precess in the direction...
Multi-parameter Analysis and Inversion for Anisotropic Media Using the Scattering Integral Method
Djebbi, Ramzi
2017-01-01
the model. I study the prospect of applying a scattering integral approach for multi-parameter inversion for a transversely isotropic model with a vertical axis of symmetry. I mainly analyze the sensitivity kernels to understand the sensitivity of seismic
Li, Mingda; Cui, Wenping; Dresselhaus, M. S.; Chen, Gang; MIT Team; Boston College Team
Crystal dislocations govern the plastic mechanical properties of materials but also affect the electrical and optical properties. However, a fundamental and tractable quantum-mechanical theory of dislocations has remained undiscovered for decades. Here we present an exact and manageable Hamiltonian theory for both edge and screw dislocation lines in an isotropic medium, where the effective Hamiltonian of a single dislocation line can be written in a harmonic-oscillator-like form, with a closed-form quantized 1D phonon-like excitation. Moreover, a closed-form, position-dependent electron-dislocation coupling strength is obtained, from which we obtain good agreement of the relaxation time when comparing with classical results. This Hamiltonian provides a platform to study the effect of dislocations on materials' non-mechanical properties at a fundamental Hamiltonian level.
International Nuclear Information System (INIS)
Moufekkir, F.; Moussaoui, M.A.; Mezrhab, A.; Naji, H.; Lemonnier, D.
2012-01-01
This paper deals with the numerical solution of natural convection and volumetric radiation in an isotropically scattering medium within a heated square cavity using a hybrid thermal lattice Boltzmann method (HTLBM). The multiple-relaxation-time lattice Boltzmann method (MRT-LBM) has been coupled to the finite difference method (FDM) to solve the momentum and energy equations, while the discrete ordinates method (DOM) has been adopted to solve the radiative transfer equation (RTE) using the S8 quadrature. Based on these approaches, the effects of various influencing parameters, such as the Rayleigh number (Ra), the wall emissivity (ε), the Planck number (Pl) and the scattering albedo (ω), have been considered. The results, presented in terms of isotherms, streamlines and averaged Nusselt number, show that in the absence of radiation the temperature and flow fields are centro-symmetric and the cavity core is thermally stratified. However, radiation causes an overall increase in the temperature and velocity gradients along both thermally active walls. The maximum heat transfer rate is obtained when the surfaces of the enclosure walls are regarded as blackbodies. It is also seen that the scattering medium can generate a multicellular flow.
International Nuclear Information System (INIS)
Mueller, D.W.; Crosbie, A.L.
2005-01-01
The topic of this work is the generalized X- and Y-functions of multidimensional radiative transfer. The physical problem considered is spatially varying, collimated radiation incident on the upper boundary of an isotropically scattering, plane-parallel medium. An integral transform is used to reduce the three-dimensional transport equation to a one-dimensional form, and a modified Ambarzumian's method is used to derive coupled, integro-differential equations for the source functions at the boundaries of the medium. The resulting equations are said to be in double-integral form because the integration is over both angular variables. Numerical results are presented to illustrate the computational characteristics of the formulation
Calculation of the radiance distribution at the boundary of an isotropically scattering slab
Doosje, M; Hoenders, B.J; Rinzema, K.
The radiance arising from an anisotropically scattering illuminated stack of n slabs is calculated using the equation of radiative transfer. It appears to be unnecessary to calculate the radiance inside the material; including only the radiance at the boundary surfaces is sufficient to obtain the
Energy Technology Data Exchange (ETDEWEB)
Jonsson, Jacob C.; Branden, Henrik
2006-10-19
This paper demonstrates a method to determine the bidirectional transfer distribution function (BTDF) using an integrating sphere. Information about the sample's angle-dependent scattering is obtained by making transmittance measurements with the sample at different distances from the integrating sphere. Knowledge about the illuminated area of the sample and the geometry of the sphere port, in combination with the measured data, leads to a system of equations that includes the angle-dependent transmittance. The resulting system of equations is an ill-posed problem which rarely gives a physical solution. A solvable system is obtained by applying Tikhonov regularization to the ill-posed problem. The solution to this system can then be used to obtain the BTDF. Four bulk-scattering samples were characterised using both two goniophotometers and the described method to verify the validity of the new method. The agreement shown is very good for the more diffuse samples. The solution for the low-scattering samples contains unphysical oscillations, but still gives the correct shape of the solution. The origin of the oscillations and why they are more prominent in low-scattering samples are discussed.
Angle-domain inverse scattering migration/inversion in isotropic media
Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan
2018-07-01
The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly related to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, via division by an illumination-associated matrix whose elements are integrals over scattering angles. To some extent it is intuitive to perform the generalized linear inversion and the inversion of the GRT together through this process for direct inversion. However, it is imprecise to carry out such an operation when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally removes the external integral term related to the illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop, which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. Then we deal with the over-determined problem to solve for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate its effectiveness and practicability.
Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad
2015-01-01
Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations, which has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress and a few practical issues, providing accurate palm vein readings has remained an unsolved issue in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of WS generated features is quite large, SRKDA is required to reduce the extracted features to enhance the discrimination. The results based on two public databases-PolyU Hyper Spectral Palmprint public database and PolyU Multi Spectral Palmprint-show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER)=0.1%] for the hyperspectral database and a 99.97% identification rate and a 99.98% verification rate (EER=0.019%) for the multispectral database.
International Nuclear Information System (INIS)
Namjoo, A.; Sarvari, S.M. Hosseini; Behzadmehr, A.; Mansouri, S.H.
2009-01-01
In this paper, an inverse analysis is performed for the estimation of the source term distribution from measured exit radiation intensities at the boundary surfaces in a one-dimensional absorbing, emitting and isotropically scattering medium between two parallel plates with variable refractive index. The variation of the refractive index is assumed to be linear. The radiative transfer equation is solved by the constant-quadrature discrete ordinates method. The inverse problem is formulated as an optimization problem for minimizing an objective function expressed as the sum of square deviations between measured and estimated exit radiation intensities at the boundary surfaces. The conjugate gradient method is used to solve the inverse problem through an iterative procedure. The effects of various variables on the source estimation are investigated, such as the type of source function, errors in the measured data and system parameters, the gradient of the refractive index across the medium, the optical thickness, the single scattering albedo and the boundary emissivities. The results show that in the case of noisy input data, variation of the system parameters may affect the inverse solution, especially at high error levels in the measured data. The error in the measured data plays a more important role than the error in the radiative system parameters, except for the refractive index distribution; the accuracy of the source estimation is very sensitive to error in the refractive index distribution. Therefore, the refractive index distribution and the measured exit intensities should be determined accurately, within a limited error bound, in order to have an accurate estimation of the source term in a graded index medium.
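The conjugate-gradient minimization of a sum-of-squares objective can be sketched on a toy linear forward model (our own stand-in, not the paper's radiative transfer solver):

```python
import numpy as np

# Toy inverse-source sketch: estimate a 1-D source profile s from "measured"
# exit intensities m = G s by minimizing F(s) = ||G s - m||^2, i.e. linear
# conjugate gradients applied to the normal equations G^T G s = G^T m.
n = 20
depth = np.arange(n, dtype=float)
G = np.exp(-0.2 * np.abs(depth[:, None] - depth[None, :]))  # attenuation-like
s_true = np.exp(-((depth - 8.0) / 3.0)**2)                  # Gaussian source
m = G @ s_true                                              # exact measurements

A, b = G.T @ G, G.T @ m
s = np.zeros(n)
r = b - A @ s
p = r.copy()
for _ in range(500):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    s = s + alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-12:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new

err = np.max(np.abs(s - s_true))
print(err)
```

In the actual inverse analysis the forward map is the discrete ordinates solution rather than a fixed matrix, so the gradient of the objective is evaluated by additional (adjoint or sensitivity) transport solves at each iteration.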
International Nuclear Information System (INIS)
Czachor, A.; Al-Wahsh, H.
1999-01-01
Complete text of publication follows. To determine the neutron inelastic coherent scattering (MS) cross section for disordered magnets, a system of equations of motion for the Green functions (GF) related to the localized-spin correlation functions has been exploited. The higher-order Green functions are decoupled using a symmetric 'equal access' (EA) form of the RPA decoupling scheme. The quasi-crystal approximation (QCA) was applied to construct the space-time Fourier-transformed GF G_Q(ω) related to neutron scattering. On assuming isotropy of the magnetic structure and a short-range coupling between the spins (on-the-sphere approximation, OSA), we have found an explicit analytic form of this function. The poles of G_Q(ω) determine the dispersion relation ω = ω_Q for elementary excitations, as they are seen in the MS experiment: the positions of the MS profile maxima in the ω-Q space. The single formula for the dispersion relations derived here covers a variety of isotropic spin structures: in particular, disordered 'longitudinal' ferromagnets (ω ∼ Q^z, Q→0), disordered 'transverse' spin structures (ω ∼ Q, Q→0), and some intermediate cases. For a system of spins coupled identically, the magnetization and the magnetic susceptibility calculated within the present EA-RPA approach agree with the results of exact calculations. This provides an interesting insight into the nature of the RPA treatment of the localized spin dynamics. (author)
Directory of Open Access Journals (Sweden)
JONG WOON KIM
2014-04-01
In this paper, we introduce a modified scattering kernel approach to avoid the unnecessarily repeated calculations involved in the scattering source calculation, and use it with parallel computing to effectively reduce the computation time. Its computational efficiency was tested for three-dimensional, fully coupled photon-electron transport problems using our computer program, which solves the multigroup discrete ordinates transport equation by the discontinuous finite element method with unstructured tetrahedral meshes for geometrically complicated problems. The numerical tests show that the elapsed time per iteration can be improved by a factor of 17∼42 with the modified scattering kernel, not only in single-CPU calculations but also in parallel computing with several CPUs.
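The core idea of avoiding repeated scattering-source work can be illustrated with a toy multigroup source iteration (our own random data and an infinite-medium balance, not the paper's discontinuous finite element solver), where the iteration-invariant group-to-group matrix is assembled once outside the loop:

```python
import numpy as np

# Toy multigroup source iteration illustrating precomputation: the
# group-to-group scattering matrix is assembled once, outside the iteration
# loop, instead of being rebuilt in every pass.
rng = np.random.default_rng(3)
G = 8                                      # number of energy groups
sigma_s = 0.1 * rng.random((G, G))         # g' -> g scattering matrix (fixed)
sigma_t = sigma_s.sum(axis=0) + 0.5        # total = scattering out + capture
q_ext = np.ones(G)                         # external source

# Infinite-medium source iteration: phi <- (q_ext + S phi) / sigma_t.
phi = np.zeros(G)
for _ in range(200):
    phi = (q_ext + sigma_s @ phi) / sigma_t   # reuses the precomputed matrix

balance = sigma_t * phi - sigma_s @ phi - q_ext   # converged balance residual
print(np.max(np.abs(balance)))
```

In a full discrete ordinates code the corresponding savings come from precomputing the angular-moment and group-coupling factors of the scattering kernel once per problem rather than once per sweep.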
Directory of Open Access Journals (Sweden)
Daqing Piao
2014-12-01
Full Text Available Recent focused Monte Carlo and experimental studies on steady-state single-fiber reflectance spectroscopy (SfRS) from a biologically relevant scattering medium have revealed that, as the dimensionless reduced scattering of the medium increases, the SfRS intensity increases monotonically until reaching a plateau. The SfRS signal is semi-empirically decomposed into the product of three contributing factors, including a ratio-of-remission (RoR) term that refers to the ratio of photons remitting from the medium and crossing the fiber-medium interface over the total number of photons launched into the medium. The RoR is expressed with respect to the dimensionless reduced scattering parameter μs′df, where μs′ is the reduced scattering coefficient of the medium and df is the diameter of the probing fiber. We develop in this work, under the assumption of an isotropically scattering medium, a method of analytical treatment that indicates the pattern of the RoR as a function of the dimensionless reduced scattering of the medium. The RoR is derived in four cases, corresponding to in-medium (applied to interstitial probing of biological tissue) or surface-based (applied to contact probing of biological tissue) SfRS measurements using a straight-polished or angle-polished fiber. The analytically derived surface-probing RoR corresponding to single-fiber probing using a 15° angle-polished fiber agrees, over the examined range of μs′df, with previously reported experimental measurements from a similarly configured scattering medium that has a Henyey-Greenstein scattering phase function with an anisotropy factor of 0.8. In cases of a medium scattering light anisotropically, we propose how the treatment may be furthered to account for the scattering anisotropy using the result of a study of light scattering close to the point-of-entry by Vitkin et al. (Nat. Commun. 2011, doi:10.1038/ncomms1599).
Anisotropic kernel p(μ → μ') for transport calculations of elastically scattered neutrons
International Nuclear Information System (INIS)
Stevenson, B.
1985-01-01
Literature in the area of anisotropic neutron scattering is by no means lacking. Attention, however, is usually devoted to the solution of some particular neutron transport problem, and the model employed is at best approximate. The present approach to the problem in general is classically exact and may be of value to individuals seeking exact numerical results in transport calculations. For neutrons originally directed along the unit vector Omega, it attempts the evaluation of p(theta'), defined such that p(theta') d theta' is the fraction of scattered neutrons that emerges in the vicinity of a cone, i.e., having been scattered to between angles theta' and theta' + d theta' with the axis of preferred orientation i; Omega makes an angle theta with i. The relative simplicity of the final form of the solution for hydrogen, in spite of the complicated nature of the limits involved, is a trade-off that is not truly necessary. The exact general solution presented here in integral form has exceedingly simple limits, i.e., 0 ≤ theta' ≤ π regardless of the material involved, but the form of the final solution is extraordinarily complicated.
International Nuclear Information System (INIS)
Kitsos, S.; Assad, A.; Diop, C.M.; Nimal, J.C.
1994-01-01
Exposure and energy absorption buildup factors for aluminum, iron, lead, and water are calculated by the SNID discrete ordinates code for an isotropic point source in a homogeneous medium. The calculation of the buildup factors takes into account the effects of both bound-electron Compton (incoherent) and coherent (Rayleigh) scattering. A comparison with buildup factors from the literature shows that these two effects greatly increase the buildup factors for energies below a few hundred kiloelectronvolts, and thus the new results agree better with experiment. This greater accuracy is due to the increase in the linear attenuation coefficient, so that a given number of mean free paths corresponds to a smaller shield thickness. On the other hand, for the same shield thickness, exposure increases when only incoherent scattering is included and decreases when only coherent scattering is included, so that the exposure finally decreases when both effects are included. Great care must also be taken when checking the approximations for gamma-ray deep-penetration transport calculations, as well as the cross-section treatment and origin.
International Nuclear Information System (INIS)
Chandran, Benjamin D. G.
2000-01-01
Theoretical studies of magnetohydrodynamic (MHD) turbulence and observations of solar wind fluctuations suggest that MHD turbulence in the interstellar medium is anisotropic at small scales, with smooth variations along the background magnetic field and sharp variations perpendicular to the background field. Turbulence with this anisotropy is inefficient at scattering cosmic rays, and thus the scattering rate ν may be smaller than has been traditionally assumed in diffusion models of Galactic cosmic-ray propagation, at least for cosmic-ray energies E above 10¹¹-10¹² eV at which self-confinement is not possible. In this paper, it is shown that Galactic cosmic rays can be effectively confined through magnetic reflection by molecular clouds, even when turbulent scattering is weak. Elmegreen's quasi-fractal model of molecular-cloud structure is used to argue that a typical magnetic field line passes through a molecular cloud complex once every ∼300 pc. Once inside the complex, the field line will in most cases be focused into one or more dense clumps in which the magnetic field can be much stronger than the average field in the intercloud medium (ICM). Cosmic rays following field lines into cloud complexes are most often magnetically reflected back into the ICM, since strong-field regions act as magnetic mirrors. For a broad range of cosmic-ray energies, a cosmic ray initially following some particular field line separates from that field line sufficiently slowly that the cosmic ray can be trapped between neighboring cloud complexes for long periods of time. The suppression of cosmic-ray diffusion due to magnetic trapping is calculated in this paper with the use of phenomenological arguments, asymptotic analysis, and Monte Carlo particle simulations. Formulas for the coefficient of diffusion perpendicular to the Galactic disk are derived for several different parameter regimes within the E-ν plane. In one of these parameter regimes in which scattering is weak, it
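The mirror-reflection argument above can be checked with a few lines of code: for an isotropic pitch-angle distribution entering a region where the field rises from B_ICM to B_max, adiabatic invariance of sin²α/B implies that a fraction √(1 − B_ICM/B_max) of the particles is reflected. A small sketch (the field ratio is illustrative, not taken from the paper):

```python
import math
import random

def mirrored_fraction(b_ratio):
    """Fraction of an isotropic pitch-angle distribution reflected by a magnetic
    mirror with B_max / B_ICM = b_ratio. Adiabatic invariance conserves
    sin^2(alpha) / B, so particles with sin^2(alpha) > 1 / b_ratio mirror."""
    return math.sqrt(1.0 - 1.0 / b_ratio)

# Monte Carlo check: sample cos(alpha) uniformly (isotropic distribution, one hemisphere)
random.seed(2)
b_ratio, n, reflected = 10.0, 100_000, 0
for _ in range(n):
    cos_a = random.random()
    if 1.0 - cos_a * cos_a > 1.0 / b_ratio:   # sin^2(alpha) exceeds the loss-cone limit
        reflected += 1
frac_mc = reflected / n
```

For a tenfold field enhancement, roughly 95% of an isotropic population is reflected, which is why even modest clump fields act as effective mirrors.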
Energy Technology Data Exchange (ETDEWEB)
Valente, Mauro [CONICET - Consejo Nacional de Investigaciones Cientificas y Tecnicas de La Republica Argentina (CONICET), Buenos Aires (Argentina)]; Botta, Francesca; Pedroli, Guido [European Institute of Oncology, Milan (Italy). Medical Physics Department]; Perez, Pedro, E-mail: valente@famaf.unc.edu.ar [Universidad Nacional de Cordoba, Cordoba (Argentina). Fac. de Matematica, Astronomia y Fisica (FaMAF)]
2012-07-01
Beta-emitters have proved to be appropriate for radioimmunotherapy, and the dosimetric characterization of each radionuclide has to be carefully investigated. One usual and practical dosimetric approach is the calculation of the dose distribution from a unit point source emitting particles according to any radionuclide of interest, known as the dose point kernel. Absorbed dose distributions are due to primary and scattered radiation contributions. This work presents a method for calculating dose distributions for nuclear medicine dosimetry by means of Monte Carlo methods. Dedicated subroutines have been developed to separately compute the primary and scatter contributions to the total absorbed dose, performing particle transport down to 1 keV or lower. The suitability of the calculation method was first verified for monoenergetic sources, and it was then applied to the characterization of different beta-minus radionuclides of nuclear medicine interest for radioimmunotherapy. (author)
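The idea of tallying primary and scatter contributions separately can be sketched with a deliberately crude toy Monte Carlo (isotropic re-emission, a fixed local deposition fraction, and hypothetical coefficients — not the authors' transport code):

```python
import math
import random

random.seed(1)

MU = 0.1                  # hypothetical total attenuation coefficient, 1/mm
ALBEDO = 0.6              # hypothetical survival probability per collision
N_SHELLS, DR = 50, 1.0    # spherical tally shells of thickness DR mm
primary = [0.0] * N_SHELLS
scatter = [0.0] * N_SHELLS

def isotropic():
    """Sample an isotropic unit direction."""
    mu = 2.0 * random.random() - 1.0
    phi = 2.0 * math.pi * random.random()
    s = math.sqrt(1.0 - mu * mu)
    return (s * math.cos(phi), s * math.sin(phi), mu)

for _ in range(20000):
    pos = [0.0, 0.0, 0.0]
    d = isotropic()
    energy, collided = 1.0, False
    while energy > 1e-3:
        step = -math.log(random.random()) / MU     # exponential free path
        pos = [p + step * u for p, u in zip(pos, d)]
        r = math.sqrt(sum(p * p for p in pos))
        shell = int(r / DR)
        if shell >= N_SHELLS:
            break                                   # escaped the tally region
        deposit = energy * (1.0 - ALBEDO)           # crude local deposition
        (scatter if collided else primary)[shell] += deposit
        energy -= deposit
        collided = True
        d = isotropic()                             # isotropic re-emission
```

Dividing each shell tally by its shell volume would give a (toy) dose point kernel, with the primary and scatter components available separately as in the abstract.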
Remizovich, V. S.
2010-06-01
It is commonly accepted that the Schwarzschild-Schuster two-flux approximation (1905, 1914) can be employed only for the calculation of the energy characteristics of the radiation field (energy density and energy flux density) and cannot be used to characterize the angular distribution of the radiation field. However, such an inference is not valid. In several cases, one can calculate the radiation intensity inside matter and the reflected radiation with the aid of this simplest approximation in transport theory. In this work, we use the results of the simplest one-parameter variant of the two-flux approximation to calculate the angular distribution (reflection function) of the radiation reflected by a semi-infinite, isotropically scattering, dissipative medium when a relatively broad beam is incident on the medium at an arbitrary angle relative to the surface. We do not employ the invariance principle and demonstrate that the reflection function exhibits the multiplicative property: it can be represented as a product of three functions, the reflection function corresponding to single scattering and two identical h functions, which have the same physical meaning as the Ambartsumyan-Chandrasekhar function (H). This circumstance allows a relatively easy derivation of simple analytical expressions for the H function, the total reflectance, and the reflection function. We can easily determine the relative contribution of true single scattering to the photon backscattering at an arbitrary probability of photon survival Λ. We compare all of the parameters of the backscattered radiation with the data resulting from calculations using the exact theory of Ambartsumyan, Chandrasekhar, et al., which was developed decades after the two-flux approximation. Thus, we avoid the application of fine mathematical methods (the Wiener-Hopf method, the Case method of singular functions, etc.) and obtain simple analytical expressions for the parameters of the scattered radiation.
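For isotropic scattering with single-scattering albedo Λ, the Ambartsumyan-Chandrasekhar H function mentioned above satisfies the standard nonlinear integral equation H(μ) = 1 + μH(μ)∫₀¹ (Λ/2) H(μ′)/(μ + μ′) dμ′, which a simple fixed-point iteration can solve numerically (an independent sketch for comparison purposes, not the paper's two-flux derivation):

```python
import numpy as np

def h_function(albedo, n=200, iters=500):
    """Solve H(mu) = 1 + mu*H(mu) * int_0^1 (albedo/2) H(mu')/(mu+mu') dmu'
    by fixed-point iteration on a midpoint quadrature grid."""
    mu = (np.arange(n) + 0.5) / n          # midpoint nodes on (0, 1)
    w = 1.0 / n                            # quadrature weight
    H = np.ones(n)
    for _ in range(iters):
        integral = (albedo / 2.0) * ((H * w) / (mu[:, None] + mu[None, :])).sum(axis=1)
        H = 1.0 / (1.0 - mu * integral)    # rearranged fixed-point form
    return mu, H

mu, H = h_function(0.9)                    # Lambda = 0.9 as an example
```

H grows monotonically from 1 at grazing directions to roughly 2 at μ = 1 for Λ = 0.9, consistent with the tabulated exact solutions the abstract compares against.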
Djebbi, Ramzi; Alkhalifah, Tariq Ali
2013-01-01
Anisotropy is an inherent character of the Earth's subsurface and should be considered in modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, especially its kinematic characteristics. To analyze which parts of the model affect the traveltime for anisotropic traveltime inversion methods, especially wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier-domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation-based kernels. Focusing on diving waves, which are used increasingly often in waveform inversion, we show sensitivity kernels in anisotropic media for this case.
The isotropic radio background revisited
Energy Technology Data Exchange (ETDEWEB)
Fornengo, Nicolao; Regis, Marco [Dipartimento di Fisica Teorica, Università di Torino, via P. Giuria 1, I–10125 Torino (Italy); Lineros, Roberto A. [Instituto de Física Corpuscular – CSIC/U. Valencia, Parc Científic, calle Catedrático José Beltrán, 2, E-46980 Paterna (Spain); Taoso, Marco, E-mail: fornengo@to.infn.it, E-mail: rlineros@ific.uv.es, E-mail: regis@to.infn.it, E-mail: taoso@cea.fr [Institut de Physique Théorique, CEA/Saclay, F-91191 Gif-sur-Yvette Cédex (France)
2014-04-01
We present an extensive analysis on the determination of the isotropic radio background. We consider six different radio maps, ranging from 22 MHz to 2.3 GHz and covering a large fraction of the sky. The large scale emission is modeled as a linear combination of an isotropic component plus the Galactic synchrotron radiation and thermal bremsstrahlung. Point-like and extended sources are either masked or accounted for by means of a template. We find a robust estimate of the isotropic radio background, with limited scatter among different Galactic models. The level of the isotropic background lies significantly above the contribution obtained by integrating the number counts of observed extragalactic sources. Since the isotropic component dominates at high latitudes, thus making the profile of the total emission flat, a Galactic origin for such excess appears unlikely. We conclude that, unless a systematic offset is present in the maps, and provided that our current understanding of the Galactic synchrotron emission is reasonable, extragalactic sources well below the current experimental threshold seem to account for the majority of the brightness of the extragalactic radio sky.
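The template decomposition described above reduces, per frequency map, to a linear least-squares problem with an isotropic (constant) component. A minimal sketch on synthetic data (the templates and coefficients below are invented for illustration, not the paper's maps):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sky: brightness = a*synchrotron + b*free-free + isotropic offset + noise
n_pix = 5000
synch = rng.lognormal(mean=0.0, sigma=0.5, size=n_pix)      # hypothetical synchrotron template
freefree = rng.lognormal(mean=-1.0, sigma=0.5, size=n_pix)  # hypothetical bremsstrahlung template
sky = 3.0 * synch + 1.5 * freefree + 2.0                    # "true" isotropic offset = 2.0
sky += rng.normal(scale=0.05, size=n_pix)                   # measurement noise

# Linear least squares: columns = [synchrotron, free-free, constant (isotropic)]
A = np.column_stack([synch, freefree, np.ones(n_pix)])
coef, *_ = np.linalg.lstsq(A, sky, rcond=None)
a_fit, b_fit, offset_fit = coef
```

With enough pixels the constant term is recovered accurately, which is the sense in which the isotropic background can be separated from the Galactic templates.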
Interactively variable isotropic resolution in computed tomography
International Nuclear Information System (INIS)
Lapp, Robert M; Kyriakou, Yiannis; Kachelriess, Marc; Wilharm, Sylvia; Kalender, Willi A
2008-01-01
An individual balancing between spatial resolution and image noise is necessary to fulfil the diagnostic requirements in medical CT imaging. In order to change influencing parameters, such as reconstruction kernel or effective slice thickness, additional raw-data-dependent image reconstructions have to be performed. Therefore, the noise versus resolution trade-off is time consuming and not interactively applicable. Furthermore, isotropic resolution, expressed by an equivalent point spread function (PSF) in every spatial direction, is important for the undistorted visualization and quantitative evaluation of small structures independent of the viewing plane. Theoretically, isotropic resolution can be obtained by matching the in-plane and through-plane resolution with the aforementioned parameters. Practically, however, the user is not assisted in doing so by current reconstruction systems and therefore isotropic resolution is not commonly achieved, in particular not at the desired resolution level. In this paper, an integrated approach is presented for equalizing the in-plane and through-plane spatial resolution by image filtering. The required filter kernels are calculated from previously measured PSFs in the x/y- and z-directions. The concepts derived are combined with a variable resolution filtering technique. Both approaches are independent of CT raw data and operate only on reconstructed images, which allows for their application in real time. Thereby, the aim of interactively variable, isotropic resolution is achieved. Results were evaluated quantitatively by measuring PSFs and image noise, and qualitatively by comparing the images to direct reconstructions regarded as the gold standard. Filtered images matched direct reconstructions with arbitrary reconstruction kernels with standard deviations in difference images of typically between 1 and 17 HU. Isotropic resolution was achieved within 5% of the selected resolution level. Processing times were 20-100 ms per frame.
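The PSF-matching step rests on the fact that variances of Gaussian kernels add under convolution, so the through-plane direction can be filtered with σ_extra = √(σ_target² − σ_z²). A minimal sketch with hypothetical PSF widths (assuming Gaussian PSFs, which the paper itself does not require):

```python
import numpy as np

def gaussian_kernel(sigma, radius=40):
    """Normalized, symmetric, discretely sampled Gaussian kernel."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def measured_sigma(profile):
    """Standard deviation of a normalized, centered discrete PSF."""
    x = np.arange(len(profile)) - (len(profile) - 1) / 2.0
    p = profile / profile.sum()
    return np.sqrt((p * x ** 2).sum())

sigma_xy, sigma_z = 2.0, 1.2        # hypothetical measured PSF widths (pixels)
target = sigma_xy                   # match through-plane to in-plane resolution
sigma_extra = np.sqrt(target ** 2 - sigma_z ** 2)

psf_z = gaussian_kernel(sigma_z)
psf_iso = np.convolve(psf_z, gaussian_kernel(sigma_extra), mode="full")
```

Because the extra filter acts only on reconstructed images, this kind of matching can run interactively, as the abstract emphasizes.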
Energy Technology Data Exchange (ETDEWEB)
Whetstone, Zachary D., E-mail: zacwhets@umich.edu; Flaska, Marek, E-mail: mflaska@umich.edu; Kearfott, Kimberlee J., E-mail: kearfott@umich.edu
2016-08-11
An experiment was performed to determine the neutron energy of near-monoenergetic deuterium–deuterium (D–D) neutrons that elastically scatter in a hydrogenous target. The experiment used two liquid scintillators to perform time-of-flight (TOF) measurements to determine neutron energy, with the start detector also serving as the scatter target. The stop detector was placed 1.0 m away at scatter angles of π/6, π/4, and π/3 rad, and 1.5 m away at a scatter angle of π/4 rad. When discrete 1 ns increments were implemented, the TOF peaks had estimated errors between −21.2 and 3.6% relative to their expected locations. Full widths at half-maximum (FWHM) ranged between 9.6 and 20.9 ns, or approximately 0.56–0.66 MeV. Monte Carlo simulations were also conducted that approximated the experimental setup and included both D–D and deuterium–tritium (D–T) neutrons. The simulated results had errors between −17.2 and 0.0% relative to their expected TOF peaks when 1 ns increments were applied. The largest D–D and D–T FWHMs were 26.7 and 13.7 ns, or approximately 0.85 and 4.98 MeV, respectively. These values, however, can be reduced by manipulating the dimensions of the system components. The results encourage further study of the neutron elastic-scatter TOF system, with particular interest in application to active neutron interrogation to search for conventional explosives.
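The expected TOF peak locations follow from nonrelativistic n–p elastic kinematics: for scattering on hydrogen (equal masses) the scattered neutron energy is E′ = E cos²θ. A quick sketch of the arithmetic with nominal values (not the paper's calibration):

```python
import math

M_N_C2 = 939.565     # neutron rest energy, MeV
C = 299792458.0      # speed of light, m/s

def scattered_energy(e_mev, theta):
    """Nonrelativistic elastic scatter on hydrogen (equal masses): E' = E cos^2(theta)."""
    return e_mev * math.cos(theta) ** 2

def tof_ns(e_mev, path_m):
    """Nonrelativistic time of flight in nanoseconds."""
    v = C * math.sqrt(2.0 * e_mev / M_N_C2)
    return path_m / v * 1e9

e_dd = 2.45                                     # nominal D-D neutron energy, MeV
e_scat = scattered_energy(e_dd, math.pi / 4)    # 1.225 MeV at 45 degrees
t = tof_ns(e_scat, 1.0)                         # expected TOF over the 1.0 m path
```

A 1.225 MeV neutron covers 1.0 m in roughly 65 ns, which sets the scale against which the quoted FWHM values of ∼10-20 ns should be read.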
International Nuclear Information System (INIS)
Dixon, Robert L.; Boone, John M.
2011-01-01
Purpose: Knowledge of the complete axial dose profile f(z), including its long scatter tails, provides the most complete (and flexible) description of the accumulated dose in CT scanning. The CTDI paradigm (including CTDIvol) requires shift-invariance along z (identical dose profiles spaced at equal intervals), and is therefore inapplicable to many of the new and complex shift-variant scan protocols, e.g., high dose perfusion studies using variable (or zero) pitch. In this work, a convolution-based beam model developed by Dixon et al. [Med. Phys. 32, 3712-3728 (2005)], updated with a scatter LSF kernel (or DSF) derived from a Monte Carlo simulation by Boone [Med. Phys. 36, 4547-4554 (2009)], is used to create an analytical equation for the axial dose profile f(z) in a cylindrical phantom. Using f(z), equations are derived which provide the analytical description of conventional (axial and helical) dose, demonstrating its physical underpinnings, and likewise for the peak axial dose f(0) appropriate to stationary-phantom cone beam CT (SCBCT). The methodology can also be applied to dose calculations in shift-variant scan protocols. This paper is an extension of our recent work, Dixon and Boone [Med. Phys. 37, 2703-2718 (2010)], which dealt only with the properties of the peak dose f(0), its relationship to CTDI, and its appropriateness to SCBCT. Methods: The experimental beam profile data f(z) of Mori et al. [Med. Phys. 32, 1061-1069 (2005)] from a 256-channel prototype cone beam scanner for beam widths (apertures) ranging from a = 28 to 138 mm are used to corroborate the theoretical axial profiles in a 32 cm PMMA body phantom. Results: The theoretical functions f(z) closely matched the central-axis experimental profile data for all apertures (a = 28-138 mm). Integration of f(z) likewise yields analytical equations for all the (CTDI-based) dosimetric quantities of conventional CT (including CTDI_L itself) in addition to the peak dose f(0) relevant to SCBCT.
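The convolution structure of the beam model can be illustrated with a toy profile: a rectangular primary beam of aperture a convolved with a normalized scatter kernel. The single-exponential LSF and the scatter fraction below are simplifications chosen for illustration (the cited work uses a Monte Carlo-derived kernel):

```python
import numpy as np

dz = 0.5                                # grid spacing, mm
z = np.arange(-400, 400 + dz, dz)       # axial coordinate, mm

def axial_profile(aperture_mm, scatter_fraction=0.3, lsf_decay_mm=30.0):
    """Toy model: f = (1-s)*rect(a) + s*(rect(a) convolved with a normalized LSF)."""
    rect = (np.abs(z) <= aperture_mm / 2.0).astype(float)
    lsf = np.exp(-np.abs(z) / lsf_decay_mm)
    lsf /= lsf.sum()                    # normalized scatter kernel
    scatter = np.convolve(rect, lsf, mode="same")
    return (1.0 - scatter_fraction) * rect + scatter_fraction * scatter

f = axial_profile(28.0)
peak = f[len(z) // 2]                   # f(0): peak dose on the central axis
integral = f.sum() * dz                 # proportional to a CTDI-style integral dose
```

The convolution redistributes dose from the primary beam into long tails without changing the integral, which is why narrow-beam peak dose f(0) falls below what the integral quantities alone would suggest.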
El-Kader, M. S. A.; Godet, J.-L.; Gustafsson, M.; Maroulis, G.
2018-04-01
Quantum mechanical lineshapes of collision-induced absorption (CIA), collision-induced light scattering (CILS) and collision-induced hyper-Rayleigh scattering (CIHR) at room temperature (295 K) are computed for gaseous mixtures of molecular hydrogen with neon, krypton and xenon. The induced spectra are modeled using theoretical values for the induced dipole moment, pair-polarizability trace and anisotropy, and hyper-polarizability, together with updated intermolecular potentials. Good agreement is observed for all spectra with both the literature potentials and the present potentials, which are constructed from transport and thermophysical properties.
Mitigation of artifacts in rtm with migration kernel decomposition
Zhan, Ge; Schuster, Gerard T.
2012-01-01
The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of migration kernel. The decomposition leads to an improved understanding of migration artifacts and, therefore, presents us with opportunities for improving the quality of RTM images.
Anisotropic hydrodynamics with a scalar collisional kernel
Almaalol, Dekrayat; Strickland, Michael
2018-04-01
Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λφ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.
Multi-parameter Analysis and Inversion for Anisotropic Media Using the Scattering Integral Method
Djebbi, Ramzi
2017-10-24
The main goal in seismic exploration is to identify locations of hydrocarbon reservoirs and give insights on where to drill new wells. Therefore, estimating an Earth model that represents the right physics of the Earth's subsurface is crucial in identifying these targets. Recent seismic data, with long offsets and wide-azimuth features, are more sensitive to anisotropy. Accordingly, multiple anisotropic parameters need to be extracted from the recorded data on the surface to properly describe the model. I study the prospect of applying a scattering integral approach for multi-parameter inversion for a transversely isotropic model with a vertical axis of symmetry. I mainly analyze the sensitivity kernels to understand the sensitivity of seismic data to anisotropy parameters. Then, I use a frequency-domain scattering integral approach to invert for the optimal parameterization. The scattering integral approach is based on the explicit computation of the sensitivity kernels. I present a new method to compute the traveltime sensitivity kernels for wave equation tomography using the unwrapped phase. I show that the new kernels are a better alternative to conventional cross-correlation/Rytov kernels. I also derive and analyze the sensitivity kernels for a transversely isotropic model with a vertical axis of symmetry. The kernel structure, for various opening/scattering angles, highlights the trade-off regions between the parameters. For surface-recorded data, I show that the normal move-out velocity vn, η and δ parameterization is suitable for a simultaneous inversion of diving waves and reflections. Moreover, when seismic data is inverted hierarchically, the horizontal velocity vh, η and ϵ is the parameterization with the least trade-off. In the frequency domain, the hierarchical inversion approach is naturally implemented using frequency continuation, which makes the vh, η and ϵ parameterization attractive. I formulate the multi-parameter inversion using the
International Nuclear Information System (INIS)
Broome, J.
1965-11-01
The programme SCATTER is a KDF9 programme in the Egtran dialect of Fortran to generate normalized angular distributions for elastically scattered neutrons from data input as the coefficients of a Legendre polynomial series, or from differential cross-section data. Also, differential cross-section data may be analysed to produce Legendre polynomial coefficients. Output on cards punched in the format of the U.K.A.E.A. Nuclear Data Library is optional. (author)
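Assuming the common normalization f(μ) = Σ_l ((2l+1)/2) f_l P_l(μ) (the abstract does not state SCATTER's exact convention), reconstructing and normalizing an angular distribution from Legendre moments looks like:

```python
import numpy as np
from numpy.polynomial import legendre

def angular_distribution(f_l, mu):
    """f(mu) = sum_l ((2l+1)/2) * f_l * P_l(mu); with f_0 = 1 the
    distribution integrates to 1 over mu in [-1, 1]."""
    series = [(2 * l + 1) / 2.0 * c for l, c in enumerate(f_l)]
    return legendre.legval(mu, series)

f_l = [1.0, 0.3, 0.1]              # hypothetical Legendre moments, forward-peaked
mu = np.linspace(-1.0, 1.0, 2001)
f = angular_distribution(f_l, mu)
norm = ((f[:-1] + f[1:]) * 0.5 * np.diff(mu)).sum()   # trapezoidal integral
```

Because every P_l with l > 0 integrates to zero over [-1, 1], the f_0 = 1 moment alone fixes the normalization, and the higher moments only reshape the distribution.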
Uranium kernel formation via internal gelation
International Nuclear Information System (INIS)
Hunt, R.D.; Collins, J.L.
2004-01-01
In the 1970s and 1980s, the U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering, and would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tristructural-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation, as well as small changes to the feed composition, increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)
Isotropic oscillator: spheroidal wave functions
International Nuclear Information System (INIS)
Mardoyan, L.G.; Pogosyan, G.S.; Ter-Antonyan, V.M.; Sisakyan, A.N.
1985-01-01
Solutions of the Schroedinger equation are found for an isotropic oscillator (IO) in prolate and oblate spheroidal coordinates. It is shown that the obtained solutions turn into the spherical and cylindrical bases of the isotropic oscillator at R → 0 and R → infinity (R is the dimensional parameter entering into the definition of the prolate and oblate spheroidal coordinates). The explicit form is given for both the prolate and oblate bases of the isotropic oscillator for the lowest quantum states.
A relationship between Gel'fand-Levitan and Marchenko kernels
International Nuclear Information System (INIS)
Kirst, T.; Von Geramb, H.V.; Amos, K.A.
1989-01-01
An integral equation which relates the output kernels of the Gel'fand-Levitan and Marchenko inverse scattering equations is specified. Structural details of this integral equation are studied when the S-matrix is a rational function, and the output kernels are separable in terms of Bessel, Hankel and Jost solutions. 4 refs
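For context, the fixed-angular-momentum inverse scattering equations referred to above have the following standard forms (the notation here is generic and may differ from the paper's; sign and normalization conventions vary):

```latex
% Marchenko equation: input kernel F built from the S-matrix, solved on r' > r
K(r,r') + F(r,r') + \int_r^{\infty} K(r,s)\, F(s,r')\, ds = 0, \qquad r' > r,
% Gel'fand-Levitan equation: input kernel G built from the spectral function, solved on r' < r
\tilde{K}(r,r') + G(r,r') + \int_0^{r} \tilde{K}(r,s)\, G(s,r')\, ds = 0, \qquad r' < r,
% with the potential recovered from the diagonal of the output kernel, e.g.
V(r) = -2\,\frac{d}{dr} K(r,r) \quad \text{(Marchenko convention)}.
```

The integral equation specified in the paper relates the output kernels K and K̃ of these two equations directly.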
International Nuclear Information System (INIS)
Raine, D.J.
1981-01-01
This introduction to contemporary ideas in cosmology differs from other books on the 'expanding Universe' in its emphasis on physical cosmology and on the physical basis of the general theory of relativity. It is considered that the remarkable degree of isotropy, rather than the expansion, can be regarded as the central observational feature of the Universe. The various theories and ideas in 'big-bang' cosmology are discussed, providing an insight into current problems. Chapter headings are: quality of matter; expanding Universe; quality of radiation; quantity of matter; general theory of relativity; cosmological models; cosmological tests; matter and radiation; limits of isotropy; why is the Universe isotropic; singularities; evolution of structure. (U.K.)
Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping
2016-01-01
To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or the kernel cross-covariance operator (kernel CCO). These are sensitive to contaminated data, even when bounded positive definite kernels are used. First, we propose a robust kernel covariance operator (robust kernel CO) and a robust kernel cross-covariance operator (robust kernel CCO).
Uusijärvi, Helena; Chouin, Nicolas; Bernhardt, Peter; Ferrer, Ludovic; Bardiès, Manuel; Forssell-Aronsson, Eva
2009-08-01
Point kernels describe the energy deposited at a certain distance from an isotropic point source and are useful for nuclear medicine dosimetry. They can be used for absorbed-dose calculations for sources of various shapes and are also a useful tool when comparing different Monte Carlo (MC) codes. The aim of this study was to compare point kernels calculated by using the mixed MC code PENELOPE (v. 2006) with point kernels calculated by using the condensed-history MC codes ETRAN, GEANT4 (v. 8.2), and MCNPX (v. 2.5.0). Point kernels for electrons with initial energies of 10, 100, 500, and 1 MeV were simulated with PENELOPE. Spherical shells were placed around an isotropic point source at distances from 0 to 1.2 times the continuous-slowing-down-approximation range (R(CSDA)). Detailed (event-by-event) simulations were performed for electrons with initial energies of less than 1 MeV. For 1-MeV electrons, multiple scattering was included for energy losses less than 10 keV, while energy losses greater than 10 keV were simulated in a detailed way. The point kernels generated were used to calculate cellular S-values for monoenergetic electron sources. The point kernels obtained by using PENELOPE and ETRAN were also used to calculate cellular S-values for the high-energy beta-emitter 90Y, the medium-energy beta-emitter 177Lu, and the low-energy electron emitter 103mRh. These S-values were also compared with the Medical Internal Radiation Dose (MIRD) cellular S-values. The greatest difference between the point kernels (mean difference calculated over distances) was 1.4%, 2.5%, and 6.9% for ETRAN, GEANT4, and MCNPX, respectively, compared to PENELOPE, omitting the S-values for activity distributed on the cell surface for 10-keV electrons. The largest difference between the cellular S-values for the radionuclides, between PENELOPE and ETRAN, was seen for 177Lu (1.2%). There were large differences between the MIRD cellular S-values and those obtained from the point kernels
Neutron Transport in Finite Random Media with Pure-Triplet Scattering
International Nuclear Information System (INIS)
Sallaha, M.; Hendi, A.A.
2008-01-01
The solution of the one-speed neutron transport equation in a finite-slab random medium with pure-triplet anisotropic scattering is studied. The stochastic medium is assumed to consist of two randomly mixed immiscible fluids. The cross section and the scattering kernel are treated as discrete random variables that obey Markovian statistics with exponential chord-length distributions. The medium boundaries are considered to have specular reflectivities, with an angular-dependent externally incident flux. The deterministic solution is obtained by using the Pomraning-Eddington approximation. Numerical results are calculated for the average reflectivity and average transmissivity for different values of the single-scattering albedo, varying the parameters that characterize the random medium. Compared with the Monte Carlo-based results obtained by Adams et al. for the case of isotropic scattering, good agreement is observed.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning (KCL) has been successfully used to achieve robust clustering. However, KCL is not scalable to large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis of why the proposed approximation model works for kernel competitive learning, and furthermore we show that the computational complexity of AKCL is greatly reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle to using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance in terms of clustering precision than related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
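The core idea of the abstract above — competitive learning restricted to a sampled kernel subspace so the full n × n Gram matrix is never formed — can be sketched as follows. This is a minimal illustration of the sampling idea, not the authors' exact AKCL algorithm; the Nyström-style projection onto landmark points and all parameter choices are assumptions for the sketch.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """Gaussian kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def approx_kernel_cl(X, n_clusters, n_landmarks, epochs=5, lr=0.1, seed=0):
    """Competitive learning with prototypes represented as coefficient
    vectors over m sampled landmarks (a sketch of the subspace idea)."""
    rng = np.random.default_rng(seed)
    landmarks = X[rng.choice(len(X), n_landmarks, replace=False)]
    Kll = rbf(landmarks, landmarks) + 1e-8 * np.eye(n_landmarks)
    Kxl = rbf(X, landmarks)                       # n x m, instead of n x n
    B = np.linalg.solve(Kll, Kxl.T).T             # Nystrom coordinates of each point
    A = B[rng.choice(len(X), n_clusters, replace=False)].copy()  # init prototypes
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            diff = A - B[i]                       # offsets in the landmark subspace
            d = np.einsum('cj,jk,ck->c', diff, Kll, diff)
            winner = int(np.argmin(d))
            A[winner] += lr * (B[i] - A[winner])  # winner-take-all update
    diffs = B[:, None, :] - A[None, :, :]
    d = np.einsum('ncj,jk,nck->nc', diffs, Kll, diffs)
    return d.argmin(axis=1)
```

The storage cost drops from O(n²) to O(nm), which is the scalability gain the abstract refers to.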
Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K
2015-05-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
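In a discrete setting the weighted eigenfunction expansion described above reduces to a few lines: smooth a signal by damping each Laplacian eigencomponent with a heat-kernel weight. This is a hedged sketch of the general idea, using an arbitrary symmetric graph Laplacian in place of the Laplace-Beltrami operator on a mesh; `t` is the diffusion time.

```python
import numpy as np

def heat_kernel_smooth(L, f, t):
    """Heat-kernel smoothing as a weighted eigenfunction expansion:
    f_t = sum_i exp(-lambda_i * t) <f, psi_i> psi_i,
    where (lambda_i, psi_i) are eigenpairs of the symmetric Laplacian L."""
    lam, psi = np.linalg.eigh(L)
    return psi @ (np.exp(-lam * t) * (psi.T @ f))
```

At t = 0 the signal is returned unchanged; as t grows the result converges to the projection onto the Laplacian null space (the mean, for a connected graph), mirroring the equivalence with isotropic heat diffusion noted in the abstract.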
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to sorting the kernel eigenvectors by importance in terms of entropy rather than variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation of the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. Both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
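The entropy-based ranking that distinguishes KECA from kernel PCA is compact enough to sketch. The Rényi entropy estimate decomposes over kernel eigenpairs as λ_i (1ᵀe_i)², so KECA keeps the eigenpairs with the largest such terms rather than the largest λ_i. A minimal sketch (training-set projection only; the Gaussian width `sigma` is left to the user, and the OKECA rotation is not included):

```python
import numpy as np

def keca_features(X, sigma, n_components):
    """KECA sketch: rank kernel eigenpairs by their entropy contribution
    lambda_i * (1^T e_i)^2 instead of by eigenvalue alone (kernel PCA)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))            # Gaussian Gram matrix
    lam, E = np.linalg.eigh(K)                    # eigenvalues ascending
    contrib = lam * (E.sum(axis=0) ** 2)          # entropy contributions
    top = np.argsort(contrib)[::-1][:n_components]
    return E[:, top] * np.sqrt(np.clip(lam[top], 0.0, None))
```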
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger
2011-01-01
In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...
Collision kernels in the eikonal approximation for Lennard-Jones interaction potential
International Nuclear Information System (INIS)
Zielinska, S.
1985-03-01
Velocity-changing collisions are conveniently described by collision kernels. These kernels depend on the interaction potential, and there is a need to evaluate them for realistic interatomic potentials. Using the collision kernels, we are able to investigate the redistribution of atomic populations caused by laser light and velocity-changing collisions. In this paper we present a method of evaluating the collision kernels in the eikonal approximation. We discuss the influence of the potential parameters R_0^i and ε_0^i on the kernel width for a given atomic state. It turns out that, unlike the collision kernel for the hard-sphere model of scattering, the Lennard-Jones kernel is not very sensitive to changes of R_0^i. Contrary to the general tendency of approximating collision kernels by Gaussian curves, kernels for the Lennard-Jones potential do not exhibit such behaviour. (author)
Validation of Born Traveltime Kernels
Baig, A. M.; Dahlen, F. A.; Hung, S.
2001-12-01
Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool when velocity anomalies have scale lengths smaller than the width of the Fresnel zone. In the presence of such structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory predicts the cross-correlated traveltime as well as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective, even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we also show how our measurements of the velocity shift and the variance of traveltime compare with various theoretical predictions in a given regime.
Energy Technology Data Exchange (ETDEWEB)
Duff, I.
1994-12-31
This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: "Current status of user-level sparse BLAS"; "Current status of the sparse BLAS toolkit"; and "Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit".
Generation of gamma-ray streaming kernels through cylindrical ducts via Monte Carlo method
International Nuclear Information System (INIS)
Kim, Dong Su
1992-02-01
Since radiation streaming through penetrations is often the critical consideration in protecting personnel in a nuclear facility, it has been of great concern in radiation shielding design and analysis. Several methods have been developed and applied to the analysis of radiation streaming in the past, such as the ray analysis method, the single-scattering method, the albedo method, and the Monte Carlo method. Apart from the Monte Carlo method, which is accurate but requires a great deal of computing time, these are suitable only for order-of-magnitude calculations where sufficient margin is available. This study developed a Monte Carlo code and used it to construct a data library of solutions for radiation streaming through a straight cylindrical duct in a concrete wall, for a broad, mono-directional, monoenergetic gamma-ray beam of unit intensity. The solution, termed the plane streaming kernel, is the average dose rate at the duct outlet; it was evaluated for 20 source energies from 0 to 10 MeV, 36 source incident angles from 0 to 70 degrees, 5 duct radii from 10 to 30 cm, and 16 wall thicknesses from 0 to 100 cm. It was demonstrated that the average dose rate due to an isotropic point source at arbitrary positions can be well approximated using the plane streaming kernels with acceptable error. Thus, the library of plane streaming kernels can be used for accurate and efficient analysis of radiation streaming through a straight cylindrical duct in concrete walls due to arbitrary distributions of gamma-ray sources.
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems that require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross-validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
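The TL1 kernel itself is a one-liner, which is part of its appeal: it can be dropped into any toolbox that accepts a precomputed Gram matrix. A sketch, assuming the truncated-ℓ1 form k(x, y) = max(ρ − ‖x − y‖₁, 0) implied by the abstract:

```python
import numpy as np

def tl1_kernel(X, Y, rho):
    """Truncated L1-distance (TL1) kernel: k(x, y) = max(rho - ||x - y||_1, 0).
    Piecewise linear in the inputs, hence locally linear classifiers; note it
    is not positive semidefinite in general."""
    d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(-1)
    return np.maximum(rho - d1, 0.0)
```

Beyond distance ρ the kernel is exactly zero, so each point only interacts with its ℓ1 neighbourhood, which is what produces the subregion structure the brief describes.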
Gärtner, Thomas
2009-01-01
This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by
Locally linear approximation for Kernel methods : the Railway Kernel
Muñoz, Alberto; González, Javier
2008-01-01
In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way we avoid potential problems arising from the use of a general-purpose kernel such as the RBF kernel, for instance the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...
Isotropic and anisotropic surface wave cloaking techniques
International Nuclear Information System (INIS)
McManus, T M; Spada, L La; Hao, Y
2016-01-01
In this paper we compare two different approaches for surface waves cloaking. The first technique is a unique application of Fermat’s principle and requires isotropic material properties, but owing to its derivation is limited in its applicability. The second technique utilises a geometrical optics approximation for dealing with rays bound to a two dimensional surface and requires anisotropic material properties, though it can be used to cloak any smooth surface. We analytically derive the surface wave scattering behaviour for both cloak techniques when applied to a rotationally symmetric surface deformation. Furthermore, we simulate both using a commercially available full-wave electromagnetic solver and demonstrate a good level of agreement with their analytically derived solutions. Our analytical solutions and simulations provide a complete and concise overview of two different surface wave cloaking techniques. (paper)
Motai, Yuichi
2015-01-01
Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include
Ellipsoidal basis for isotropic oscillator
International Nuclear Information System (INIS)
Kallies, W.; Lukac, I.; Pogosyan, G.S.; Sisakyan, A.N.
1994-01-01
The solutions of the Schroedinger equation are derived for the isotropic oscillator potential in the ellipsoidal coordinate system. The explicit expression is obtained for the ellipsoidal integrals of motion through the components of the orbital moment and Demkov's tensor. The explicit form of the ellipsoidal basis is given for the lowest quantum numbers. 10 refs.; 1 tab. (author)
Fabrication and Characterization of Surrogate TRISO Particles Using 800μm ZrO_{2} Kernels
Energy Technology Data Exchange (ETDEWEB)
Jolly, Brian C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Helmreich, Grant [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cooley, Kevin M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dyer, John [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Terrani, Kurt [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-07-01
In support of fully ceramic microencapsulated (FCM) fuel development, coating development work is ongoing at Oak Ridge National Laboratory (ORNL) to produce tri-structural isotropic (TRISO) coated fuel particles with both UN kernels and surrogate (uranium-free) kernels. The nitride kernels are used to increase fissile density in these SiC-matrix fuel pellets, with details described elsewhere. The surrogate TRISO particles are necessary for separate-effects testing and for use in the consolidation process development. This report focuses on the fabrication and characterization of surrogate TRISO particles that use ZrO_{2} microspheres 800 μm in diameter as the kernel.
Thermochemical equilibrium in a kernel of a UN TRISO coated fuel particle
International Nuclear Information System (INIS)
Kim, Young Min; Jo, C. K.; Lim, H. S.; Cho, M. S.; Lee, W. J.
2012-01-01
A coated fuel particle (CFP) with a uranium mononitride (UN) kernel has recently been considered as an advanced fuel option, such as in fully ceramic microencapsulated (FCM) replacement fuel for light water reactors (LWRs). In FCM fuel, a large number of tri-structural isotropic (TRISO) coated fuel particles are embedded in a silicon carbide (SiC) matrix. Thermochemical equilibrium calculations can predict the chemical behaviour of a kernel in a TRISO particle of FCM fuel during irradiation; they give information on the kind and quantity of gases generated in the kernel during irradiation. This study treats the quantitative analysis of thermochemical equilibrium in a UN TRISO particle of FCM LWR fuel using the HSC software.
Pan, Wenyong; Geng, Yu; Innanen, Kristopher A.
2018-05-01
The problem of inverting for multiple physical parameters in the subsurface using seismic full-waveform inversion (FWI) is complicated by interparameter trade-off arising from inherent ambiguities between different physical parameters. Parameter resolution is often characterized using scattering radiation patterns, but these neglect some important aspects of interparameter trade-off. More general analysis and mitigation of interparameter trade-off in isotropic-elastic FWI is possible through judiciously chosen multiparameter Hessian matrix-vector products. We show that products of multiparameter Hessian off-diagonal blocks with model perturbation vectors, referred to as interparameter contamination kernels, are central to the approach. We apply the multiparameter Hessian to various vectors designed to provide information regarding the strengths and characteristics of interparameter contamination, both locally and within the whole volume. With numerical experiments, we observe that S-wave velocity perturbations introduce strong contaminations into density and phase-reversed contaminations into P-wave velocity, but themselves experience only limited contaminations from other parameters. Based on these findings, we introduce a novel strategy to mitigate the influence of interparameter trade-off with approximate contamination kernels. Furthermore, we recommend that the local spatial and interparameter trade-off of the inverted models be quantified using extended multiparameter point spread functions (EMPSFs) obtained with pre-conditioned conjugate-gradient algorithm. Compared to traditional point spread functions, the EMPSFs appear to provide more accurate measurements for resolution analysis, by de-blurring the estimations, scaling magnitudes and mitigating interparameter contamination. Approximate eigenvalue volumes constructed with stochastic probing approach are proposed to evaluate the resolution of the inverted models within the whole model. With a synthetic
International Nuclear Information System (INIS)
Sanchez, Richard.
1975-04-01
For one-dimensional geometries, the transport equation with linearly anisotropic scattering can be reduced to a single integral equation; this is a singular-kernel Fredholm equation of the second kind. When a conventional projective method, that of Galerkin, is applied to the solution of this equation, the well-known collision probability algorithm is obtained. Piecewise polynomial expansions are used to represent the flux. In the ANILINE code, the flux is assumed to be linear in plane geometry and parabolic in both cylindrical and spherical geometries. An integral relationship was found between the one-dimensional isotropic and anisotropic kernels; this allows the new matrix elements (arising from the anisotropic kernel) to be reduced to classic collision probabilities of the isotropic scattering equation. For cylindrical and spherical geometries, an approximate representation of the current was used to avoid an additional numerical integration. Reflective boundary conditions were considered; in plane geometry the reflection is assumed specular, while for the other geometries the isotropic-reflection hypothesis has been adopted. Further, the ANILINE code can treat an incoming isotropic current. Numerous checks were performed in monokinetic theory. Critical radii and albedos were calculated for homogeneous slabs, cylinders and spheres. For heterogeneous media, the thermal utilization factor obtained by this method was compared with the theoretical result based upon a formula by Benoist. Finally, ANILINE was incorporated into the multigroup APOLLO code, which made it possible to analyse the MINERVA experimental reactor in transport theory with 99 groups. The ANILINE method is particularly suited to the treatment of strongly anisotropic media with considerable flux gradients. It is also well adapted to the calculation of reflectors and, in general, to the exact analysis of anisotropic effects in large-sized media [fr]
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger
2009-01-01
Realized kernels use high-frequency data to estimate the daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock... and find a remarkable level of agreement. We identify some features of the high-frequency data which are challenging for realized kernels. They are when there are local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated...
Adaptive metric kernel regression
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
2000-01-01
Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
Adaptive Metric Kernel Regression
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
1998-01-01
Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
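The two ingredients of the adaptive-metric idea in the abstracts above — a kernel smoother with per-dimension scales, and a leave-one-out cross-validation error to minimise over those scales — can be sketched directly. This is an illustrative Nadaraya-Watson setup under a diagonal metric, not the authors' exact algorithm; the gradient-based minimisation over `scales` is omitted.

```python
import numpy as np

def nw_predict(Xtr, ytr, Xte, scales):
    """Nadaraya-Watson regression under a diagonal input metric:
    dimension j is weighted by scales[j] before the Gaussian kernel."""
    d2 = (((Xte[:, None, :] - Xtr[None, :, :]) * scales) ** 2).sum(-1)
    W = np.exp(-0.5 * d2)
    return (W @ ytr) / W.sum(axis=1)

def loo_mse(Xtr, ytr, scales):
    """Leave-one-out CV estimate of the generalisation error -- the
    objective the adaptive-metric algorithm minimises over `scales`."""
    d2 = (((Xtr[:, None, :] - Xtr[None, :, :]) * scales) ** 2).sum(-1)
    W = np.exp(-0.5 * d2)
    np.fill_diagonal(W, 0.0)        # exclude each point from its own fit
    pred = (W @ ytr) / W.sum(axis=1)
    return float(np.mean((pred - ytr) ** 2))
```

Shrinking the scale of an irrelevant input dimension towards zero lowers the LOO error, which is exactly how the method "adjusts the importance of different dimensions".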
Kernel methods for deep learning
Cho, Youngmin
2012-01-01
We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...
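One member of this family has a simple closed form and conveys the flavour of the construction: the degree-1 arc-cosine kernel corresponds to an infinitely wide single layer of ReLU-like units. A sketch of that member (assuming the standard closed form; the higher-degree members and compositions mentioned in the abstract are not shown):

```python
import numpy as np

def arccos_kernel_deg1(X, Y):
    """Degree-1 arc-cosine kernel, mimicking an infinitely wide ReLU layer:
    k(x, y) = (1/pi) * ||x|| * ||y|| * (sin t + (pi - t) cos t),
    with t the angle between x and y."""
    nx = np.linalg.norm(X, axis=1)
    ny = np.linalg.norm(Y, axis=1)
    cos_t = np.clip((X @ Y.T) / np.outer(nx, ny), -1.0, 1.0)
    t = np.arccos(cos_t)
    return np.outer(nx, ny) * (np.sin(t) + (np.pi - t) * np.cos(t)) / np.pi
```

On the diagonal t = 0, so k(x, x) = ‖x‖², and the resulting Gram matrix is positive semidefinite, which is what lets these kernels plug into standard kernel machines.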
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger
We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...
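The structure of a realised kernel estimator is easy to sketch in the univariate case: realised autocovariances of high-frequency returns are combined with smooth kernel weights. The sketch below uses Parzen weights, which the related abstracts in this collection single out as an efficient smooth choice; the multivariate, noise-robust refinements of the paper above are not reproduced.

```python
import numpy as np

def parzen_weight(x):
    """Parzen kernel weight function on [0, 1]."""
    x = abs(x)
    if x <= 0.5:
        return 1 - 6 * x**2 + 6 * x**3
    if x <= 1.0:
        return 2 * (1 - x) ** 3
    return 0.0

def realised_kernel(r, H):
    """Univariate realised kernel: gamma_0 + 2 * sum_h k(h/(H+1)) * gamma_h,
    where gamma_h is the h-th realised autocovariance of the returns r."""
    r = np.asarray(r, dtype=float)
    rk = float(r @ r)                        # gamma_0: realised variance
    for h in range(1, H + 1):
        gamma_h = float(r[h:] @ r[:-h])
        rk += 2.0 * parzen_weight(h / (H + 1)) * gamma_h
    return rk
```

With H = 0 the estimator collapses to the plain realised variance; the weighted autocovariance terms are what absorb market-microstructure noise.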
DEFF Research Database (Denmark)
Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads
2011-01-01
In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...
Spafford, Eugene H.; Mckendry, Martin S.
1986-01-01
An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.
Induced piezoelectricity in isotropic biomaterial.
Zimmerman, R L
1976-01-01
Isotropic material can be made to exhibit piezoelectric effects by the application of a constant electric field. For insulators, the piezoelectric strain constant is proportional to the applied electric field, and for semiconductors an additional out-of-phase component of piezoelectricity is proportional to the electric current density in the sample. The two induced coefficients are proportional to the strain-dependent dielectric constant (dε/dS + ε) and resistivity (dρ/dS − ρ), respectively. The latter is more important at frequencies such that ρεω < 1, often the case in biopolymers. Signals from induced piezoelectricity in nature may be larger than those from true piezoelectricity. PMID:990389
How Isotropic is the Universe?
Saadeh, Daniela; Feeney, Stephen M; Pontzen, Andrew; Peiris, Hiranya V; McEwen, Jason D
2016-09-23
A fundamental assumption in the standard model of cosmology is that the Universe is isotropic on large scales. Breaking this assumption leads to a set of solutions to Einstein's field equations, known as Bianchi cosmologies, only a subset of which have ever been tested against data. For the first time, we consider all degrees of freedom in these solutions to conduct a general test of isotropy using cosmic microwave background temperature and polarization data from Planck. For the vector mode (associated with vorticity), we obtain a tight limit on the anisotropic expansion (σ_V/H)_0; overall, an anisotropic Universe is strongly disfavored, with odds of 121 000:1 against.
Viscosity kernel of molecular fluids
DEFF Research Database (Denmark)
Puscasu, Ruslan; Todd, Billy; Daivis, Peter
2010-01-01
The density, temperature, and chain-length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have, by contrast, less impact on the overall normalized shape. Functional forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3-6 atomic diameters, which means...
Isotropic Negative Thermal Expansion Metamaterials.
Wu, Lingling; Li, Bo; Zhou, Ji
2016-07-13
Negative thermal expansion materials are important and desirable in science and engineering applications. However, natural materials with isotropic negative thermal expansion are rare, and their performance is usually unsatisfactory. Here, we propose a novel method to achieve two- and three-dimensional negative thermal expansion metamaterials via antichiral structures. The two-dimensional metamaterial is constructed with unit cells that combine bimaterial strips and antichiral structures, while the three-dimensional metamaterial is fabricated by a multimaterial 3D printing process. Both experimental and simulation results display the isotropic negative thermal expansion property of the samples. The effective coefficient of negative thermal expansion of the proposed models is demonstrated to depend on the difference between the thermal expansion coefficients of the component materials, as well as on the circular node radius and the ligament length in the antichiral structures. The measured value of the linear negative thermal expansion coefficient of the three-dimensional sample is among the largest achieved in experiments to date. Our findings provide an easy and practical approach to obtaining materials with tunable negative thermal expansion on any scale.
Geometrical considerations in analyzing isotropic or anisotropic surface reflections.
Simonot, Lionel; Obein, Gael
2007-05-10
The bidirectional reflectance distribution function (BRDF) represents the evolution of the reflectance with the directions of incidence and observation. Today BRDF measurements are increasingly applied and have become important to the study of the appearance of surfaces. The representation and the analysis of BRDF data are discussed, and the distortions caused by the traditional representation of the BRDF in a Fourier plane are pointed out and illustrated for two theoretical cases: an isotropic surface and a brushed surface. These considerations will help characterize either the specular peak width of an isotropic rough surface or the main directions of the light scattered by an anisotropic rough surface without misinterpretations. Finally, what is believed to be a new space is suggested for the representation of the BRDF, which avoids the geometrical deformations and in numerous cases is more convenient for BRDF analysis.
Microscopic description of the collisions between nuclei. [Generator coordinate kernels
Energy Technology Data Exchange (ETDEWEB)
Canto, L F; Brink, D M [Oxford Univ. (UK). Dept. of Theoretical Physics
1977-03-21
The equivalence of the generator coordinate method and the resonating group method is used in the derivation of two new methods to describe the scattering of spin-zero fragments. Both these methods use generator coordinate kernels, but avoid the problem of calculating the generator coordinate weight function in the asymptotic region. The scattering of two α-particles is studied as an illustration.
Variable Kernel Density Estimation
Terrell, George R.; Scott, David W.
1992-01-01
We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
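Of the two approaches mentioned above, varying the window by point of sample observation has a classic concrete form: Abramson's square-root law, where each observation gets a bandwidth inversely proportional to the square root of a pilot density estimate. A hedged one-dimensional sketch of that sample-point estimator (the pilot bandwidth `h0` and Gaussian kernel are assumptions for illustration):

```python
import numpy as np

def adaptive_kde(x_eval, data, h0):
    """Sample-point variable KDE: observation x_i gets bandwidth
    h_i = h0 * sqrt(g / f_pilot(x_i)), with f_pilot a fixed-bandwidth
    pilot estimate and g its geometric mean over the sample."""
    def fixed_kde(x, d, h):
        u = (x[:, None] - d[None, :]) / h
        return np.exp(-0.5 * u ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))
    pilot = fixed_kde(data, data, h0)
    g = np.exp(np.mean(np.log(pilot)))
    h_i = h0 * np.sqrt(g / pilot)                 # per-observation bandwidths
    u = (x_eval[:, None] - data[None, :]) / h_i[None, :]
    return (np.exp(-0.5 * u ** 2) / (h_i * np.sqrt(2 * np.pi))).mean(axis=1)
```

Observations in sparse regions get wide kernels and those in dense regions get narrow ones, which is the intuition behind varying the window over the domain of estimation.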
Steerability of Hermite Kernel
Czech Academy of Sciences Publication Activity Database
Yang, Bo; Flusser, Jan; Suk, Tomáš
2013-01-01
Roč. 27, č. 4 (2013), 1354006-1-1354006-25 ISSN 0218-0014 R&D Projects: GA ČR GAP103/11/1552 Institutional support: RVO:67985556 Keywords: Hermite polynomials * Hermite kernel * steerability * adaptive filtering Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.558, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/yang-0394387.pdf
Thermalization vs. isotropization and azimuthal fluctuations
International Nuclear Information System (INIS)
Mrowczynski, Stanislaw
2005-01-01
A hydrodynamic description requires local thermodynamic equilibrium of the system under study, but approximately hydrodynamic behaviour is already manifested when the momentum distribution of the liquid's components is not of equilibrium form but merely isotropic. While the process of equilibration is relatively slow, the parton system becomes isotropic rather fast due to plasma instabilities. Azimuthal fluctuations observed in relativistic heavy-ion collisions are argued to distinguish between a fully equilibrated and a merely isotropic parton system produced in the early stage of the collision.
Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering
AbdulJabbar, Mustafa Abdulmajeed; Al Farhan, Mohammed; Al-Harthi, Noha A.; Chen, Rui; Yokota, Rio; Bagci, Hakan; Keyes, David E.
2018-01-01
scattering, which uses FMM as a matrix-vector multiplication inside the GMRES iterative method. Our FMM Helmholtz kernels treat nontrivial singular and near-field integration points. We implement highly optimized kernels for both shared and distributed memory
Kernel Machine SNP-set Testing under Multiple Candidate Kernels
Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.
2013-01-01
Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
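The composite-kernel idea can be sketched in a few lines: any convex combination of valid (positive semi-definite) candidate kernel matrices is again a valid kernel, which is what makes the composite strategy well defined. The genotype matrix, the linear and IBS-style kernels, and the equal weights below are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(6)
G = rng.integers(0, 3, (30, 10)).astype(float)    # 30 subjects, 10 SNPs (0/1/2 genotypes)

K_lin = G @ G.T                                   # linear kernel
# IBS-style similarity kernel: 1 - (mean absolute genotype difference)/2
K_ibs = 1.0 - np.abs(G[:, None, :] - G[None, :, :]).sum(axis=2) / (2.0 * G.shape[1])
# trace-normalised convex combination: still symmetric positive semi-definite
K_comp = 0.5 * K_lin / np.trace(K_lin) + 0.5 * K_ibs / np.trace(K_ibs)

min_eig = np.linalg.eigvalsh(K_comp).min()
print(min_eig >= -1e-8)   # True: the composite kernel remains PSD
```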
Retrieval of collision kernels from the change of droplet size distributions with linear inversion
Energy Technology Data Exchange (ETDEWEB)
Onishi, Ryo; Takahashi, Keiko [Earth Simulator Center, Japan Agency for Marine-Earth Science and Technology, 3173-25 Showa-machi, Kanazawa-ku, Yokohama Kanagawa 236-0001 (Japan); Matsuda, Keigo; Kurose, Ryoichi; Komori, Satoru [Department of Mechanical Engineering and Science, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)], E-mail: onishi.ryo@jamstec.go.jp, E-mail: matsuda.keigo@t03.mbox.media.kyoto-u.ac.jp, E-mail: takahasi@jamstec.go.jp, E-mail: kurose@mech.kyoto-u.ac.jp, E-mail: komori@mech.kyoto-u.ac.jp
2008-12-15
We have developed a new simple inversion scheme for retrieving collision kernels from the change of droplet size distribution due to collision growth. Three-dimensional direct numerical simulations (DNS) of steady isotropic turbulence with colliding droplets are carried out in order to investigate the validity of the developed inversion scheme. In the DNS, air turbulence is calculated using a quasi-spectral method; droplet motions are tracked in a Lagrangian manner. The initial droplet size distribution is set to be equivalent to that obtained in a wind tunnel experiment. Collision kernels retrieved by the developed inversion scheme are compared to those obtained by the DNS. The comparison shows that the collision kernels can be retrieved within 15% error. This verifies the feasibility of retrieving collision kernels using the present inversion scheme.
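The flavour of such a linear inversion can be seen in a deliberately reduced, monodisperse sketch: for a single droplet size the Smoluchowski coagulation equation reads dn/dt = -(1/2)Kn², so the kernel K enters linearly given the observed change of the number density. All numerical values are illustrative; the paper's scheme handles a full size distribution rather than this scalar case:

```python
import numpy as np

# Monodisperse reduction: dn/dt = -(1/2) K n^2, so the kernel K enters
# linearly in the observed rate of change of the number density.
K_true = 1.5e-3                            # "true" kernel used to synthesize data
n0, dt = 100.0, 1e-3
t = np.arange(0.0, 1.0, dt)
n = n0 / (1.0 + 0.5 * K_true * n0 * t)     # exact solution of the ODE above

dndt = np.gradient(n, dt)                  # "observed" change of the distribution
# linear least-squares inversion: regress dn/dt on -(1/2) n^2
K_est = np.linalg.lstsq((-0.5 * n**2)[:, None], dndt, rcond=None)[0][0]
print(abs(K_est - K_true) / K_true)        # small relative retrieval error
```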
Smolka, Gert
1994-01-01
Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...
7 CFR 981.7 - Edible kernel.
2010-01-01
Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976]
7 CFR 981.408 - Inedible kernel.
2010-01-01
Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...
7 CFR 981.8 - Inedible kernel.
2010-01-01
Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...
Homogenous isotropic invisible cloak based on geometrical optics.
Sun, Jingbo; Zhou, Ji; Kang, Lei
2008-10-27
An invisible cloak derived from the coordinate transformation requires its constitutive material to be anisotropic. In this work, we present a cloak of graded-index isotropic material based on geometrical optics theory. The cloak is realized by a concentric multilayered structure with a designed refractive-index profile to achieve low scattering and smooth power flow. Full-wave simulations of such a cylindrical cloak design are performed to demonstrate its cloaking ability for incident waves of any polarization. Using normal natural materials with isotropy and low absorption, the cloak sheds light on a practical path to stealth technology, especially in the optical range.
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger
2011-01-01
We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show that this new consistent estimator is guaranteed to be positive semi-definite, is robust to measurement error of certain types, and can also handle non-synchronous trading. It is the first estimator which has these three properties, all of which are essential for empirical work in this area. We derive the large-sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results to previous work which has used...
Clustering via Kernel Decomposition
DEFF Research Database (Denmark)
Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan
2006-01-01
Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be handled using standard cross-validation methods, as is demonstrated on a number of diverse data sets.
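A minimal sketch of the affinity-eigendecomposition idea, using plain spectral clustering with a Gaussian affinity matrix rather than the paper's NMF-based decomposition; the data, kernel width, and number of clusters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
# two well-separated Gaussian blobs (illustrative data)
X = np.r_[rng.normal(-2.0, 0.4, (50, 2)), rng.normal(2.0, 0.4, (50, 2))]
sq = np.sum(X**2, axis=1)
A = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * X @ X.T) / 2.0)   # Gaussian affinity, width 1
d = A.sum(axis=1)
L = A / np.sqrt(d[:, None] * d[None, :])       # symmetrically normalised affinity
vals, vecs = np.linalg.eigh(L)                 # ascending eigenvalues
labels = (vecs[:, -2] > 0).astype(int)         # sign of 2nd eigenvector splits the clusters
print(abs(labels[:50].mean() - labels[50:].mean()))   # near 1: blobs get different labels
```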
Isotropic blackbody cosmic microwave background radiation as evidence for a homogeneous universe.
Clifton, Timothy; Clarkson, Chris; Bull, Philip
2012-08-03
The question of whether the Universe is spatially homogeneous and isotropic on the largest scales is of fundamental importance to cosmology but has not yet been answered decisively. Surprisingly, neither an isotropic primary cosmic microwave background (CMB) nor combined observations of luminosity distances and galaxy number counts are sufficient to establish such a result. The inclusion of the Sunyaev-Zel'dovich effect in CMB observations, however, dramatically improves this situation. We show that even a solitary observer who sees an isotropic blackbody CMB can conclude that the Universe is homogeneous and isotropic in their causal past when the Sunyaev-Zel'dovich effect is present. Critically, however, the CMB must either be viewed for an extended period of time, or CMB photons that have scattered more than once must be detected. This result provides a theoretical underpinning for testing the cosmological principle with observations of the CMB alone.
Macroscopic simulation of isotropic permanent magnets
International Nuclear Information System (INIS)
Bruckner, Florian; Abert, Claas; Vogler, Christoph; Heinrichs, Frank; Satz, Armin; Ausserlechner, Udo; Binder, Gernot; Koeck, Helmut; Suess, Dieter
2016-01-01
Accurate simulations of isotropic permanent magnets require taking the magnetization process into account and considering the anisotropic, nonlinear, and hysteretic material behaviour near the saturation configuration. An efficient method for the solution of the magnetostatic Maxwell equations including the description of isotropic permanent magnets is presented. The algorithm can easily be implemented on top of existing finite element methods and does not require a full characterization of the hysteresis of the magnetic material. Stray-field measurements of an isotropic permanent magnet and simulation results are in good agreement and highlight the importance of a proper description of the isotropic material. Highlights: • Simulations of isotropic permanent magnets. • Accurate calculation of remanence magnetization and stray field. • Comparison with stray-field measurements and anisotropic magnet simulations. • Efficient 3D FEM-BEM coupling for solution of Maxwell equations.
International Nuclear Information System (INIS)
Fernandez, J.E.; Molinari, V.G.; Sumini, M.
1988-01-01
In the frame of the multiple applications of X-ray techniques, a detailed description of photon transport under several boundary conditions in condensed media is of utmost importance. In this work the photon transport equation for a homogeneous specimen of infinite thickness is considered and an exact iterative solution is reported, which is universally valid for all types of interactions because of its independence of the shape of the interaction kernel. As a test probe we use an especially simple elastic scattering expression that makes possible the exact calculation of the first two orders of the solution. It is shown that the second order does not produce any significant improvement over the first one. Due to its particular characteristics, the first-order solution for the simplified kernel can be extended to include the form factor, thus giving a more realistic description of the coherent scattering of monochromatic radiation by bound electrons. The relevant effects of the scattering anisotropy are also made evident when they are contrasted with the isotropic solution calculated in the same way. (author)
Empirical isotropic chemical shift surfaces
International Nuclear Information System (INIS)
Czinki, Eszter; Csaszar, Attila G.
2007-01-01
A list of proteins is given for which spatial structures, with a resolution better than 2.5 Å, are known from entries in the Protein Data Bank (PDB) and isotropic chemical shift (ICS) values are known from the RefDB database related to the Biological Magnetic Resonance Bank (BMRB) database. The structures chosen provide, with unknown uncertainties, dihedral angles φ and ψ characterizing the backbone structure of the residues. The joint use of experimental ICSs of the same residues within the proteins, again with mostly unknown uncertainties, and ab initio ICS(φ,ψ) surfaces obtained for the model peptides For-(L-Ala)_n-NH_2, with n = 1, 3, and 5, resulted in so-called empirical ICS(φ,ψ) surfaces for all major nuclei of the 20 naturally occurring α-amino acids. Of the many empirical surfaces determined, it is the 13Cα ICS(φ,ψ) surface which seems most promising for identifying the major secondary structure types: α-helix, β-strand, left-handed helix (α_D), and polyproline-II. Detailed tests suggest that Ala is a good model for many naturally occurring α-amino acids. Two-dimensional empirical 13Cα-1Hα ICS(φ,ψ) correlation plots, obtained so far only from computations on small peptide models, suggest the utility of the experimental information contained therein, and thus they should provide useful constraints for structure determinations of proteins.
Isotropic stars in general relativity
International Nuclear Information System (INIS)
Mak, M.K.; Harko, T.
2013-01-01
We present a general solution of the Einstein gravitational field equations for the static spherically symmetric gravitational interior space-time of an isotropic fluid sphere. The solution is obtained by transforming the pressure isotropy condition, a second order ordinary differential equation, into a Riccati type first order differential equation, and using a general integrability condition for the Riccati equation. This allows us to obtain an exact non-singular solution of the interior field equations for a fluid sphere, expressed in the form of infinite power series. The physical features of the solution are studied in detail numerically by truncating the infinite series expansions, retaining only n=21 terms in the power series representations of the relevant astrophysical parameters. In the present model all physical quantities (density, pressure, speed of sound, etc.) are finite at the center of the sphere. The physical behavior of the solution essentially depends on the equation of state of the dense matter at the center of the star. The stability properties of the model are also analyzed in detail for a number of central equations of state, and it is shown that it is stable with respect to radial adiabatic perturbations. The astrophysical analysis indicates that this solution can be used as a realistic model for static general relativistic high density objects, like neutron stars. (orig.)
Global Polynomial Kernel Hazard Estimation
DEFF Research Database (Denmark)
Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch
2015-01-01
This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...
Bruemmer, David J [Idaho Falls, ID
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between operator intervention and robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize operator intervention and minimize robot initiative, and an autonomous mode configured to minimize operator intervention and maximize robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.
Automatic performance tuning of parallel and accelerated seismic imaging kernels
Haberdar, Hakan
2014-01-01
With the increased complexity and diversity of mainstream high-performance computing systems, significant effort is required to tune parallel applications in order to achieve the best possible performance for each particular platform. This task is becoming more and more challenging and requires a larger set of skills. Automatic performance tuning is becoming a must for optimizing applications such as Reverse Time Migration (RTM), widely used in seismic imaging for oil and gas exploration. An empirical-search-based auto-tuning approach is applied to the MPI communication operations of the parallel isotropic and tilted transverse isotropic kernels. The application of auto-tuning using the Abstract Data and Communication Library improved the performance of the MPI communications as well as developer productivity by providing a higher level of abstraction. Keeping productivity in mind, we opted for pragma-based programming for accelerated computation on the latest accelerator architectures, such as GPUs, using the fairly new OpenACC standard. The same auto-tuning approach is also applied to the OpenACC-accelerated seismic code to optimize the compute-intensive kernel of the Reverse Time Migration application. This technique improved the performance of the original code and its ability to adapt to different execution environments.
Ceramography of Irradiated tristructural isotropic (TRISO) Fuel from the AGR-2 Experiment
Energy Technology Data Exchange (ETDEWEB)
Rice, Francine Joyce [Idaho National Lab. (INL), Idaho Falls, ID (United States); Stempien, John Dennis [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-09-01
Ceramography was performed on cross sections from four tristructural isotropic (TRISO) coated-particle fuel compacts taken from the AGR-2 experiment, which was irradiated between June 2010 and October 2013 in the Advanced Test Reactor (ATR). The fuel compacts examined in this study contained TRISO-coated particles with either uranium oxide (UO2) kernels or uranium oxide/uranium carbide (UCO) kernels that were irradiated to final burnup values between 9.0 and 11.1% FIMA. These examinations are intended to explore kernel and coating morphology evolution during irradiation, including kernel porosity, swelling, and migration, and irradiation-induced coating fracture and separation. Variations in behavior within a specific cross section, which could be related to temperature or burnup gradients within the fuel compact, are also explored. The criteria for categorizing post-irradiation particle morphologies developed for the AGR-1 ceramographic exams were applied to the particles in the AGR-2 compacts examined. Results are compared with similar investigations performed as part of the earlier AGR-1 irradiation experiment. This paper presents the results of the AGR-2 examinations and discusses the key implications for fuel irradiation performance.
Gao, Zhiwen; Zhou, Youhe
2015-04-01
A real fundamental solution for the fracture problem of a transversely isotropic high-temperature superconductor (HTS) strip is obtained. The superconductor E-J constitutive law is characterized by the Bean model, where the critical current density is independent of the flux density. Fracture analysis is performed by the method of singular integral equations, which are solved numerically by the Gauss-Lobatto-Chebyshev collocation method. To guarantee satisfactory accuracy, the convergence behavior of the kernel function is investigated. Numerical results for the fracture parameters are obtained, and the effects of the geometric characteristics, applied magnetic field, and critical current density on the stress intensity factors (SIFs) are discussed.
Mixture Density Mercer Kernels: A Method to Learn Kernels
National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...
7 CFR 981.9 - Kernel weight.
2010-01-01
Kernel weight means the weight of kernels, including...
7 CFR 51.2295 - Half kernel.
2010-01-01
Half kernel means the separated half of a kernel with not more than one-eighth broken off.
Djebbi, Ramzi
2014-08-05
Multi-parameter inversion in anisotropic media suffers from the inherent trade-off between the anisotropic parameters, even under the acoustic assumption. Multi-component data, now often acquired in ocean-bottom and land surveys, provide additional information capable of resolving anisotropic parameters under the acoustic approximation. Based on the Born scattering approximation, we develop formulas capable of characterizing the radiation patterns for the acoustic pseudo-pure-mode P-waves. Though commonly reserved for elastic fields, we use displacement fields to constrain the acoustic vertical transverse isotropic (VTI) representation of the medium. Using the asymptotic Green's functions and a horizontal reflector, we derive the radiation patterns for perturbations in the anisotropic media. The radiation pattern for the anellipticity parameter η is identically zero for the horizontal displacement. This allows us to dedicate that component to inverting for velocity and δ. Computing the traveltime sensitivity kernels based on the unwrapped phase confirms the radiation-pattern observations and provides the model-wavenumber behavior of the update.
A kernel version of spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2009-01-01
Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
kernel oil by lipolytic organisms
African Journals Online (AJOL)
USER
2010-08-02
Aug 2, 2010 ... Rancidity of extracted cashew oil was observed with cashew kernel stored at 70, 80 and 90% .... method of American Oil Chemist Society AOCS (1978) using glacial ..... changes occur and volatile products are formed that are.
Multivariate and semiparametric kernel regression
Härdle, Wolfgang; Müller, Marlene
1997-01-01
The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...
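A minimal Nadaraya-Watson estimator, the degree-zero special case of the local polynomial fitting discussed above, fits in a few lines; the Gaussian kernel, the bandwidth, and the synthetic data are illustrative choices:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate at the points x0.
    Gaussian kernel with bandwidth h (an illustrative choice)."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)   # locally weighted average of the responses

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 2 * np.pi, 300)
y = np.sin(x) + rng.normal(0.0, 0.2, x.size)
grid = np.linspace(0.5, 2 * np.pi - 0.5, 50)
fit = nadaraya_watson(grid, x, y, h=0.3)
print(np.max(np.abs(fit - np.sin(grid))))   # small away from the boundaries
```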
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole E.
The density function of the gamma distribution is used as the shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second-order structure function under such a model, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green's function.
Isotropic Growth of Graphene toward Smoothing Stitching.
Zeng, Mengqi; Tan, Lifang; Wang, Lingxiang; Mendes, Rafael G; Qin, Zhihui; Huang, Yaxin; Zhang, Tao; Fang, Liwen; Zhang, Yanfeng; Yue, Shuanglin; Rümmeli, Mark H; Peng, Lianmao; Liu, Zhongfan; Chen, Shengli; Fu, Lei
2016-07-26
The quality of graphene grown via chemical vapor deposition still falls far short of its theoretical potential due to the inevitable formation of grain boundaries. The design of a single-crystal substrate with an anisotropic twofold symmetry for the unidirectional alignment of graphene seeds would be a promising way to eliminate grain boundaries at the wafer scale. However, such a delicate process is easily terminated by the obstruction of defects or impurities. Here we investigated the isotropic growth behavior of graphene single crystals by melting the growth substrate to obtain an amorphous isotropic surface, which does not offer any specific grain-orientation induction or a preponderant growth rate toward any particular direction during graphene growth. The as-obtained graphene grains are isotropically round with mixed edges that exhibit high activity. The orientation of adjacent grains can easily self-adjust to match each other smoothly over a liquid catalyst with facile atom delocalization, owing to the low rotational steric hindrance of the isotropic grains, thus achieving smooth stitching of the adjacent graphene. The adverse effects of grain boundaries are therefore eliminated and the excellent transport performance of graphene is better guaranteed. Moreover, such an isotropic growth mode can be extended to other layered nanomaterials, such as hexagonal boron nitride and transition-metal chalcogenides, to obtain large-size intrinsic films with low defect densities.
Pant, S; Laliberte, J; Martinez, M.J.; Rocha, B.
2016-01-01
In this paper, a one-sided, in situ method based on the time of flight measurement of ultrasonic waves was described. The primary application of this technique was to non-destructively measure the stiffness properties of isotropic and transversely isotropic materials. The method consists of
Influence Function and Robust Variant of Kernel Canonical Correlation Analysis
Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping
2017-01-01
Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...
Geometric Models for Isotropic Random Porous Media: A Review
Directory of Open Access Journals (Sweden)
Helmut Hermann
2014-01-01
Models for random porous media are considered. The models are isotropic both from the local and the macroscopic point of view; that is, the pores have spherical shape or their surface shows piecewise spherical curvature, and there is no macroscopic gradient of any geometrical feature. Both closed-pore and open-pore systems are discussed. The Poisson grain model, the model of hard spheres packing, and the penetrable sphere model are used; variable size distribution of the pores is included. A parameter is introduced which controls the degree of open-porosity. Besides systems built up by a single solid phase, models for porous media with the internal surface coated by a second phase are treated. Volume fraction, surface area, and correlation functions are given explicitly where applicable; otherwise numerical methods for determination are described. Effective medium theory is applied to calculate physical properties for the models such as isotropic elastic moduli, thermal and electrical conductivity, and static dielectric constant. The methods presented are exemplified by applications: small-angle scattering of systems showing fractal-like behavior in limited ranges of linear dimension, optimization of nanoporous insulating materials, and improvement of properties of open-pore systems by atomic layer deposition of a second phase on the internal surface.
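For the penetrable (fully overlapping) sphere model mentioned above, the volume fraction is available in closed form: for Poisson-distributed centres of intensity ρ and sphere radius R, the covered fraction is φ = 1 − exp(−ρ·(4/3)πR³). A small Monte Carlo check of this formula on a periodic unit cube, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)
rho, R = 100.0, 0.12                       # Poisson intensity and sphere radius (illustrative)
centres = rng.uniform(0.0, 1.0, (rng.poisson(rho), 3))
points = rng.uniform(0.0, 1.0, (5000, 3))  # Monte Carlo test points in the unit cube
d = points[:, None, :] - centres[None, :, :]
d -= np.round(d)                           # periodic minimum-image convention
covered = (np.sum(d**2, axis=2) <= R**2).any(axis=1)

phi_mc = covered.mean()
phi_theory = 1.0 - np.exp(-rho * (4.0 / 3.0) * np.pi * R**3)
print(abs(phi_mc - phi_theory))            # small single-realisation statistical error
```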
Kernel versions of some orthogonal transformations
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
Kernel versions of orthogonal transformations such as principal components are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...
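A compact sketch of kernel PCA as described above: the Gram matrix of an RBF kernel is double-centred (the implicit feature-space centring) and eigendecomposed; the data, kernel width, and component count are illustrative assumptions:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA sketch: Gaussian (RBF) kernel, double-centred Gram
    matrix, leading eigenvectors scaled to feature-space projections."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                          # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)         # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # projections of the training points onto the first components
    return vecs[:, :n_components] * np.sqrt(np.maximum(vals[:n_components], 0.0))

rng = np.random.default_rng(2)
# two concentric rings: linearly inseparable, a classic kernel-PCA example
t = rng.uniform(0.0, 2 * np.pi, 200)
r = np.repeat([1.0, 3.0], 100)
X = np.c_[r * np.cos(t), r * np.sin(t)] + rng.normal(0.0, 0.05, (200, 2))
Z = kernel_pca(X, n_components=2, gamma=0.5)
print(Z.shape)   # (200, 2)
```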
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
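The computational virtue of circulant structure that such approximations exploit can be seen in one dimension: a stationary kernel on an equispaced periodic grid yields a circulant matrix, whose matrix-vector products cost O(n log n) via the FFT. This is a single-level illustration under assumed parameters, not the paper's multilevel algorithm:

```python
import numpy as np

n = 256
x = np.arange(n) / n
# Gaussian kernel of wrapped (periodic) distances -> a circulant matrix
first_col = np.exp(-(np.minimum(x, 1.0 - x) ** 2) / (2.0 * 0.1 ** 2))
v = np.random.default_rng(7).normal(size=n)

# FFT-based product C v, where C = circulant(first_col): circular convolution
Cv_fft = np.real(np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(v)))

# direct O(n^2) product for verification
C = np.array([np.roll(first_col, k) for k in range(n)]).T
Cv_direct = C @ v
print(np.max(np.abs(Cv_fft - Cv_direct)))   # agreement up to round-off
```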
Model Selection in Kernel Ridge Regression
DEFF Research Database (Denmark)
Exterkate, Peter
Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely...
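A minimal Gaussian-kernel ridge regression in the spirit of the record above; the regularisation parameter `lam` and kernel width `sigma` play the role of the tuning parameters discussed, and all data and settings are illustrative assumptions:

```python
import numpy as np

def krr_fit_predict(X, y, X_new, lam, sigma):
    """Kernel ridge regression with a Gaussian (RBF) kernel:
    solve (K + lam*n*I) alpha = y, predict with the cross-kernel."""
    def rbf(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-d2 / (2.0 * sigma**2))
    n = y.size
    alpha = np.linalg.solve(rbf(X, X) + lam * n * np.eye(n), y)
    return rbf(X_new, X) @ alpha

rng = np.random.default_rng(3)
X = rng.uniform(-3.0, 3.0, (200, 1))
y = np.sinc(X[:, 0]) + rng.normal(0.0, 0.1, 200)
grid = np.linspace(-2.5, 2.5, 41)[:, None]
pred = krr_fit_predict(X, y, grid, lam=1e-3, sigma=0.5)
print(np.max(np.abs(pred - np.sinc(grid[:, 0]))))   # small fitting error
```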
Texture of low temperature isotropic pyrocarbons
International Nuclear Information System (INIS)
Pelissier, Joseph; Lombard, Louis.
1976-01-01
Isotropic pyrocarbon deposited on fuel particles was studied by transmission electron microscopy in order to determine its texture. The material consists of an agglomerate of spherical growth features similar to those of carbon black. The spherical growth features are formed from the cristallites of turbostratic carbon and the distribution gives an isotropic structure. Neutron irradiation modifies the morphology of the pyrocarbon. The spherical growth features are deformed and the coating becomes strongly anisotropic. The transformation leads to the rupture of the coating caused by strong irradiation doses [fr
Integral equations with contrasting kernels
Directory of Open Access Journals (Sweden)
Theodore Burton
2008-01-01
Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$, and these kernels have exactly opposite weighting effects. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise, and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems, since we would like to have $a(t)$ as the sum of a bounded but badly behaved function and a large well-behaved function.
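The two contrasting kernels can be compared numerically by plugging them into a simple forward trapezoidal solver for the Volterra equation. This discretisation sketch is our own, not part of the paper's analysis:

```python
import numpy as np

def solve_volterra(a, kernel, T=5.0, n=500):
    """Solve x(t) = a(t) - int_0^t C(t,s) x(s) ds on [0, T] by the
    trapezoidal rule, stepping forward in t."""
    t = np.linspace(0, T, n + 1)
    h = t[1] - t[0]
    x = np.zeros(n + 1)
    x[0] = a(t[0])
    for i in range(1, n + 1):
        w = np.full(i + 1, h); w[0] = w[-1] = h / 2   # trapezoid weights
        K = kernel(t[i], t[:i + 1])
        # x[i] appears on both sides of the equation; solve the scalar relation
        rhs = a(t[i]) - np.dot(w[:-1] * K[:-1], x[:i])
        x[i] = rhs / (1 + w[-1] * K[-1])
    return t, x

# the two kernels from the abstract and a small forcing term
C_star = lambda t, s: np.log(np.e + (t - s))
D_star = lambda t, s: 1.0 / (1.0 + (t - s))
forcing = lambda t: np.exp(-t)

t, xC = solve_volterra(forcing, C_star)
_, xD = solve_volterra(forcing, D_star)
```

For a small forcing term like this one, the two solutions stay close, consistent with the paper's observation that solutions are largely indistinguishable for $a\in L^2[0,\infty)$; the differences emerge as the magnitude of $a(t)$ grows.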
Kernel learning algorithms for face recognition
Li, Jun-Bao; Pan, Jeng-Shyang
2013-01-01
Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. It also focuses on the theoretical derivation, the system framework, and experiments involving kernel-based face recognition, including algorithms of kernel-based face recognition and the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning areas with advanced face recognition methods and its new…
Model selection for Gaussian kernel PCA denoising
DEFF Research Database (Denmark)
Jørgensen, Kasper Winther; Hansen, Lars Kai
2012-01-01
We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR…
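The permutation-test idea behind Parallel Analysis translates directly to the kernel setting. The sketch below is our own reconstruction of the principle, not the authors' code: eigenvalues of the centred Gram matrix are compared with those obtained after permuting each feature column independently, and components exceeding the permutation quantile are retained.

```python
import numpy as np

def centered_gram(X, sigma):
    """Centred Gaussian Gram matrix, as used in kernel PCA."""
    sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    K = np.exp(-sq / (2 * sigma**2))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    return J @ K @ J

def kpa_order(X, sigma, n_perm=20, quantile=95, seed=0):
    """Parallel-Analysis-style model order selection: retain components whose
    Gram eigenvalue exceeds the given percentile of eigenvalues obtained
    after permuting each feature column independently (which destroys the
    joint structure while preserving the marginals)."""
    rng = np.random.default_rng(seed)
    eig = np.linalg.eigvalsh(centered_gram(X, sigma))[::-1]
    null = []
    for _ in range(n_perm):
        Xp = np.column_stack([rng.permutation(col) for col in X.T])
        null.append(np.linalg.eigvalsh(centered_gram(Xp, sigma))[::-1])
    thresh = np.percentile(np.array(null), quantile, axis=0)
    return int(np.sum(eig > thresh))
```

The paper's kPA additionally tunes the kernel scale sigma; the same routine can simply be evaluated over a small grid of scales.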
RTOS kernel in portable electrocardiograph
Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.
2011-12-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which a uCOS-II RTOS can be embedded. The decision to use the kernel is based on its benefits, the license for educational use, and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and evaluation of the resources used by each process. After this feasibility analysis, the code is migrated from a cyclic structure to one based on separate processes or tasks able to synchronize events, resulting in an electrocardiograph running on one Central Processing Unit (CPU) under an RTOS.
RTOS kernel in portable electrocardiograph
International Nuclear Information System (INIS)
Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A
2011-01-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which a uCOS-II RTOS can be embedded. The decision to use the kernel is based on its benefits, the license for educational use, and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and evaluation of the resources used by each process. After this feasibility analysis, the code is migrated from a cyclic structure to one based on separate processes or tasks able to synchronize events, resulting in an electrocardiograph running on one Central Processing Unit (CPU) under an RTOS.
DEFF Research Database (Denmark)
Walder, Christian; Henao, Ricardo; Mørup, Morten
We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within-class variances similar to Fisher discriminant analysis. The second, LSKPCA, is a hybrid of least squares regression and kernel PCA. The final, LR-KPCA, is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets.
Model selection in kernel ridge regression
DEFF Research Database (Denmark)
Exterkate, Peter
2013-01-01
Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated with all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study…
Mapping of moveout in tilted transversely isotropic media
Stovas, A.; Alkhalifah, Tariq Ali
2013-01-01
The computation of traveltimes in a transversely isotropic medium with a tilted symmetry axis (tilted transversely isotropic) is very important both for modelling and inversion. We develop a simple analytical procedure to map the traveltime function from a transversely isotropic medium with a vertical symmetry axis (vertical transversely isotropic) to a tilted transversely isotropic medium by applying point-by-point mapping of the traveltime function. This approach can be used for kinematic modelling and inversion in layered tilted transversely isotropic media. © 2013 European Association of Geoscientists & Engineers.
Mapping of moveout in tilted transversely isotropic media
Stovas, A.
2013-09-09
The computation of traveltimes in a transversely isotropic medium with a tilted symmetry axis (tilted transversely isotropic) is very important both for modelling and inversion. We develop a simple analytical procedure to map the traveltime function from a transversely isotropic medium with a vertical symmetry axis (vertical transversely isotropic) to a tilted transversely isotropic medium by applying point-by-point mapping of the traveltime function. This approach can be used for kinematic modelling and inversion in layered tilted transversely isotropic media. © 2013 European Association of Geoscientists & Engineers.
Multiple Kernel Learning with Data Augmentation
2016-11-22
JMLR: Workshop and Conference Proceedings 63:49–64, 2016, ACML 2016. Multiple Kernel Learning with Data Augmentation. Khanh Nguyen nkhanh@deakin.edu.au…University, Australia. Editors: Robert J. Durrant and Kee-Eung Kim. Abstract: The motivations of the multiple kernel learning (MKL) approach are to increase kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to…
A kernel version of multivariate alteration detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2013-01-01
Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.
Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.
2012-01-01
The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an
Concentric layered Hermite scatterers
Astheimer, Jeffrey P.; Parker, Kevin J.
2018-05-01
The long wavelength limit of scattering from spheres has a rich history in optics, electromagnetics, and acoustics. Recently it was shown that a common integral kernel pertains to formulations of weak spherical scatterers in both the acoustic and electromagnetic regimes. Furthermore, the backscattered amplitude as a function of wavenumber k was shown to follow power laws higher than the Rayleigh scattering k² power law when the inhomogeneity had a material composition that conformed to a Gaussian-weighted Hermite polynomial. Although this class of scatterers, called Hermite scatterers, is plausible, it may be simpler to manufacture scatterers with a core surrounded by one or more layers. In this case the inhomogeneous material property conforms to a piecewise-constant function. We demonstrate that the necessary and sufficient conditions for supra-Rayleigh scattering power laws in this case can be stated simply by considering moments of the inhomogeneity function and its spatial transform. This development opens an additional path for the construction and use of scatterers with unique power law behavior.
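The moment condition can be made concrete for the simplest layered case, a core plus one shell with piecewise-constant contrast: choosing the shell amplitude so that the lowest radial moment of the inhomogeneity vanishes is the type of condition the authors describe. The normalisation and function names below are our own sketch, not the paper's formulation:

```python
import numpy as np

def radial_moment(radii, amps, k):
    """k-th radial moment int_0^R f(r) r^(k+2) dr of a piecewise-constant
    radial profile f (the extra r^2 comes from the spherical volume
    element); radii are the outer edges of the core and each layer."""
    edges = np.concatenate([[0.0], radii])
    return sum(a * (edges[i + 1]**(k + 3) - edges[i]**(k + 3)) / (k + 3)
               for i, a in enumerate(amps))

def balance_layer(r_core, r_outer, a_core=1.0):
    """Choose the shell amplitude so that the 0th moment vanishes, the
    simplest necessary condition sketched here for raising the scattering
    power law above the Rayleigh behaviour."""
    m_core = r_core**3 / 3
    m_layer = (r_outer**3 - r_core**3) / 3
    return -a_core * m_core / m_layer
```

For a core of radius 1 and shell of outer radius 2, the balancing shell amplitude is -1/7 of the core amplitude, and the zeroth moment of the resulting profile vanishes identically.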
Complex use of cottonseed kernels
Energy Technology Data Exchange (ETDEWEB)
Glushenkova, A I
1977-01-01
A review with 41 references is made of the manufacture of oil, protein, and other products from cottonseed, the effects of gossypol on protein yield and quality, and the technology of gossypol removal. A process eliminating thermal treatment of the kernels and permitting the production of oil, proteins, phytin, gossypol, sugar, sterols, phosphatides, tocopherols, and residual shells and bagasse is described.
Kernel regression with functional response
Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe
2011-01-01
We consider the kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost-complete convergence are stated as a function of the small ball probability of the predictor and as a function of the entropy of the set on which uniformity is obtained.
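A toy version of such an estimator helps fix ideas: with both predictor and response curves sampled on common grids, a Nadaraya-Watson-type estimate weights the response curves by a kernel applied to the L2 distance between predictor curves. The Gaussian weight and all names below are our own assumptions; the abstract itself gives no algorithm.

```python
import numpy as np

def functional_nw(X_curves, Y_curves, x_new, h):
    """Nadaraya-Watson estimate with a functional covariate and a functional
    response: weights come from the L2 distance between predictor curves;
    the prediction is the weighted average of the response curves."""
    d = np.sqrt(((X_curves - x_new[None, :])**2).mean(axis=1))  # L2 distances
    w = np.exp(-(d / h)**2)                                     # Gaussian kernel
    w = w / w.sum()
    return w @ Y_curves                                         # predicted curve
```

As the bandwidth h shrinks, the prediction concentrates on the response curve of the nearest predictor curve, which is the expected nearest-neighbour limit.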
GRIM : Leveraging GPUs for Kernel integrity monitoring
Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris
2016-01-01
Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious
Paramecium: An Extensible Object-Based Kernel
van Doorn, L.; Homburg, P.; Tanenbaum, A.S.
1995-01-01
In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection
Local Observed-Score Kernel Equating
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias (as defined by Lord's criterion of equity) and percent relative error. The local kernel item response…
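The kernel equating framework referenced here rests on continuizing discrete score distributions with a kernel and then equating equipercentile-wise. A minimal sketch (Gaussian continuization plus a bisection inverse; the bandwidth and all names are our own choices, not a specific operational procedure):

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(x, scores, probs, h):
    """Gaussian-kernel continuization of a discrete score distribution:
    F(x) = sum_j p_j * Phi((x - s_j) / h)."""
    return probs @ norm.cdf((x - scores[:, None]) / h)

def equate(x, scores_x, px, scores_y, py, h=0.6):
    """Equipercentile equating via kernel-continuized CDFs:
    e(x) = F_Y^{-1}(F_X(x)), the inverse found by bisection on the
    strictly increasing continuized CDF."""
    target = kernel_cdf(np.atleast_1d(x), scores_x, px, h)
    lo = np.full_like(target, scores_y.min() - 5.0)
    hi = np.full_like(target, scores_y.max() + 5.0)
    for _ in range(60):
        mid = (lo + hi) / 2
        too_low = kernel_cdf(mid, scores_y, py, h) < target
        lo = np.where(too_low, mid, lo)
        hi = np.where(too_low, hi, mid)
    return (lo + hi) / 2
```

A sanity property of any equating function is identity when the two forms have the same score distribution, which the sketch reproduces.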
Veto-Consensus Multiple Kernel Learning
Zhou, Y.; Hu, N.; Spanos, C.J.
2016-01-01
We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The
Diffraction of SH-waves by topographic features in a layered transversely isotropic half-space
Ba, Zhenning; Liang, Jianwen; Zhang, Yanju
2017-01-01
The scattering of plane SH-waves by topographic features in a layered transversely isotropic (TI) half-space is investigated by using an indirect boundary element method (IBEM). Firstly, the anti-plane dynamic stiffness matrix of the layered TI half-space is established and the free fields are solved by using the direct stiffness method. Then, Green's functions are derived for uniformly distributed loads acting on an inclined line in a layered TI half-space and the scattered fields are constructed with the deduced Green's functions. Finally, the free fields are added to the scattered ones to obtain the global dynamic responses. The method is verified by comparing results with the published isotropic ones. Both the steady-state and transient dynamic responses are evaluated and discussed. Numerical results in the frequency domain show that surface motions for the TI media can be significantly different from those for the isotropic case, which are strongly dependent on the anisotropy property, incident angle and incident frequency. Results in the time domain show that the material anisotropy has important effects on the maximum duration and maximum amplitudes of the time histories.
Integral equations with difference kernels on finite intervals
Sakhnovich, Lev A
2015-01-01
This book focuses on solving integral equations with difference kernels on finite intervals. The corresponding problem on the semiaxis was previously solved by N. Wiener–E. Hopf and by M.G. Krein. The problem on finite intervals, though significantly more difficult, may be solved using our method of operator identities. This method is also actively employed in inverse spectral problems, operator factorization and nonlinear integral equations. Applications of the obtained results to optimal synthesis, light scattering, diffraction, and hydrodynamics problems are discussed in this book, which also describes how the theory of operators with difference kernels is applied to stable processes and used to solve the famous M. Kac problems on stable processes. In this second edition these results are extensively generalized and include the case of all Levy processes. We present the convolution expression for the well-known Ito formula of the generator operator, a convolution expression that has proven to be fruitful...
Directory of Open Access Journals (Sweden)
Senyue Zhang
2016-01-01
Full Text Available Because the kernel function of an extreme learning machine (ELM) is strongly correlated with its performance, a novel extreme learning machine based on a generalized triangle Hermitian kernel function is proposed in this paper. First, the generalized triangle Hermitian kernel function was constructed by using the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel function was proved to be a valid kernel function for extreme learning machines. Then, the learning methodology of the extreme learning machine based on the proposed kernel function is presented. The biggest advantage of the proposed kernel is that its kernel parameter values are chosen only from the natural numbers, which can greatly shorten the computational time of parameter optimization and retain more of the sample data structure information. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experimental results demonstrate that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
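In its kernel form, ELM training reduces to a single regularised linear solve. The sketch below uses a plain triangular kernel as a hypothetical stand-in for the paper's generalized triangle Hermitian kernel (the data, the regularisation constant C, and the integer parameter m, echoing the natural-number kernel parameter mentioned above, are all our own assumptions):

```python
import numpy as np

def triangular_kernel(X, Z, m=2):
    """Triangular (Bartlett) kernel max(0, 1 - |x - z|/m), a valid positive
    definite kernel on the real line; used here as a simple stand-in."""
    d = np.abs(X[:, None] - Z[None, :])
    return np.maximum(0.0, 1.0 - d / m)

def kernel_elm_fit(X, T, C=100.0, m=2):
    """Kernel ELM training: output weights beta = (K + I/C)^{-1} T."""
    K = triangular_kernel(X, X, m)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kernel_elm_predict(X_new, X, beta, m=2):
    """Prediction f(x) = k(x, X) beta."""
    return triangular_kernel(X_new, X, m) @ beta
```

Because m ranges over the natural numbers only, a parameter search degenerates to trying a handful of integers, which is the speed advantage the abstract emphasises.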
Quantum scattering at low energies
DEFF Research Database (Denmark)
Derezinski, Jan; Skibsted, Erik
2009-01-01
For a class of negative slowly decaying potentials, including V(x):=−γ|x|^{−μ} with 0 < μ …, we study quantum mechanical scattering theory in the low-energy regime. Using appropriate modifiers of the Isozaki–Kitada type we show that scattering theory is well behaved on the whole continuous spectrum of the Hamiltonian, including the energy 0. We show that the modified scattering matrices S(λ) are well-defined and strongly continuous down to the zero energy threshold. Similarly, we prove that the modified wave matrices and generalized eigenfunctions are norm continuous down to zero energy if we use… of the kernel of S(λ) experiences an abrupt change in passing from positive energies λ to the limiting energy λ=0. This change corresponds to the behaviour of the classical orbits. Under stronger conditions one can extract the leading term of the asymptotics of the kernel of S(λ) at its singularities.
Buck, Christoph; Kneib, Thomas; Tkaczick, Tobias; Konstabel, Kenn; Pigeot, Iris
2015-12-22
Built environment studies provide broad evidence that urban characteristics influence physical activity (PA). However, findings are still difficult to compare, due to inconsistent measures assessing urban point characteristics and varying definitions of spatial scale. Both were found to influence the strength of the association between the built environment and PA. We simultaneously evaluated the effect of kernel approaches and network-distances to investigate the association between urban characteristics and physical activity depending on spatial scale and intensity measure. We assessed urban measures of point characteristics such as intersections, public transit stations, and public open spaces in ego-centered network-dependent neighborhoods based on geographical data of one German study region of the IDEFICS study. We calculated point intensities using the simple intensity and kernel approaches based on fixed bandwidths, cross-validated bandwidths including isotropic and anisotropic kernel functions, and adaptive bandwidths that adjust for residential density. We distinguished six network-distances from 500 m up to 2 km to calculate each intensity measure. A log-gamma regression model was used to investigate the effect of each urban measure on moderate-to-vigorous physical activity (MVPA) of 400 2- to 9.9-year-old children who participated in the IDEFICS study. Models were stratified by sex and age groups, i.e. pre-school children (2 to …) … kernel approaches. The smallest variation in effect estimates over network-distances was found for kernel intensity measures based on isotropic and anisotropic cross-validated bandwidth selection. We found a strong variation in the association between the built environment and PA of children based on the choice of intensity measure and network-distance. Kernel intensity measures provided stable results over various scales and improved the assessment compared to the simple intensity measure. Considering different spatial…
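The contrast between the simple intensity and a kernel intensity of point characteristics can be sketched in a few lines. This illustration uses Euclidean rather than network distance and a fixed bandwidth tied to the buffer radius; all names and the bandwidth rule are our own simplifications:

```python
import numpy as np

def simple_intensity(points, center, radius):
    """Simple intensity: count of point characteristics inside the buffer,
    per unit buffer area."""
    d = np.linalg.norm(points - center, axis=1)
    return np.sum(d <= radius) / (np.pi * radius**2)

def kernel_intensity(points, center, radius):
    """Fixed-bandwidth Gaussian kernel intensity: points nearer the centre
    of the neighborhood receive more weight than points near its edge."""
    d = np.linalg.norm(points - center, axis=1)
    h = radius / 2
    w = np.exp(-(d / h)**2 / 2) / (2 * np.pi * h**2)
    return np.sum(w[d <= radius])
```

The simple intensity jumps discontinuously as points cross the buffer boundary, whereas the kernel intensity down-weights them smoothly, which is one reason the kernel measures behave more stably across spatial scales.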
Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan
2018-05-01
With an interest in enhancing the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and the simple sugar profile was estimated by using partial least squares regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from the control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.
Computations of Quasiconvex Hulls of Isotropic Sets
Czech Academy of Sciences Publication Activity Database
Heinz, S.; Kružík, Martin
2017-01-01
Vol. 24, No. 2 (2017), pp. 477-492 ISSN 0944-6532 R&D Projects: GA ČR GA14-15264S; GA ČR(CZ) GAP201/12/0671 Institutional support: RVO:67985556 Keywords: quasiconvexity * isotropic compact sets * matrices Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.496, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/kruzik-0474874.pdf
Depression of nonlinearity in decaying isotropic turbulence
International Nuclear Information System (INIS)
Kraichnan, R.H.; Panda, R.
1988-01-01
Simulations of decaying isotropic Navier--Stokes turbulence exhibit depression of the normalized mean-square nonlinear term to 57% of the value for a Gaussianly distributed velocity field with the same instantaneous velocity spectrum. Similar depression is found for dynamical models with random coupling coefficients (modified Betchov models). This suggests that the depression is dynamically generic rather than specifically driven by alignment of velocity and vorticity
Wigner functions defined with Laplace transform kernels.
Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George
2011-10-24
We propose a new Wigner-type phase-space function using Laplace transform kernels: the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America
New criteria for isotropic and textured metals
Cazacu, Oana
2018-05-01
In this paper an isotropic yield criterion expressed in terms of both invariants of the stress deviator, J2 and J3, is proposed. This criterion involves a unique parameter, α, which depends only on the ratio between the yield stresses in uniaxial tension and pure shear. If this parameter is zero, the von Mises yield criterion is recovered; if α is positive the yield surface is interior to the von Mises yield surface, whereas when α is negative the new yield surface is exterior to it. Comparison with polycrystalline calculations using the Taylor-Bishop-Hill model [1] for randomly oriented face-centered cubic (FCC) polycrystalline metallic materials shows that this new criterion captures well the numerical yield points. Furthermore, the criterion reproduces well yielding under combined tension-shear loadings for a variety of isotropic materials. An extension of this isotropic yield criterion to account for orthotropy in yielding is developed using the generalized invariants approach of Cazacu and Barlat [2]. The new orthotropic criterion is general and applicable to three-dimensional stress states. The procedure for the identification of the material parameters is outlined. The predictive capabilities of the new orthotropic criterion are demonstrated through comparison between the model predictions and data on aluminum sheet samples.
Credit scoring analysis using kernel discriminant
Widiharih, T.; Mukid, M. A.; Mustafid
2018-05-01
Credit scoring models are an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. On the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
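A minimal sketch of a kernel discriminant classifier, in the spirit of the abstract, scores each class by a prior-weighted kernel density estimate and assigns the class with the largest score. This simplification, with the Epanechnikov kernel, is our own illustration rather than the authors' exact procedure:

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, compactly supported on [-1, 1]."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def kde(x, sample, h):
    """1-D kernel density estimate at the points x."""
    return epanechnikov((x - sample[:, None]) / h).mean(axis=0) / h

def kernel_discriminant(x, class_samples, h=0.5):
    """Assign each x to the class with the largest prior-weighted kernel
    density, a simple nonparametric discriminant rule."""
    n = sum(len(s) for s in class_samples)
    scores = [len(s) / n * kde(x, s, h) for s in class_samples]
    return np.argmax(np.vstack(scores), axis=0)
```

Swapping `epanechnikov` for a normal, biweight, or triweight kernel reproduces the comparison the paper carries out across kernels.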
Testing Infrastructure for Operating System Kernel Development
DEFF Research Database (Denmark)
Walter, Maxwell; Karlsson, Sven
2014-01-01
Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge, as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel.
Kernel parameter dependence in spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2010-01-01
kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence on the kernel width. The 2,097 samples, each covering on average 5 km², are analyzed chemically for the content of 41 elements.
Energy Technology Data Exchange (ETDEWEB)
Ghrayeb, S. Z. [Dept. of Mechanical and Nuclear Engineering, Pennsylvania State Univ., 230 Reber Building, Univ. Park, PA 16802 (United States); Ouisloumen, M. [Westinghouse Electric Company, 1000 Westinghouse Drive, Cranberry Township, PA 16066 (United States); Ougouag, A. M. [Idaho National Laboratory, MS-3860, PO Box 1625, Idaho Falls, ID 83415 (United States); Ivanov, K. N.
2012-07-01
A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)
A new kernel discriminant analysis framework for electronic nose recognition
International Nuclear Information System (INIS)
Zhang, Lei; Tian, Feng-Chun
2014-01-01
Graphical abstract: - Highlights: • This paper proposes a new discriminant analysis framework for feature extraction and recognition. • The principle of the proposed NDA is derived mathematically. • The NDA framework is coupled with kernel PCA for classification. • The proposed KNDA is compared with state-of-the-art e-Nose recognition methods. • The proposed KNDA shows the best performance in e-Nose experiments. - Abstract: Electronic nose (e-Nose) technology based on metal oxide semiconductor gas sensor arrays is widely studied for the detection of gas components. This paper proposes a new discriminant analysis framework (NDA) for dimension reduction and e-Nose recognition. In the NDA, the between-class and within-class Laplacian scatter matrices are designed from sample to sample to characterize the between-class separability and the within-class compactness, by seeking a discriminant matrix that simultaneously maximizes the between-class Laplacian scatter and minimizes the within-class Laplacian scatter. In terms of the linear separability in the high-dimensional kernel mapping space and the dimension reduction of principal component analysis (PCA), an effective kernel PCA plus NDA method (KNDA) is proposed for rapid detection of gas mixture components by an e-Nose. The NDA framework is derived in this paper, as well as the specific implementations of the proposed KNDA method in the training and recognition process. The KNDA is examined on e-Nose datasets of six kinds of gas components, and compared with state-of-the-art e-Nose classification methods. Experimental results demonstrate that the proposed KNDA method shows the best performance, with an average recognition rate and a total recognition rate of 94.14% and 95.06%, which makes it promising for feature extraction and multi-class recognition in e-Nose…
Evaluation of sintering effects on SiC-incorporated UO2 kernels under Ar and Ar–4%H2 environments
International Nuclear Information System (INIS)
Silva, Chinthaka M.; Lindemer, Terrence B.; Hunt, Rodney D.; Collins, Jack L.; Terrani, Kurt A.; Snead, Lance L.
2013-01-01
Silicon carbide (SiC) is suggested as an oxygen getter in UO 2 kernels used for tristructural isotropic (TRISO) particle fuels and as a means to prevent kernel migration during irradiation. Scanning electron microscopy and X-ray diffractometry analyses performed on sintered kernels verified that an internal gelation process can be used to incorporate SiC in UO 2 fuel kernels. Even though the presence of UC in the argon (Ar) and Ar–4%H 2 sintered samples suggested a lowering of the SiC content to 3.5 and 1.4 mol%, respectively, the presence of other silicon-related chemical phases indicates that silicon is preserved in the kernels during the sintering process. UC formation was presumed to occur by two reactions. The first was the reaction of SiC with its protective SiO 2 oxide layer on SiC grains to produce volatile SiO and free carbon that subsequently reacted with UO 2 to form UC. The second was the direct reaction of UO 2 with SiC grains to form SiO, CO, and UC. A slightly higher density and UC content were observed in the sample sintered in Ar–4%H 2 , but both atmospheres produced kernels with ∼95% of theoretical density. It is suggested that incorporating CO in the sintering gas could prevent UC formation and preserve the initial SiC content
RKRD: Runtime Kernel Rootkit Detection
Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.
In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity, but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability, and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.
Kernel Bayesian ART and ARTMAP.
Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan
2018-02-01
Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model, ARTMAP, is a powerful tool for classification. Among several improvements, such as Fuzzy- or Gaussian-based models, the state-of-the-art model is the Bayesian-based one, which resolves the drawbacks of the others. However, the Bayesian approach is known to incur high computational cost for high-dimensional data and large numbers of samples, and the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating the Kernel Bayes' Rule (KBR) and the Correntropy-Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM are able to avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computational capability to KBA and KBAM. Furthermore, the correntropy-based similarity measurement improves the noise reduction ability even in high-dimensional spaces. The simulation experiments show that KBA exhibits superior self-organizing capability compared to BA, and KBAM provides superior classification ability compared to BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.
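The correntropy-induced metric referenced above has a standard closed form under a Gaussian kernel; a minimal sketch (the bandwidth `sigma` and the test vectors are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def cim(x, y, sigma=1.0):
    """Correntropy-induced metric between two vectors.

    CIM(x, y) = sqrt(k(0) - mean_i k(x_i - y_i)) with the Gaussian kernel
    k(e) = exp(-e^2 / (2 sigma^2)); bounded in [0, 1), robust to outliers.
    """
    e = np.asarray(x, float) - np.asarray(y, float)
    corr = np.mean(np.exp(-e**2 / (2.0 * sigma**2)))
    return np.sqrt(1.0 - corr)

a = np.zeros(5)
print(cim(a, a))  # identical vectors -> 0.0
# A single huge outlier is penalized less than a moderate error everywhere,
# which is the robustness property exploited by KBA/KBAM:
print(cim(a, np.array([0, 0, 0, 0, 100.0])) < cim(a, np.full(5, 2.0)))  # True
```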
The scattering properties of anisotropic dielectric spheres on electromagnetic waves
International Nuclear Information System (INIS)
Chen Hui; Zhang Weiyi; Wang Zhenlin; Ming Naiben
2004-01-01
The scattering coefficients of spheres with dielectric anisotropy are calculated analytically in this paper using the perturbation method. It is found that the different modes of vector spherical harmonics and polarizations are coupled together in the scattering coefficients (c-matrix), in contrast to the isotropic case, where all modes are decoupled from each other. The generalized c-matrix is then incorporated into our codes for a vector-wave multiple-scattering program; the preliminary results on the face-centred cubic structure show that dielectric anisotropy reduces the symmetry of the scattering c-matrix and removes the degeneracy in photonic band structures composed of isotropic dielectric spheres
International Nuclear Information System (INIS)
Gao, Zhiwen; Zhou, Youhe
2015-01-01
Highlights: • We studied the fracture problem in HTS based on real fundamental solutions. • When the thickness of the HTS strip increases, the SIF decreases. • A higher applied field leads to a larger stress intensity factor. • The greater the critical current density, the smaller the SIF. - Abstract: The real fundamental solution for the fracture problem of a transversely isotropic high temperature superconductor (HTS) strip is obtained. The superconductor E–J constitutive law is characterized by the Bean model, where the critical current density is independent of the flux density. Fracture analysis is performed by the method of singular integral equations, which are solved numerically by the Gauss–Lobatto–Chebyshev collocation method. To guarantee a satisfactory accuracy, the convergence behavior of the kernel function is investigated. Numerical results of the fracture parameters are obtained, and the effects of the geometric characteristics, applied magnetic field and critical current density on the stress intensity factors (SIF) are discussed
Interbasis expansions for isotropic harmonic oscillator
Energy Technology Data Exchange (ETDEWEB)
Dong, Shi-Hai, E-mail: dongsh2@yahoo.com [Departamento de Física, Escuela Superior de Física y Matemáticas, Instituto Politécnico Nacional, Edificio 9, Unidad Profesional Adolfo López Mateos, Mexico D.F. 07738 (Mexico)
2012-03-12
The exact solutions of the isotropic harmonic oscillator are reviewed in Cartesian, cylindrical polar and spherical coordinates. The problem of interbasis expansions of the eigenfunctions is solved completely. The explicit expansion coefficients of the basis for given coordinates in terms of the other two coordinates are presented for the lower excited states. Such a property occurs only for the degenerate states with a given principal quantum number n. -- Highlights: ► Exact solutions of the harmonic oscillator are reviewed in three coordinate systems. ► Interbasis expansions of the eigenfunctions are solved completely. ► This occurs only for the degenerate states with a given quantum number n.
Isotropic Broadband E-Field Probe
Directory of Open Access Journals (Sweden)
Béla Szentpáli
2008-01-01
An E-field probe has been developed for EMC immunity tests performed in closed space. The leads are flexible resistive transmission lines; their influence on the field distribution is negligible. The probe has an isotropic reception from 100 MHz to 18 GHz; the sensitivity is in the 3 V/m–10 V/m range. The device is an accessory of the EMC test chamber. The readout of the field magnitude is carried out by a personal computer, which also performs the required corrections of the raw data.
Gravitational instability in isotropic MHD plasma waves
Cherkos, Alemayehu Mengesha
2018-04-01
The effect of compressive viscosity, thermal conductivity and radiative heat-loss functions on the gravitational instability of an infinitely extended homogeneous MHD plasma has been investigated. Taking these parameters into account, we develop a sixth-order dispersion relation for magnetohydrodynamic (MHD) waves propagating in a homogeneous and isotropic plasma. The general dispersion relation is derived from a set of linearized basic equations and solved analytically to analyse the conditions of stability and instability of a self-gravitating plasma embedded in a constant magnetic field. Our results show that the presence of viscosity and thermal conductivity in a strong magnetic field substantially modifies the fundamental Jeans criterion of gravitational instability.
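The full sixth-order dispersion relation is not reproduced here, but the underlying Jeans criterion that it modifies can be sketched in the ideal, non-magnetized limit; the density and sound speed below are illustrative values, not the paper's:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def jeans_wavenumber(rho, c_s):
    """Jeans wavenumber k_J: perturbations with k < k_J are unstable."""
    return np.sqrt(4.0 * np.pi * G * rho) / c_s

def omega_squared(k, rho, c_s):
    """Ideal (non-magnetized, non-dissipative) dispersion relation
    omega^2 = c_s^2 k^2 - 4 pi G rho; omega^2 < 0 signals instability."""
    return c_s**2 * k**2 - 4.0 * np.pi * G * rho

rho, c_s = 1e-17, 200.0     # illustrative cloud density (kg/m^3), sound speed (m/s)
kJ = jeans_wavenumber(rho, c_s)
print(omega_squared(0.5 * kJ, rho, c_s) < 0)   # True: long wavelengths collapse
print(omega_squared(2.0 * kJ, rho, c_s) > 0)   # True: short wavelengths oscillate
```

The paper's contribution is precisely how viscosity, thermal conduction and radiative losses shift this baseline criterion.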
Active isotropic slabs: conditions for amplified reflection
Perez, Liliana I.; Matteo, Claudia L.; Etcheverry, Javier; Duplaá, María Celeste
2012-12-01
We analyse in detail the necessary conditions to obtain amplified reflection (AR) in isotropic interfaces when a plane wave propagates from a transparent medium towards an active one. First, we demonstrate analytically that AR is not possible if a single interface is involved. Then, we study the conditions for AR in a very simple configuration: normal incidence on an active slab immersed in transparent media. Finally, we develop an analysis in the complex plane in order to establish a geometrical method that not only describes the behaviour of active slabs but also helps to simplify the calculus.
Active isotropic slabs: conditions for amplified reflection
International Nuclear Information System (INIS)
Perez, Liliana I; Duplaá, María Celeste; Matteo, Claudia L; Etcheverry, Javier
2012-01-01
We analyse in detail the necessary conditions to obtain amplified reflection (AR) in isotropic interfaces when a plane wave propagates from a transparent medium towards an active one. First, we demonstrate analytically that AR is not possible if a single interface is involved. Then, we study the conditions for AR in a very simple configuration: normal incidence on an active slab immersed in transparent media. Finally, we develop an analysis in the complex plane in order to establish a geometrical method that not only describes the behaviour of active slabs but also helps to simplify the calculus. (paper)
Selection and properties of alternative forming fluids for TRISO fuel kernel production
Energy Technology Data Exchange (ETDEWEB)
Baker, M.P. [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); King, J.C., E-mail: kingjc@mines.edu [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Gorman, B.P. [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Marshall, D.W. [Idaho National Laboratory, 2525 N. Fremont Avenue, P.O. Box 1625, Idaho Falls, ID 83415 (United States)
2013-01-15
Highlights: • Forming fluid selection criteria developed for TRISO kernel production. • Ten candidates selected for further study. • Density, viscosity, and surface tension measured for the first time. • Settling velocity and heat transfer rates calculated. • Three fluids recommended for kernel production testing. - Abstract: Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension of each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-Bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory kernels
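Settling-velocity estimates of the kind mentioned above are commonly based on Stokes' law for a small sphere; a sketch under that assumption (the droplet and fluid properties below are illustrative, not the paper's measured data):

```python
def stokes_settling_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal settling velocity of a small sphere in creeping (Stokes) flow:
    v = g d^2 (rho_p - rho_f) / (18 mu), valid for Reynolds number << 1."""
    return g * d**2 * (rho_p - rho_f) / (18.0 * mu)

# Illustrative values only: a 1 mm gel droplet (~1400 kg/m^3) settling in a
# halocarbon-like forming fluid (~1100 kg/m^3, viscosity 3 mPa s).
v = stokes_settling_velocity(d=1e-3, rho_p=1400.0, rho_f=1100.0, mu=3e-3)
print(round(v, 4))  # ~0.05 m/s; column height ~ v * required gelation time
```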
Theory of reproducing kernels and applications
Saitoh, Saburou
2016-01-01
This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...
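A minimal worked instance of Tikhonov regularization with reproducing kernels is kernel ridge regression; the sketch below assumes a Gaussian kernel and illustrative parameter values, and is not taken from the book:

```python
import numpy as np

def fit_kernel_ridge(X, y, gamma, lam):
    """Tikhonov regularization in the RKHS of a Gaussian kernel: minimize
    sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2. By the representer theorem the
    solution is f(x) = sum_i a_i k(x_i, x) with a = (K + lam I)^{-1} y."""
    K = np.exp(-gamma * (X[:, None] - X[None, :])**2)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(Xtrain, alpha, gamma, Xnew):
    K = np.exp(-gamma * (Xnew[:, None] - Xtrain[None, :])**2)
    return K @ alpha

X = np.linspace(0.0, 1.0, 30)
y = np.sin(2.0 * np.pi * X)
alpha = fit_kernel_ridge(X, y, gamma=50.0, lam=1e-4)
yhat = krr_predict(X, alpha, 50.0, X)
print(np.max(np.abs(yhat - y)) < 0.05)  # regularized fit stays close to the data
```

The regularization parameter `lam` trades data fit against the RKHS norm penalty, which is the mechanism Chapter 3 develops for bounded linear operator equations.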
Convergence of barycentric coordinates to barycentric kernels
Kosinka, Jiří
2016-02-12
We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.
Convergence of barycentric coordinates to barycentric kernels
Kosinka, Jiří
2016-01-01
We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.
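As a concrete instance of barycentric coordinates on a polygon, mean value coordinates (Floater's construction) can be evaluated in a few lines; this generic sketch is not the paper's code, and the polygon and query point are illustrative:

```python
import numpy as np

def mean_value_coordinates(poly, x):
    """Mean value (barycentric) coordinates of a point x strictly inside the
    polygon `poly` (counter-clockwise vertices), via Floater's formula
    w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - x|, normalized to sum to 1."""
    v = np.asarray(poly, float) - np.asarray(x, float)
    r = np.linalg.norm(v, axis=1)
    n = len(v)
    a = np.empty(n)
    for i in range(n):  # a[i]: signed angle at x subtended by edge (i, i+1)
        u, w = v[i], v[(i + 1) % n]
        a[i] = np.arctan2(u[0] * w[1] - u[1] * w[0], u[0] * w[0] + u[1] * w[1])
    t = np.tan(a / 2.0)
    wts = (np.roll(t, 1) + t) / r
    return wts / wts.sum()

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
lam = mean_value_coordinates(square, (0.5, 1.2))
print(round(lam.sum(), 6))              # 1.0: partition of unity
print(lam @ np.asarray(square, float))  # ~ [0.5 1.2]: linear precision
```

Partition of unity and linear precision are exactly the properties that survive the limit process to barycentric kernels studied in the paper.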
Kernel principal component analysis for change detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Morton, J.C.
2008-01-01
region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially.
Acoustic reflection log in transversely isotropic formations
Ronquillo Jarillo, G.; Markova, I.; Markov, M.
2018-01-01
We have calculated the waveforms of sonic reflection logging for a fluid-filled borehole located in a transversely isotropic rock. Calculations have been performed for an acoustic impulse source with the characteristic frequency of tens of kilohertz that is considerably less than the frequencies of acoustic borehole imaging tools. It is assumed that the borehole axis coincides with the axis of symmetry of the transversely isotropic rock. It was shown that the reflected wave was excited most efficiently at resonant frequencies. These frequencies are close to the frequencies of oscillations of a fluid column located in an absolutely rigid hollow cylinder. We have shown that the acoustic reverberation is controlled by the acoustic impedance of the rock Z = Vphρs for fixed parameters of the borehole fluid, where Vph is the velocity of horizontally propagating P-wave; ρs is the rock density. The methods of waveform processing to determine the parameters characterizing the reflected wave have been discussed.
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
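A simplified, Wiener-style stand-in for the partial idea — apply a regularized inverse filter only at Fourier entries marked reliable, and keep the blurred spectrum elsewhere — can be sketched as follows; the PSF, mask, and regularization weight are illustrative assumptions, not the paper's model:

```python
import numpy as np

def partial_wiener_deconvolve(blurred, psf, mask, eps=1e-6):
    """Fourier-domain non-blind deconvolution restricted to trusted entries.

    `psf` is an image-sized point-spread function with its origin at [0, 0];
    `mask` marks the Fourier entries where the kernel estimate is considered
    reliable, and the blurred spectrum is kept unchanged elsewhere.
    """
    B = np.fft.fft2(blurred)
    H = np.fft.fft2(psf)
    Xhat = np.conj(H) * B / (np.abs(H)**2 + eps)  # regularized inverse filter
    X = np.where(mask, Xhat, B)                   # partial: deconvolve only `mask`
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(1)
img = rng.random((32, 32))
psf = np.zeros((32, 32))
psf[0, 0], psf[0, 1], psf[1, 0] = 0.6, 0.2, 0.2   # mild blur about the origin
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
rec = partial_wiener_deconvolve(blurred, psf, np.ones((32, 32), bool))
print(np.max(np.abs(rec - img)) < 1e-2)           # near-exact with the true kernel
```

With an inaccurate kernel, zeroing the mask at unreliable frequencies avoids amplifying the kernel error there, which is the intuition behind the paper's partial map.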
Process for producing metal oxide kernels and kernels so obtained
International Nuclear Information System (INIS)
Lelievre, Bernard; Feugier, Andre.
1974-01-01
The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high temperature nuclear reactors. This process consists in adding to an aqueous solution of at least one metallic salt, particularly actinide nitrates, at least one chemical compound capable of releasing ammonia, then dispersing drop by drop the solution thus obtained into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gel reaction is formed of a mixture of two organic liquids, one acting as a solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably an amine is used as the product capable of extracting the anions. Additionally, an alcohol that causes a partial dehydration of the drops can be employed as the solvent, thus helping to increase the resistance of the particles [fr
Zito, Gianluigi; Rusciano, Giulia; Pesce, Giuseppe; Dochshanov, Alden; Sasso, Antonio
2015-04-01
Label-free chemical imaging of live cell membranes can shed light on the molecular basis of cell membrane functionalities and their alterations under membrane-related diseases. In principle, this can be done by surface-enhanced Raman scattering (SERS) in confocal microscopy, but requires engineering plasmonic architectures with a spatially invariant SERS enhancement factor G(x, y) = G. To this end, we exploit a self-assembled isotropic nanostructure with characteristics of homogeneity typical of the so-called near-hyperuniform disorder. The resulting highly dense, homogeneous and isotropic random pattern consists of clusters of silver nanoparticles with limited size dispersion. This nanostructure brings together several advantages: very large hot spot density (~10 4 μm -2 ), superior spatial reproducibility (SD nanotoxicity issues. See DOI: 10.1039/c5nr01341k
Hilbertian kernels and spline functions
Atteia, M
1992-01-01
In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline functions theory. The origin of the book was an effort to show that spline theory parallels Hilbertian Kernel theory, not only for splines derived from minimization of a quadratic functional but more generally for splines considered as piecewise functions type. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.
Lu, San; Artemyev, A. V.; Angelopoulos, V.
2017-11-01
Magnetotail current sheet thinning is a distinctive feature of substorm growth phase, during which magnetic energy is stored in the magnetospheric lobes. Investigation of charged particle dynamics in such thinning current sheets is believed to be important for understanding the substorm energy storage and the current sheet destabilization responsible for substorm expansion phase onset. We use Time History of Events and Macroscale Interactions during Substorms (THEMIS) B and C observations in 2008 and 2009 at 18 - 25 RE to show that during magnetotail current sheet thinning, the electron temperature decreases (cooling), and the parallel temperature decreases faster than the perpendicular temperature, leading to a decrease of the initially strong electron temperature anisotropy (isotropization). This isotropization cannot be explained by pure adiabatic cooling or by pitch angle scattering. We use test particle simulations to explore the mechanism responsible for the cooling and isotropization. We find that during the thinning, a fast decrease of a parallel electric field (directed toward the Earth) can speed up the electron parallel cooling, causing it to exceed the rate of perpendicular cooling, and thus lead to isotropization, consistent with observation. If the parallel electric field is too small or does not change fast enough, the electron parallel cooling is slower than the perpendicular cooling, so the parallel electron anisotropy grows, contrary to observation. The same isotropization can also be accomplished by an increasing parallel electric field directed toward the equatorial plane. Our study reveals the existence of a large-scale parallel electric field, which plays an important role in magnetotail particle dynamics during the current sheet thinning process.
Numerical study of the thermal degradation of isotropic and anisotropic polymeric materials
Energy Technology Data Exchange (ETDEWEB)
Soler, E. [Departamento de Lenguajes y Ciencias de la Computacion, ETSI Informatica, Universidad de Malaga, 29071 Malaga (Spain); Ramos, J.I. [Room I-320-D, ETS Ingenieros Industriales, Universidad de Malaga, Plaza El Ejido, s/n, 29013 Malaga (Spain)
2005-08-01
The thermal degradation of two-dimensional isotropic, orthotropic and anisotropic polymeric materials is studied numerically by means of a second-order accurate (in both space and time) linearly implicit finite difference formulation which results in linear algebraic equations at each time step. It is shown that, for both isotropic and orthotropic composites, the monomer mass diffusion tensor plays a role in initiating the polymerization kinetics, the formation of a polymerization kernel and the initial front propagation, whereas the later stages of the polymerization are nearly independent of the monomer mass diffusion tensor. In anisotropic polymeric composites, it has been found that the monomer mass diffusion tensor plays a paramount role in determining the initial stages of the polymerization and the subsequent propagation of the polymerization front, the direction and speed of propagation of which are found to be related to the principal directions of both the monomer mass and the heat diffusion tensors. It is also shown that the polymerization time and temperatures depend strongly on the anisotropy of the mass and heat diffusion tensors. (authors)
Analysis of elastic moderation of fast neutrons through exact solutions involving synthetic kernels
International Nuclear Information System (INIS)
Moura Neto, C.; Chung, F.L.; Amorim, E.S.
1979-07-01
The computational difficulties in solving the transport equation for fast reactors can be reduced by developing approximate models, assuming that continuous moderation holds. Two approximations were studied: the first based on an expansion in Taylor series (the Fermi, Wigner, Greuling and Goertzel models), and the second involving the use of synthetic kernels (the Walti, Turinsky, Becker and Malaviya models). The flux obtained by the exact method is compared with the fluxes from the different models based on synthetic kernels. It is verified that the present study is realistic for energies below the threshold for inelastic scattering, as well as in the resonance region. (Author) [pt
Dense Medium Machine Processing Method for Palm Kernel/ Shell ...
African Journals Online (AJOL)
ADOWIE PERE
Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.
A tilted transversely isotropic slowness surface approximation
Stovas, A.
2012-05-09
The relation between vertical and horizontal slownesses, better known as the dispersion relation, for transversely isotropic media with a tilted symmetry axis (TTI) requires solving a quartic polynomial equation, which does not admit a practical explicit solution to be used, for example, in downward continuation. Using a combination of the perturbation theory with respect to the anelliptic parameter and Shanks transform to improve the accuracy of the expansion, we develop an explicit formula for the vertical slowness that is highly accurate for all practical purposes. It also reveals some insights into the anisotropy parameter dependency of the dispersion relation including the low impact that the anelliptic parameter has on the vertical placement of reflectors for a small tilt in the symmetry angle. © 2012 European Association of Geoscientists & Engineers.
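The Shanks transform used to sharpen the perturbation expansion is a generic sequence-acceleration device; a sketch on a textbook example (the alternating series for ln 2, not the slowness expansion itself):

```python
import math

def shanks(a):
    """One pass of the Shanks transformation over a sequence of partial sums:
    S_n = (a_{n+1} a_{n-1} - a_n^2) / (a_{n+1} + a_{n-1} - 2 a_n)."""
    return [(a[n + 1] * a[n - 1] - a[n] ** 2)
            / (a[n + 1] + a[n - 1] - 2 * a[n])
            for n in range(1, len(a) - 1)]

# Partial sums of the slowly converging series ln 2 = 1 - 1/2 + 1/3 - ...
partial, s = [], 0.0
for k in range(1, 12):
    s += (-1) ** (k + 1) / k
    partial.append(s)

accelerated = shanks(shanks(partial))
err_raw = abs(partial[-1] - math.log(2.0))
err_acc = abs(accelerated[-1] - math.log(2.0))
print(err_acc < err_raw)  # True: two Shanks passes sharpen the estimate
```

The same idea, applied to the perturbation series in the anelliptic parameter, is what makes the explicit vertical-slowness formula accurate enough for practical use.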
Linearized holographic isotropization at finite coupling
Energy Technology Data Exchange (ETDEWEB)
Atashi, Mahdi; Fadafan, Kazem Bitaghsir [Shahrood University of Technology, Physics Department (Iran, Islamic Republic of); Jafari, Ghadir [Institute for Research in Fundamental Sciences (IPM), School of Physics, Tehran (Iran, Islamic Republic of)
2017-06-15
We study holographic isotropization of an anisotropic homogeneous non-Abelian strongly coupled plasma in the presence of Gauss-Bonnet corrections. It was verified before that one can linearize Einstein's equations around the final black hole background and simplify the complicated setup. Using this approach, we study the expectation value of the boundary stress tensor. Although we consider small values of the Gauss-Bonnet coupling constant, it is found that finite coupling leads to significant increasing of the thermalization time. By including higher order corrections in linearization, we extend the results to study the effect of the Gauss-Bonnet coupling on the entropy production on the event horizon. (orig.)
Effective elastic properties of damaged isotropic solids
International Nuclear Information System (INIS)
Lee, U Sik
1998-01-01
In continuum damage mechanics, damaged solids have been represented by an effective elastic stiffness into which local damage is smoothly smeared. Similarly, damaged solids may be represented in terms of effective elastic compliances. By virtue of the effective elastic compliance representation, it becomes easier to derive the effective engineering constants of damaged solids from the effective elastic compliances, all in closed form. Thus, in this paper, by using a continuum modeling approach based on both the principle of strain energy equivalence and the equivalent elliptical micro-crack representation of local damage, the effective elastic compliances and effective engineering constants are derived in terms of the undamaged (virgin) elastic properties and a scalar damage variable for both damaged two- and three-dimensional isotropic solids
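Under the principle of strain energy equivalence with a scalar damage variable, the effective Young's modulus takes a well-known closed form; a sketch (the numeric values are illustrative, not the paper's):

```python
def effective_youngs_modulus(E, D):
    """Effective Young's modulus of an isotropically damaged solid under the
    principle of strain energy equivalence: E_eff = (1 - D)^2 * E.
    (The alternative strain-equivalence hypothesis gives E_eff = (1 - D) * E.)"""
    if not 0.0 <= D < 1.0:
        raise ValueError("damage variable D must lie in [0, 1)")
    return (1.0 - D) ** 2 * E

E = 200e9  # steel-like virgin modulus in Pa, illustrative only
print(effective_youngs_modulus(E, 0.0) == E)          # True: undamaged limit
print(round(effective_youngs_modulus(E, 0.2) / 1e9))  # 128 GPa at D = 0.2
```

The paper's closed-form engineering constants generalize this scalar relation to the full compliance tensor of two- and three-dimensional solids.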
New bounds on isotropic Lorentz violation
International Nuclear Information System (INIS)
Carone, Christopher D.; Sher, Marc; Vanderhaeghen, Marc
2006-01-01
Violations of Lorentz invariance that appear via operators of dimension four or less are completely parametrized in the Standard Model Extension (SME). In the pure photonic sector of the SME, there are 19 dimensionless, Lorentz-violating parameters. Eighteen of these have experimental upper bounds ranging between 10 -11 and 10 -32 ; the remaining parameter, k̃ tr , is isotropic and has a much weaker bound of order 10 -4 . In this Brief Report, we point out that k̃ tr gives a significant contribution to the anomalous magnetic moment of the electron and find a new upper bound of order 10 -8 . With reasonable assumptions, we further show that this bound may be improved to 10 -14 by considering the renormalization of other Lorentz-violating parameters that are more tightly constrained. Using similar renormalization arguments, we also estimate bounds on Lorentz-violating parameters in the pure gluonic sector of QCD
Ranking Support Vector Machine with Kernel Approximation
Directory of Open Access Journals (Sweden)
Kai Chen
2017-01-01
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance over state-of-the-art ranking algorithms.
Ranking Support Vector Machine with Kernel Approximation.
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, learning methods for nonlinear RankSVM remain time-consuming because of the cost of computing the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
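One of the two approximations the abstract names, random Fourier features, can be sketched in a few lines: replace the kernel matrix by explicit features whose inner products approximate an RBF kernel, so a linear solver can be used. The kernel choice, feature count, and data below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def rff_map(X, n_features=2000, gamma=0.5, seed=0):
    """Random Fourier feature map z(x) with z(x) @ z(y) ~= exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Spectral sampling for the RBF kernel: frequencies W ~ N(0, 2*gamma*I)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
Z = rff_map(X)                      # a linear RankSVM can now be trained on Z directly
K_approx = Z @ Z.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq)         # exact RBF kernel with gamma = 0.5
max_err = np.abs(K_approx - K_exact).max()
```

Because the feature map is explicit, training cost scales with the number of features rather than with the squared number of training pairs, which is the speedup the paper exploits.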
Sentiment classification with interpolated information diffusion kernels
Raaijmakers, S.
2007-01-01
Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of
Evolution kernel for the Dirac field
International Nuclear Information System (INIS)
Baaquie, B.E.
1982-06-01
The evolution kernel for the free Dirac field is calculated using Wilson lattice fermions. We discuss the difficulties that have prevented this calculation from being performed previously in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)
Panel data specifications in nonparametric kernel regression
DEFF Research Database (Denmark)
Czekaj, Tomasz Gerard; Henningsen, Arne
parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...
Improving the Bandwidth Selection in Kernel Equating
Andersson, Björn; von Davier, Alina A.
2014-01-01
We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
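Silverman's rule of thumb, which the abstract proposes for selecting the bandwidth parameters, has a standard closed form. The sketch below shows the univariate version; the variable names and synthetic score data are assumptions for illustration:

```python
import numpy as np

def silverman_bandwidth(x):
    """Rule-of-thumb bandwidth h = 0.9 * min(s, IQR/1.34) * n^(-1/5) (Silverman, 1986)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    q75, q25 = np.percentile(x, [75, 25])
    spread = min(x.std(ddof=1), (q75 - q25) / 1.34)  # robust scale estimate
    return 0.9 * spread * n ** (-0.2)

# Synthetic test scores, roughly normal around 50
scores = np.random.default_rng(0).normal(loc=50.0, scale=10.0, size=1000)
h = silverman_bandwidth(scores)
```

Unlike penalty-function minimization, this rule is a direct plug-in formula, which is what makes it attractive as a fast default in equating.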
Kernel Korner : The Linux keyboard driver
Brouwer, A.E.
1995-01-01
Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the
Metabolic network prediction through pairwise rational kernels.
Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian
2014-09-26
Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted by incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amounts of sequence information such as protein essentiality, natural language processing and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels (PRKs)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then performed several experiments with PRKs and Pairwise SVM to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy
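A pairwise kernel built from a base sequence kernel can be sketched as follows. The 2-mer spectrum kernel here is a toy stand-in for the transducer-based rational kernels of the paper, and all names are illustrative; the symmetrized tensor construction K((a,b),(c,d)) = k(a,c)k(b,d) + k(a,d)k(b,c) is a standard way to lift any base kernel to pairs:

```python
import numpy as np
from itertools import product

ALPHABET = "ACGT"
KMERS = ["".join(p) for p in product(ALPHABET, repeat=2)]

def spectrum_features(s):
    """2-mer count vector: a toy sequence kernel standing in for a rational kernel."""
    return np.array([sum(1 for i in range(len(s) - 1) if s[i:i+2] == m)
                     for m in KMERS], float)

def base_kernel(s, t):
    return float(spectrum_features(s) @ spectrum_features(t))

def pairwise_kernel(pair1, pair2):
    """Symmetrized tensor pairwise kernel K((a,b),(c,d)) = k(a,c)k(b,d) + k(a,d)k(b,c)."""
    (a, b), (c, d) = pair1, pair2
    return base_kernel(a, c) * base_kernel(b, d) + base_kernel(a, d) * base_kernel(b, c)

val = pairwise_kernel(("ACGTACGT", "TTGTTG"), ("ACGAACG", "TTGTTA"))
```

The construction is symmetric both within a pair and under exchanging the two pairs, which is what a pairwise SVM requires of its kernel.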
Formal solutions of inverse scattering problems. III
International Nuclear Information System (INIS)
Prosser, R.T.
1980-01-01
The formal solutions of certain three-dimensional inverse scattering problems presented in papers I and II of this series [J. Math. Phys. 10, 1819 (1969); 17, 1175 (1976)] are obtained here as fixed points of a certain nonlinear mapping acting on a suitable Banach space of integral kernels. When the scattering data are sufficiently restricted, this mapping is shown to be a contraction, thereby establishing the existence, uniqueness, and continuous dependence on the data of these formal solutions.
New statistical model of inelastic fast neutron scattering
International Nuclear Information System (INIS)
Stancicj, V.
1975-07-01
A new statistical model for treating fast neutron inelastic scattering is proposed, using the general expressions of the double differential cross section in the impulse approximation. The use of the Fermi-Dirac distribution of nucleons makes it possible to derive an analytical expression for the fast neutron inelastic scattering kernel, including the angular momentum coupling. The values of the inelastic fast neutron cross section calculated from the derived expression of the scattering kernel are in good agreement with experiment. A main advantage of the derived expressions is their simplicity for practical calculations.
Optimized cylindrical invisibility cloak with minimum layers of non-magnetic isotropic materials
International Nuclear Information System (INIS)
Yu Zhenzhong; Feng Yijun; Xu Xiaofei; Zhao Junming; Jiang Tian
2011-01-01
We present optimized design of cylindrical invisibility cloak with minimum layers of non-magnetic isotropic materials. Through an optimization procedure based on genetic algorithm, simpler cloak structure and more realizable material parameters can be achieved with better cloak performance than that of an ideal non-magnetic cloak with a reduced set of parameters. We demonstrate that a cloak shell with only five layers of two normal materials can result in an average 20 dB reduction in the scattering width for all directions when covering the inner conducting cylinder with the cloak. The optimized design can substantially simplify the realization of the invisibility cloak, especially in the optical range.
Bayesian Kernel Mixtures for Counts.
Canale, Antonio; Dunson, David B
2011-12-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
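The rounded-Gaussian kernel on which the abstract focuses can be sketched by integrating a latent Gaussian over count bins. The binning convention below (all mass below 1 mapped to count 0, and [j, j+1) mapped to j for j >= 1) is one common choice, assumed here for illustration:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rounded_gaussian_pmf(j, mu, sigma):
    """P(Y = j) when a latent N(mu, sigma^2) draw is rounded into count bins:
    mass below 1 maps to 0; the interval [j, j+1) maps to j for j >= 1."""
    hi = phi((j + 1 - mu) / sigma)
    lo = 0.0 if j == 0 else phi((j - mu) / sigma)
    return hi - lo

total = sum(rounded_gaussian_pmf(j, mu=3.2, sigma=1.5) for j in range(200))
narrow = rounded_gaussian_pmf(5, mu=5.5, sigma=0.1)   # nearly all mass on one count
```

Note that a small sigma concentrates the count distribution below Poisson variance, which is exactly the underdispersion a Poisson mixture cannot represent.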
Anisotropy function for pion-proton elastic scattering
Energy Technology Data Exchange (ETDEWEB)
Saleem, Mohammad; Fazal-e-Aleem; Rashid, Haris
1988-09-01
By using the generalised Chou-Yang model and the experimental data on π⁻p elastic scattering at 200 GeV/c, the anisotropy function which reflects the non-isotropic nature of elastic scattering is computed for the reaction π⁻p → π⁻p.
Anisotropy function for proton-proton elastic scattering
Energy Technology Data Exchange (ETDEWEB)
Saleem, Mohammad; Fazal-e-Aleem; Azhar, I.A. (Punjab Univ., Lahore (Pakistan). Centre for High Energy Physics)
1990-07-01
By using the generalized Chou-Yang model and the experimental data on pp elastic scattering at 53 GeV, the anisotropy function which reflects the non-isotropic nature of elastic scattering is computed for the reaction pp → pp. (author).
Anisotropy function for proton-proton elastic scattering
International Nuclear Information System (INIS)
Saleem, Mohammad; Fazal-e-Aleem; Azhar, I.A.
1990-01-01
By using the generalized Chou-Yang model and the experimental data on pp elastic scattering at 53 GeV, the anisotropy function which reflects the non-isotropic nature of elastic scattering is computed for the reaction pp→pp. (author)
Anisotropy function for pion-proton elastic scattering
International Nuclear Information System (INIS)
Saleem, Mohammad; Fazal-e-Aleem; Rashid, Haris
1988-01-01
By using the generalised Chou-Yang model and the experimental data on π⁻p elastic scattering at 200 GeV/c, the anisotropy function which reflects the non-isotropic nature of elastic scattering is computed for the reaction π⁻p → π⁻p. (author)
On the decay of homogeneous isotropic turbulence
Skrbek, L.; Stalp, Steven R.
2000-08-01
Decaying homogeneous, isotropic turbulence is investigated using a phenomenological model based on the three-dimensional turbulent energy spectra. We generalize the approach first used by Comte-Bellot and Corrsin [J. Fluid Mech. 25, 657 (1966)] and revised by Saffman [J. Fluid Mech. 27, 581 (1967); Phys. Fluids 10, 1349 (1967)]. At small wave numbers we assume the spectral energy is proportional to the wave number to an arbitrary power. The specific case of power 2, which follows from the Saffman invariant, is discussed in detail and is later shown to best describe experimental data. For the spectral energy density in the inertial range we apply both the Kolmogorov -5/3 law, E(k) = Cε^(2/3) k^(-5/3), and the refined Kolmogorov law by taking into account intermittency. We show that intermittency affects the energy decay mainly by shifting the position of the virtual origin rather than altering the power law of the energy decay. Additionally, the spectrum is naturally truncated due to the size of the wind tunnel test section, as eddies larger than the physical size of the system cannot exist. We discuss effects associated with the energy-containing length scale saturating at the size of the test section and predict a change in the power law decay of both energy and vorticity. To incorporate viscous corrections to the model, we truncate the spectrum at an effective Kolmogorov wave number k_η = γ(ε/ν^3)^(1/4), where γ is a dimensionless parameter of order unity. We show that as the turbulence decays, viscous corrections gradually become more important and a simple power law can no longer describe the decay. We discuss the final period of decay within the framework of our model, and show that care must be taken to distinguish between the final period of decay and the change of the character of decay due to the saturation of the energy containing length scale. The model is applied to a number of experiments on decaying turbulence. These include the downstream decay of turbulence in
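The role of the model spectrum can be illustrated numerically. The piecewise form below (a Saffman-type k^2 range matched continuously to a Kolmogorov inertial range, truncated at a low-wavenumber cutoff and a viscous cutoff) is a sketch of the kind of spectrum such a model integrates; the constants and cutoffs are assumed values, not the paper's:

```python
import numpy as np

def model_spectrum(k, A=1.0, C=1.5, eps=1.0):
    """Piecewise model: E(k) = A k^2 at low k (Saffman-type), matched continuously
    to the Kolmogorov inertial range E(k) = C eps^(2/3) k^(-5/3)."""
    k = np.asarray(k, float)
    # Matching wavenumber where A k^2 = C eps^(2/3) k^(-5/3)
    k_match = (C * eps ** (2.0 / 3.0) / A) ** (3.0 / 11.0)
    return np.where(k < k_match, A * k ** 2,
                    C * eps ** (2.0 / 3.0) * k ** (-5.0 / 3.0))

def total_energy(k_min, k_eta, n=20000):
    """Trapezoidal integral of E(k) between the largest permitted eddy (k_min,
    set by the test-section size) and the viscous cutoff k_eta."""
    k = np.logspace(np.log10(k_min), np.log10(k_eta), n)
    E = model_spectrum(k)
    return float(np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(k)))

E_free = total_energy(0.05, 500.0)
E_saturated = total_energy(0.5, 500.0)  # larger k_min: energy-containing scale saturated
```

Raising k_min mimics the saturation of the energy-containing scale at the test-section size and removes energy from the integral, which is the mechanism behind the predicted change in the decay power law.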
Isotropic compression of cohesive-frictional particles with rolling resistance
Luding, Stefan; Benz, Thomas; Nordal, Steinar
2010-01-01
Cohesive-frictional and rough powders are the subject of this study. The behavior under isotropic compression is examined for different material properties involving Coulomb friction, rolling-resistance and contact-adhesion. Under isotropic compression, the density continuously increases according
The revised geometric measure of entanglement for isotropic state
International Nuclear Information System (INIS)
Cao Ya
2011-01-01
Based on the revised geometric measure of entanglement (RGME), we obtain the analytical expression for the isotropic state and generalize it to the n-particle and d-dimensional mixed state case. Meanwhile, we obtain the relation Ẽ_sin^2(ρ) ≤ E_re(ρ) for the isotropic state. The results indicate that the RGME is an appropriate measure of entanglement. (authors)
Contact mechanics and friction for transversely isotropic viscoelastic materials
Mokhtari, Milad; Schipper, Dirk J.; Vleugels, N.; Noordermeer, Jacobus W.M.; Yoshimoto, S.; Hashimoto, H.
2015-01-01
Transversely isotropic materials are a unique group of materials whose properties are the same along two of the principal axes of a Cartesian coordinate system. Various natural and artificial materials behave effectively as transversely isotropic elastic solids. Several materials can be classified
Yurkin, Maxim A.; Mishchenko, Michael I.
2018-04-01
We present a general derivation of the frequency-domain volume integral equation (VIE) for the electric field inside a nonmagnetic scattering object from the differential Maxwell equations, transmission boundary conditions, radiation condition at infinity, and locally-finite-energy condition. The derivation applies to an arbitrary spatially finite group of particles made of isotropic materials and embedded in a passive host medium, including those with edges, corners, and intersecting internal interfaces. This is a substantially more general type of scatterer than in all previous derivations. We explicitly treat the strong singularity of the integral kernel, but keep the entire discussion accessible to the applied scattering community. We also consider the known results on the existence and uniqueness of VIE solution and conjecture a general sufficient condition for that. Finally, we discuss an alternative way of deriving the VIE for an arbitrary object by means of a continuous transformation of the everywhere smooth refractive-index function into a discontinuous one. Overall, the paper examines and pushes forward the state-of-the-art understanding of various analytical aspects of the VIE.
Putting Priors in Mixture Density Mercer Kernels
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
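One simple way to realize a data-adaptive Mercer kernel in the spirit of the abstract is an inner product of mixture-model posterior probabilities, which is positive semidefinite by construction. The 1D mixture parameters below are assumed for illustration rather than fitted by EM as in the paper:

```python
import numpy as np

def responsibilities(X, means, sigmas, weights):
    """Posterior P(component k | x) under an assumed 1D Gaussian mixture."""
    X = np.asarray(X, float)[:, None]
    dens = (weights * np.exp(-0.5 * ((X - means) / sigmas) ** 2)
            / (sigmas * np.sqrt(2.0 * np.pi)))
    return dens / dens.sum(axis=1, keepdims=True)

def mixture_density_kernel(X, means, sigmas, weights):
    """K(x, y) = sum_k P(k|x) P(k|y): an inner product of posteriors, hence PSD."""
    R = responsibilities(X, means, sigmas, weights)
    return R @ R.T

X = np.array([-2.0, -1.8, 0.1, 2.1, 2.2])
K = mixture_density_kernel(X,
                           means=np.array([-2.0, 0.0, 2.0]),
                           sigmas=np.array([0.5, 0.5, 0.5]),
                           weights=np.array([1.0, 1.0, 1.0]) / 3.0)
```

Points assigned to the same mixture component get kernel values near 1, points from different components near 0, so prior structure encoded in the mixture directly shapes the similarity measure.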
Geometrical effects in X-mode scattering
International Nuclear Information System (INIS)
Bretz, N.
1986-10-01
One technique to extend microwave scattering as a probe of long wavelength density fluctuations in magnetically confined plasmas is to consider the launching and scattering of extraordinary (X-mode) waves nearly perpendicular to the field. When the incident frequency is less than the electron cyclotron frequency, this mode can penetrate beyond the ordinary mode cutoff at the plasma frequency and avoid significant distortions from density gradients typical of tokamak plasmas. In the more familiar case, where the incident and scattered waves are ordinary, the scattering is isotropic perpendicular to the field. However, because the X-mode polarization depends on the frequency ratios and the ray angle to the magnetic field, the coupling between the incident and scattered waves is complicated. This geometrical form factor must be unfolded from the observed scattering in order to interpret the scattering due to density fluctuations alone. The geometrical factor is calculated here for the special case of scattering perpendicular to the magnetic field. For frequencies above the ordinary mode cutoff the scattering is relatively isotropic, while below cutoff there are minima in the forward and backward directions which go to zero at approximately half the ordinary mode cutoff density
Irreducible kernels and nonperturbative expansions in a theory with pure m -> m interaction
International Nuclear Information System (INIS)
Iagolnitzer, D.
1983-01-01
Recent results on the structure of the S matrix at the m-particle threshold (m ≥ 2) in a simplified m → m scattering theory with no subchannel interaction are extended to the Green function F on the basis of off-shell unitarity, through an adequate mathematical extension of some results of Fredholm theory: local two-sheeted or infinite-sheeted structure of F around s = (mμ)² depending on the parity of (m-1)(ν-1) (where μ > 0 is the mass and ν is the dimension of space-time), off-shell definition of the irreducible kernel U [which is the analogue of the K matrix in the two different parity cases (m-1)(ν-1) odd or even] and related local expansion of F, for (m-1)(ν-1) even, in powers of σ^β ln σ (σ = (mμ)² − s). It is shown that each term in this expansion is the dominant contribution to a Feynman-type integral in which each vertex is a kernel U. The links between the kernel U and Bethe-Salpeter type kernels G of the theory are exhibited in both parity cases, as are the links between the above expansion of F and local expansions, in the Bethe-Salpeter type framework, of F_λ in terms of Feynman-type integrals in which each vertex is a kernel G and which include both dominant and subdominant contributions. (orig.)
How isotropic can the UHECR flux be?
di Matteo, Armando; Tinyakov, Peter
2018-05-01
Modern observatories of ultra-high energy cosmic rays (UHECR) have collected over 10^4 events with energies above 10 EeV, whose arrival directions appear to be nearly isotropically distributed. On the other hand, the distribution of matter in the nearby Universe - and therefore presumably also that of UHECR sources - is not homogeneous. This is expected to leave an imprint on the angular distribution of UHECR arrival directions, though deflections by cosmic magnetic fields can confound the picture. In this work, we investigate quantitatively this apparent inconsistency. To this end we study observables sensitive to UHECR source inhomogeneities but robust to uncertainties on magnetic fields and the UHECR mass composition. We show, in a rather model-independent way, that if the source distribution tracks the overall matter distribution, the arrival directions at energies above 30 EeV should exhibit a sizeable dipole and quadrupole anisotropy, detectable by UHECR observatories in the very near future. Were it not the case, one would have to seriously reconsider the present understanding of cosmic magnetic fields and/or the UHECR composition. Also, we show that the lack of a strong quadrupole moment above 10 EeV in the current data already disfavours a pure proton composition, and that in the very near future measurements of the dipole and quadrupole moment above 60 EeV will be able to provide evidence about the UHECR mass composition at those energies.
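The dipole observable discussed above has a simple estimator from unit arrival-direction vectors. This sketch (with synthetic directions) assumes the standard first-harmonic definition d = 3|⟨n⟩|, where ⟨n⟩ is the mean of the unit vectors:

```python
import numpy as np

def dipole_amplitude(directions):
    """First-harmonic (dipole) amplitude d = 3 * |<n>| from unit arrival vectors."""
    n = np.asarray(directions, float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)  # normalize defensively
    return 3.0 * float(np.linalg.norm(n.mean(axis=0)))

# All events from one direction: maximal dipole
beamed = dipole_amplitude([[0.0, 0.0, 1.0]] * 100)
# Perfectly balanced directions along the axes: zero dipole
balanced = dipole_amplitude([[1, 0, 0], [-1, 0, 0],
                             [0, 1, 0], [0, -1, 0],
                             [0, 0, 1], [0, 0, -1]])
```

For a finite isotropic sample the estimator fluctuates around zero at the level of roughly 3/sqrt(N) per component, which sets the detectability threshold the abstract alludes to.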
On isotropic cylindrically symmetric stellar models
International Nuclear Information System (INIS)
Nolan, Brien C; Nolan, Louise V
2004-01-01
We attempt to match the most general cylindrically symmetric vacuum spacetime with a Robertson-Walker interior. The matching conditions show that the interior must be dust filled and that the boundary must be comoving. Further, we show that the vacuum region must be polarized. Imposing the condition that there are no trapped cylinders on an initial time slice, we can apply a result of Thorne's and show that trapped cylinders never evolve. This results in a simplified line element which we prove to be incompatible with the dust interior. This result demonstrates the impossibility of the existence of an isotropic cylindrically symmetric star (or even a star which has a cylindrically symmetric portion). We investigate the problem from a different perspective by looking at the expansion scalars of invariant null geodesic congruences and, applying to the cylindrical case, the result that the product of the signs of the expansion scalars must be continuous across the boundary. The result may also be understood in relation to recent results about the impossibility of the static axially symmetric analogue of the Einstein-Straus model
Lagrangian statistics in compressible isotropic homogeneous turbulence
Yang, Yantao; Wang, Jianchun; Shi, Yipeng; Chen, Shiyi
2011-11-01
In this work we conducted a Direct Numerical Simulation (DNS) of forced compressible isotropic homogeneous turbulence and investigated the flow statistics from the Lagrangian point of view, namely, statistics computed following the passive tracer trajectories. The numerical method combined the Eulerian field solver developed by Wang et al. (2010, J. Comp. Phys., 229, 5257-5279) and a Lagrangian module for tracking the tracers and recording the data. The Lagrangian probability density functions (p.d.f.'s) have then been calculated for both kinetic and thermodynamic quantities. In order to isolate the shearing part from the compressing part of the flow, we employed the Helmholtz decomposition to decompose the flow field (mainly the velocity field) into solenoidal and compressive parts. The solenoidal part was compared with the incompressible case, while the compressibility effect showed up in the compressive part. The Lagrangian structure functions and cross-correlation between various quantities will also be discussed. This work was supported in part by the China's Turbulence Program under Grant No. 2009CB724101.
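The Helmholtz decomposition used above can be sketched spectrally for a periodic field: project each Fourier mode onto its wavevector to obtain the compressive (curl-free) part, and subtract to obtain the solenoidal (divergence-free) part. The grid size and test field below are assumptions for illustration:

```python
import numpy as np

def helmholtz_split(u):
    """Split a periodic velocity field u[3, N, N, N] into solenoidal and
    compressive parts via the longitudinal projector (k k^T / |k|^2) in Fourier space."""
    N = u.shape[1]
    k = np.fft.fftfreq(N) * N
    K = np.stack(np.meshgrid(k, k, k, indexing="ij"))
    K2 = (K ** 2).sum(axis=0)
    K2[0, 0, 0] = 1.0                       # mean mode has no compressive component
    uh = np.fft.fftn(u, axes=(1, 2, 3))
    comp_h = K * (K * uh).sum(axis=0) / K2  # projection of u-hat onto k
    comp = np.real(np.fft.ifftn(comp_h, axes=(1, 2, 3)))
    return u - comp, comp

# Test field: a pure gradient (curl-free) wave, so the solenoidal part must vanish
N = 16
x = np.arange(N, dtype=float)
X = np.broadcast_to(x[:, None, None], (N, N, N))
u = np.zeros((3, N, N, N))
u[0] = np.cos(2.0 * np.pi * X / N)
sol, comp = helmholtz_split(u)
```

The same projection applied mode by mode is how DNS post-processing typically isolates the shearing (solenoidal) and compressing contributions before computing separate statistics.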
Nonlinear elastic inclusions in isotropic solids
Yavari, A.
2013-10-16
We introduce a geometric framework to calculate the residual stress fields and deformations of nonlinear solids with inclusions and eigenstrains. Inclusions are regions in a body with different reference configurations from the body itself and can be described by distributed eigenstrains. Geometrically, the eigenstrains define a Riemannian 3-manifold in which the body is stress-free by construction. The problem of residual stress calculation is then reduced to finding a mapping from the Riemannian material manifold to the ambient Euclidean space. Using this construction, we find the residual stress fields of three model systems with spherical and cylindrical symmetries in both incompressible and compressible isotropic elastic solids. In particular, we consider a finite spherical ball with a spherical inclusion with uniform pure dilatational eigenstrain and we show that the stress in the inclusion is uniform and hydrostatic. We also show how singularities in the stress distribution emerge as a consequence of a mismatch between radial and circumferential eigenstrains at the centre of a sphere or the axis of a cylinder.
Radiation statistics in homogeneous isotropic turbulence
International Nuclear Information System (INIS)
Da Silva, C B; Coelho, P J; Malico, I
2009-01-01
An analysis of turbulence-radiation interaction (TRI) in statistically stationary (forced) homogeneous and isotropic turbulence is presented. A direct numerical simulation code was used to generate instantaneous turbulent scalar fields, and the radiative transfer equation (RTE) was solved to provide statistical data relevant in TRI. The radiation intensity is non-Gaussian and is not spatially correlated with any of the other turbulence or radiation quantities. Its power spectrum exhibits a power-law region with a slope steeper than the classical -5/3 law. The moments of the radiation intensity, Planck-mean and incident-mean absorption coefficients, and emission and absorption TRI correlations are calculated. The influence of the optical thickness of the medium, mean and variance of the temperature and variance of the molar fraction of the absorbing species is studied. Predictions obtained from the time-averaged RTE are also included. It was found that while turbulence yields an increase of the mean blackbody radiation intensity, it causes a decrease of the time-averaged Planck-mean absorption coefficient. The absorption coefficient self-correlation is small in comparison with the temperature self-correlation, and the role of TRI in radiative emission is more important than in radiative absorption. The absorption coefficient-radiation intensity correlation is small, which supports the optically thin fluctuation approximation, and justifies the good predictions often achieved using the time-averaged RTE.
Radiation statistics in homogeneous isotropic turbulence
Energy Technology Data Exchange (ETDEWEB)
Da Silva, C B; Coelho, P J [Mechanical Engineering Department, IDMEC/LAETA, Instituto Superior Tecnico, Technical University of Lisbon, Av. Rovisco Pais, 1049-001 Lisboa (Portugal); Malico, I [Physics Department, University of Evora, Rua Romao Ramalho, 59, 7000-671 Evora (Portugal)], E-mail: carlos.silva@ist.utl.pt, E-mail: imbm@uevora.pt, E-mail: pedro.coelho@ist.utl.pt
2009-09-15
An analysis of turbulence-radiation interaction (TRI) in statistically stationary (forced) homogeneous and isotropic turbulence is presented. A direct numerical simulation code was used to generate instantaneous turbulent scalar fields, and the radiative transfer equation (RTE) was solved to provide statistical data relevant in TRI. The radiation intensity is non-Gaussian and is not spatially correlated with any of the other turbulence or radiation quantities. Its power spectrum exhibits a power-law region with a slope steeper than the classical -5/3 law. The moments of the radiation intensity, Planck-mean and incident-mean absorption coefficients, and emission and absorption TRI correlations are calculated. The influence of the optical thickness of the medium, mean and variance of the temperature and variance of the molar fraction of the absorbing species is studied. Predictions obtained from the time-averaged RTE are also included. It was found that while turbulence yields an increase of the mean blackbody radiation intensity, it causes a decrease of the time-averaged Planck-mean absorption coefficient. The absorption coefficient self-correlation is small in comparison with the temperature self-correlation, and the role of TRI in radiative emission is more important than in radiative absorption. The absorption coefficient-radiation intensity correlation is small, which supports the optically thin fluctuation approximation, and justifies the good predictions often achieved using the time-averaged RTE.
Formalism for neutron cross section covariances in the resonance region using kernel approximation
Energy Technology Data Exchange (ETDEWEB)
Oblozinsky, P.; Cho,Y-S.; Matoon,C.M.; Mughabghab,S.F.
2010-04-09
We describe an analytical formalism for estimating neutron radiative capture and elastic scattering cross section covariances in the resolved resonance region. We use capture and scattering kernels as the starting point and show how to get average cross sections in broader energy bins, derive analytical expressions for cross section sensitivities, and deduce cross section covariances from the resonance parameter uncertainties in the recently published Atlas of Neutron Resonances. The formalism elucidates the role of resonance parameter correlations, which become important if several strong resonances are located in one energy group. The importance of potential scattering uncertainty as well as the correlation between potential scattering and resonance scattering is also examined. Practical application of the formalism is illustrated on ⁵⁵Mn(n,γ) and ⁵⁵Mn(n,el).
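The sensitivity-based propagation the abstract describes follows the sandwich rule C = S V S^T, with S the sensitivities of the kernel to the resonance parameters and V their covariance. The single-resonance capture kernel shape and uncertainty values below are toy assumptions, not values from the Atlas:

```python
import numpy as np

def capture_kernel(Gn, Gg, g=1.0):
    """Toy capture kernel g * Gn * Gg / (Gn + Gg) for one resonance (assumed form)."""
    return g * Gn * Gg / (Gn + Gg)

def sensitivity_row(Gn, Gg, h=1e-6):
    """Finite-difference sensitivities of the kernel to (Gn, Gg)."""
    dGn = (capture_kernel(Gn + h, Gg) - capture_kernel(Gn - h, Gg)) / (2 * h)
    dGg = (capture_kernel(Gn, Gg + h) - capture_kernel(Gn, Gg - h)) / (2 * h)
    return np.array([[dGn, dGg]])

Gn, Gg = 2.0, 0.5
S = sensitivity_row(Gn, Gg)
V = np.diag([(0.10 * Gn) ** 2, (0.05 * Gg) ** 2])  # 10% / 5% uncorrelated uncertainties
C = S @ V @ S.T                                     # sandwich rule: C = S V S^T
rel_unc = float(np.sqrt(C[0, 0])) / capture_kernel(Gn, Gg)
```

With analytical sensitivities instead of finite differences, and off-diagonal terms in V for correlated resonance parameters, the same matrix product yields the full binned covariance the formalism constructs.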
International Nuclear Information System (INIS)
Henderson, D.L.
1987-01-01
Time-dependent integral transport equation flux and current kernels for plane and spherical geometry are derived for homogeneous media. Using the multiple collision formalism, isotropic sources that are delta distributions in time are considered for four different problems. The plane geometry flux kernel is applied to a uniformly distributed source within an infinite medium and to a surface source in a semi-infinite medium. The spherical flux kernel is applied to a point source in an infinite medium and to a point source at the origin of a finite sphere. The time-dependent first-flight leakage rates corresponding to the existing steady state first-flight escape probabilities are computed by the Laplace transform technique assuming a delta distribution source in time. The case of a constant source emitting neutrons over a time interval, Δt, for a spatially uniform source is obtained for a slab and a sphere. Time-dependent first-flight leakage rates are also determined for the general two region spherical medium problem for isotropic sources with a delta distribution in time uniformly distributed throughout both the inner and outer regions. The time-dependent collision rates due to the uncollided neutrons are computed for a slab and a sphere using the time-dependent first-flight leakage rates and the time-dependent continuity equation. The case of a constant source emitting neutrons over a time interval, Δt, is also considered
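For the point source at the origin of a finite sphere, the uncollided (first-flight) leakage described above has a closed form that can be sketched directly: with a delta-in-time burst, every surviving neutron exits at t = R/v with probability exp(-Σ_t R). The cross section, radius, and speed below are illustrative assumptions:

```python
import math

def first_flight_leakage(Sigma_t, R, v):
    """Uncollided leakage from a centred, delta-in-time isotropic point burst in a
    sphere of radius R: all survivors exit at t = R/v, each having survived the
    flight with probability exp(-Sigma_t * R)."""
    return R / v, math.exp(-Sigma_t * R)

# Assumed values: Sigma_t in cm^-1, R in cm, v in cm/s
t_exit, leak_prob = first_flight_leakage(Sigma_t=0.2, R=5.0, v=2.2e5)
```

Smearing the source over an interval Δt simply convolves this delta-in-time response with the source history, which is how the constant-source results in the abstract follow from the kernel.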
Application of Van Hove theory to fast neutron inelastic scattering
International Nuclear Information System (INIS)
Stanicicj, V.
1974-11-01
The Van Hove general theory of the double differential scattering cross section has been used to derive particular expressions for the inelastic fast neutron scattering kernel and scattering cross section. Since the energies of the incoming neutrons considered are less than 10 MeV, the Fermi gas model of nucleons can be used. In this case it was easy to derive an analytical expression for the time-dependent correlation function of the nucleus. Further, by using an impulse approximation and a short-collision-time approach, it was possible to derive analytical expressions for the scattering kernel and scattering cross section for fast neutron inelastic scattering. The obtained expressions have been applied to the Fe nucleus and show surprisingly good agreement with experiment. The main advantage of this theory is its simplicity for practical calculations and for theoretical investigations of nuclear processes.
Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm
African Journals Online (AJOL)
In this paper, we shall use higher-order hybrid Gaussian kernel in a meshsize boosting algorithm in kernel density estimation. Bias reduction is guaranteed in this scheme like other existing schemes but uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...
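The bias-reduction mechanism of a higher-order kernel is easy to verify numerically. The sketch below uses a standard fourth-order Gaussian kernel, K4(u) = (3 - u^2)φ(u)/2, as a stand-in for the hybrid kernel of the abstract (the hybrid kernel itself is not specified here): because K4 integrates to one but has zero second moment, the leading O(h^2) bias term of the kernel density estimate vanishes.

```python
import math
import random

def phi(u):
    """Standard normal density."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def k4(u):
    """Fourth-order Gaussian kernel: integrates to 1, has zero second
    moment, so the O(h^2) bias term of the KDE vanishes."""
    return 0.5 * (3.0 - u * u) * phi(u)

def kde(x, data, h, kernel):
    """Kernel density estimate at x with bandwidth h."""
    return sum(kernel((x - xi) / h) for xi in data) / (len(data) * h)

if __name__ == "__main__":
    # Deterministic moment checks by quadrature on [-10, 10].
    grid = [-10.0 + 20.0 * i / 4000 for i in range(4001)]
    mass = sum(k4(u) for u in grid) * 0.005
    m2 = sum(u * u * k4(u) for u in grid) * 0.005
    print(f"integral {mass:.4f}, second moment {m2:.4f}")  # ≈ 1.0000, ≈ 0.0000

    rng = random.Random(0)
    data = [rng.gauss(0.0, 1.0) for _ in range(2000)]
    est = kde(0.0, data, h=0.4, kernel=k4)
    print(f"f(0) estimate {est:.3f} (true {1.0 / math.sqrt(2.0 * math.pi):.3f})")
```

Unlike a second-order kernel, K4 can take negative values, which is the usual price of this kind of bias reduction.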
NLO corrections to the Kernel of the BKP-equations
Energy Technology Data Exchange (ETDEWEB)
Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)
2012-10-02
We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.
Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...
African Journals Online (AJOL)
This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...
African Journals Online (AJOL)
This paper proposes the use of adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
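The adaptive-kernel idea itself, as distinct from the bootstrap boosting algorithm of the abstract, can be sketched with Abramson's square-root law: a pilot fixed-bandwidth estimate assigns each observation a local bandwidth proportional to f̂(x_i)^(-1/2), widening kernels in sparse regions and narrowing them in dense ones. All bandwidth values below are illustrative assumptions.

```python
import math
import random

def gauss_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def fixed_kde(x, data, h):
    """Ordinary fixed-bandwidth kernel density estimate."""
    return sum(gauss_kernel((x - xi) / h) for xi in data) / (len(data) * h)

def adaptive_kde(x, data, h0):
    """Abramson-style adaptive estimate: a pilot fixed-bandwidth estimate
    sets local bandwidths h_i = h0 * sqrt(g / f_pilot(x_i)), where g is the
    geometric mean of the pilot values."""
    pilot = [max(fixed_kde(xi, data, h0), 1e-12) for xi in data]
    g = math.exp(sum(math.log(p) for p in pilot) / len(pilot))
    return sum(
        gauss_kernel((x - xi) / (h0 * math.sqrt(g / p))) / (h0 * math.sqrt(g / p))
        for xi, p in zip(data, pilot)
    ) / len(data)

if __name__ == "__main__":
    rng = random.Random(42)
    data = [rng.gauss(0.0, 1.0) for _ in range(400)]
    true0 = 1.0 / math.sqrt(2.0 * math.pi)
    print(f"fixed    {fixed_kde(0.0, data, 0.35):.3f}")
    print(f"adaptive {adaptive_kde(0.0, data, 0.35):.3f}  (true {true0:.3f})")
```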
Kernel maximum autocorrelation factor and minimum noise fraction transformations
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2010-01-01
in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...
7 CFR 51.1441 - Half-kernel.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...
7 CFR 51.2296 - Three-fourths half kernel.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...
7 CFR 981.401 - Adjusted kernel weight.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...
7 CFR 51.1403 - Kernel color classification.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...
The Linux kernel as flexible product-line architecture
M. de Jonge (Merijn)
2002-01-01
The Linux kernel source tree is huge (>125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select what
Calculation of point isotropic buildup factors of gamma rays for water and lead
Directory of Open Access Journals (Sweden)
A. S. H.
2001-12-01
Full Text Available Exposure buildup factors for water and lead have been calculated by the Monte Carlo method for an isotropic point source in an infinite homogeneous medium, using the latest cross sections available on the Internet. The types of interactions considered are: photoelectric effect, incoherent (or bound-electron) Compton scattering, coherent (or Rayleigh) scattering and pair production. Fluorescence radiations have also been taken into account for lead. For each material, calculations were made at 10 gamma-ray energies in the 40 keV to 10 MeV range and up to penetration depths of 10 mean free paths at each energy point. The results presented in this paper can be considered as modified gamma-ray exposure buildup factors and be used in radiation shielding designs.
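Once tabulated, buildup factors of this kind enter shielding estimates through the point-kernel formula: the uncollided exposure term exp(-μr)/(4πr²) is multiplied by B(μr) to restore the scattered contribution. Below is a minimal sketch of that application step, using a Berger-form buildup factor B(x) = 1 + a·x·exp(b·x) with hypothetical coefficients; real values are tabulated per material and photon energy and are not taken from this paper.

```python
import math

def exposure_point_kernel(S, mu, r, a, b):
    """Point-kernel exposure estimate: uncollided term attenuated over
    x = mu*r mean free paths, multiplied by a Berger-form buildup factor
    B(x) = 1 + a*x*exp(b*x) to restore the scattered component.
    S is the source strength folded with the flux-to-exposure factor."""
    x = mu * r                                  # penetration depth in mean free paths
    buildup = 1.0 + a * x * math.exp(b * x)     # B(x) >= 1; B = 1 means uncollided only
    uncollided = S * math.exp(-x) / (4.0 * math.pi * r * r)
    return buildup * uncollided

if __name__ == "__main__":
    # Hypothetical Berger coefficients, for illustration only.
    a, b = 1.0, 0.05
    S, mu = 1.0, 0.1   # source strength, attenuation coefficient (1/cm)
    for r in (10.0, 50.0, 100.0):
        print(f"r = {r:5.1f} cm  exposure = {exposure_point_kernel(S, mu, r, a, b):.3e}")
```

Setting a = 0 recovers the bare uncollided exposure, so the ratio of the two calls is exactly B(μr).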
Digital signal processing with kernel methods
Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo
2018-01-01
A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...
Parsimonious Wavelet Kernel Extreme Learning Machine
Directory of Open Access Journals (Sweden)
Wang Qin
2015-11-01
Full Text Available In this study, a parsimonious scheme for wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into kernel extreme learning machine (KELM). In the wavelet analysis, bases that were localized in time and frequency to represent various signals effectively were used. The wavelet kernel extreme learning machine (WELM) maximized its capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm also incorporated significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results achieved on the synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
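The kernel-ELM step on its own (without the parsimonious pruning stage of the paper) can be sketched in a few lines: solve (K + I/C)α = y on a wavelet Gram matrix and predict with f(x) = Σ αᵢ k(x, xᵢ). The sketch below uses a Morlet-type translation-invariant wavelet kernel; the kernel width, ridge parameter and training grid are illustrative assumptions, not values from the paper.

```python
import math

def wavelet_kernel(x, y, a=0.5):
    """Morlet-type wavelet kernel: cos(1.75*d/a) * exp(-d^2/(2*a^2)),
    d = x - y (1-D inputs). Its Fourier transform is nonnegative, so the
    Gram matrix is positive definite."""
    d = x - y
    return math.cos(1.75 * d / a) * math.exp(-d * d / (2.0 * a * a))

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def kelm_fit(xs, ys, C=1e4):
    """Kernel ELM: alpha = (K + I/C)^-1 y; predict f(x) = sum_i alpha_i k(x, x_i)."""
    n = len(xs)
    K = [[wavelet_kernel(xs[i], xs[j]) + (1.0 / C if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = gauss_solve(K, ys)
    return lambda x: sum(a * wavelet_kernel(x, xi) for a, xi in zip(alpha, xs))

if __name__ == "__main__":
    xs = [i * 0.25 for i in range(25)]      # training grid on [0, 6]
    ys = [math.sin(x) for x in xs]
    f = kelm_fit(xs, ys)
    print(f"f(1.0) = {f(1.0):.3f}  sin(1.0) = {math.sin(1.0):.3f}")
```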
Ensemble Approach to Building Mercer Kernels
National Aeronautics and Space Administration — This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive...
Effective exchange potentials for electronically inelastic scattering
International Nuclear Information System (INIS)
Schwenke, D.W.; Staszewska, G.; Truhlar, D.G.
1983-01-01
We propose new methods for solving the electron scattering close coupling equations employing equivalent local exchange potentials in place of the continuum-multiconfiguration-Hartree-Fock-type exchange kernels. The local exchange potentials are Hermitian. They have the correct symmetry for any symmetries of excited electronic states included in the close coupling expansion, and they have the same limit at very high energy as previously employed exchange potentials. Comparison of numerical calculations employing the new exchange potentials with the results obtained with the standard nonlocal exchange kernels shows that the new exchange potentials are more accurate than the local exchange approximations previously available for electronically inelastic scattering. We anticipate that the new approximations will be most useful for intermediate-energy electronically inelastic electron-molecule scattering
Investigating source processes of isotropic events
Chiang, Andrea
explosion. In contrast, recovering the announced explosive yield using seismic moment estimates from moment tensor inversion remains challenging but we can begin to put error bounds on our moment estimates using the NSS technique. The estimation of seismic source parameters is dependent upon having a well-calibrated velocity model to compute the Green's functions for the inverse problem. Ideally, seismic velocity models are calibrated through broadband waveform modeling; however, in regions of low seismicity, velocity models derived from body or surface wave tomography may be employed. Whether a velocity model is 1D or 3D, or based on broadband seismic waveform modeling or the various tomographic techniques, the uncertainty in the velocity model can be the greatest source of error in moment tensor inversion. These errors have not been fully investigated for the nuclear discrimination problem. To study the effects of unmodeled structures on the moment tensor inversion, we set up a synthetic experiment where we produce synthetic seismograms for a 3D model (Moschetti et al., 2010) and invert these data using Green's functions computed with a 1D velocity model (Song et al., 1996) to evaluate the recoverability of input solutions, paying particular attention to biases in the isotropic component. The synthetic experiment results indicate that the 1D model assumption is valid for moment tensor inversions at periods as short as 10 seconds for the 1D western U.S. model (Song et al., 1996). The correct earthquake mechanisms and source depth are recovered with statistically insignificant isotropic components as determined by the F-test. Shallow explosions are biased by the theoretical ISO-CLVD tradeoff but the tectonic release component remains low, and the tradeoff can be eliminated with constraints from P wave first motion.
Path-calibration to the 1D model can reduce non-double-couple components in earthquakes, non-isotropic components in explosions and composite sources and improve
Fermionic NNLO contributions to Bhabha scattering
International Nuclear Information System (INIS)
Actis, S.; Riemann, T.; Czakon, M.; Uniwersytet Slaski, Katowice; Gluza, J.
2007-10-01
We derive the two-loop corrections to Bhabha scattering from heavy fermions using dispersion relations. The double-box contributions are expressed by three kernel functions. Convoluting the perturbative kernels with fermionic threshold functions or with hadronic data allows one to determine numerical results for small electron mass m_e, combined with arbitrary values of the fermion mass m_f in the loop, m_e^2 << m_f^2, or with hadronic insertions. We present numerical results for m_f = m_mu, m_tau, m_top at typical small- and large-angle kinematics ranging from 1 GeV to 500 GeV. (orig.)
Control Transfer in Operating System Kernels
1994-05-13
microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the...review how I modified the Mach 3.0 kernel to use continuations. Because of Mach’s message-passing microkernel structure, interprocess communication was...critical control transfer paths, deeply- nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating
Energy Technology Data Exchange (ETDEWEB)
Silva, Chinthaka M., E-mail: silvagw@ornl.gov [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, Tennessee TN 37831-6223 (United States); Materials Science and Engineering, The University of Tennessee Knoxville, TN 37996-2100, United States. (United States); Lindemer, Terrence B.; Hunt, Rodney D.; Collins, Jack L.; Terrani, Kurt A.; Snead, Lance L. [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, Tennessee TN 37831-6223 (United States)
2013-11-15
Silicon carbide (SiC) is suggested as an oxygen getter in UO{sub 2} kernels used for tristructural isotropic (TRISO) particle fuels and to prevent kernel migration during irradiation. Scanning electron microscopy and X-ray diffractometry analyses performed on sintered kernels verified that an internal gelation process can be used to incorporate SiC in UO{sub 2} fuel kernels. Even though the presence of UC in the argon (Ar) and Ar–4%H{sub 2} sintered samples suggested a reduction of the SiC content of up to 3.5 and 1.4 mol%, respectively, the presence of other silicon-related chemical phases indicates the preservation of silicon in the kernels during the sintering process. UC formation was presumed to occur by two reactions. The first was the reaction of SiC with its protective SiO{sub 2} oxide layer on SiC grains to produce volatile SiO and free carbon that subsequently reacted with UO{sub 2} to form UC. The second was direct UO{sub 2} reaction with SiC grains to form SiO, CO, and UC. A slightly higher density and UC content were observed in the sample sintered in Ar–4%H{sub 2}, but both atmospheres produced kernels with ∼95% of theoretical density. It is suggested that incorporating CO in the sintering gas could prevent UC formation and preserve the initial SiC content.
Spin-isotropic continuum of spin excitations in antiferromagnetically ordered Fe1.07Te
Song, Yu; Lu, Xingye; Regnault, L.-P.; Su, Yixi; Lai, Hsin-Hua; Hu, Wen-Jun; Si, Qimiao; Dai, Pengcheng
2018-02-01
Unconventional superconductivity typically emerges in the presence of quasidegenerate ground states, and the associated intense fluctuations are likely responsible for generating the superconducting state. Here we use polarized neutron scattering to study the spin space anisotropy of spin excitations in Fe1.07Te exhibiting bicollinear antiferromagnetic (AF) order, the parent compound of FeTe1 -xSex superconductors. We confirm that the low-energy spin excitations are transverse spin waves, consistent with a local-moment origin of the bicollinear AF order. While the ordered moments lie in the a b plane in Fe1.07Te , it takes less energy for them to fluctuate out of plane, similar to BaFe2As2 and NaFeAs. At energies above E ≳20 meV, we find magnetic scattering to be dominated by an isotropic continuum that persists up to at least 50 meV. Although the isotropic spin excitations cannot be ascribed to spin waves from a long-range-ordered local-moment antiferromagnet, the continuum can result from the bicollinear magnetic order ground state of Fe1.07Te being quasidegenerate with plaquette magnetic order.
Isotropic nuclear graphites; the effect of neutron irradiation
International Nuclear Information System (INIS)
Lore, J.; Buscaillon, A.; Mottet, P.; Micaud, G.
1977-01-01
Several isotropic graphites have been manufactured using different forming processes and fillers such as needle coke, regular coke, or pitch coke. Their properties are described in this paper. Specimens of these products have been irradiated in the fast reactor Rapsodie between 400 and 1400 deg C, at fluences up to 1.7×10^21 n.cm^-2 (PHI.FG). The results show an isotropic behavior under neutron irradiation, but the induced dimensional changes are higher than those of isotropic coke graphites, although they are lower than those of conventional extruded graphites made with the same coke
Process for the preparation of isotropic petroleum coke
International Nuclear Information System (INIS)
Kegler, W.H.; Huyser, M.E.
1975-01-01
A description is given of a process for preparing isotropic coke from an oil residue charge. It includes blowing air into the residue until it reaches a softening temperature of around 49 to 116 deg C, delayed coking of the blown residue at a temperature of around 247 to 640 deg C and a pressure between around 1.38×10^5 and 1.72×10^6 Pa, and recovery of isotropic coke with a thermal expansion coefficient ratio under approximately 1.5. The isotropic coke is used for preparing hexagonal graphite bars for nuclear reactor moderators [fr]
Sudden Relaminarization and Lifetimes in Forced Isotropic Turbulence.
Linkmann, Moritz F; Morozov, Alexander
2015-09-25
We demonstrate an unexpected connection between isotropic turbulence and wall-bounded shear flows. We perform direct numerical simulations of isotropic turbulence forced at large scales at moderate Reynolds numbers and observe sudden transitions from a chaotic dynamics to a spatially simple flow, analogous to the laminar state in wall bounded shear flows. We find that the survival probabilities of turbulence are exponential and the typical lifetimes increase superexponentially with the Reynolds number. Our results suggest that both isotropic turbulence and wall-bounded shear flows qualitatively share the same phase-space dynamics.
Quantum tomography, phase-space observables and generalized Markov kernels
International Nuclear Information System (INIS)
Pellonpaeae, Juha-Pekka
2009-01-01
We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.
Absorption line profiles in a moving atmosphere - A single scattering linear perturbation theory
Hays, P. B.; Abreu, V. J.
1989-01-01
An integral equation is derived which linearly relates Doppler perturbations in the spectrum of atmospheric absorption features to the wind system which creates them. The perturbation theory is developed using a single scattering model, which is validated against a multiple scattering calculation. The nature and basic properties of the kernels in the integral equation are examined. It is concluded that the kernels are well behaved and that wind velocity profiles can be recovered using standard inversion techniques.
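The recovery step mentioned at the end can be illustrated by discretizing a Fredholm integral equation of the first kind and applying Tikhonov regularization, one of the standard inversion techniques alluded to. The kernel shape and the "wind profile" below are hypothetical stand-ins, not the paper's kernels.

```python
import math

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def tikhonov_invert(K, d, lam=1e-8):
    """Solve (K^T K + lam*I) w = K^T d: regularized least-squares inversion."""
    n = len(K)
    KtK = [[sum(K[k][i] * K[k][j] for k in range(n)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Ktd = [sum(K[k][i] * d[k] for k in range(n)) for i in range(n)]
    return gauss_solve(KtK, Ktd)

if __name__ == "__main__":
    # Hypothetical well-behaved kernel and wind profile, for illustration only.
    n, sigma = 15, 0.15
    z = [i / (n - 1) for i in range(n)]
    dz = z[1] - z[0]
    K = [[math.exp(-abs(zi - zj) / sigma) * dz for zj in z] for zi in z]
    w_true = [math.sin(math.pi * zi) for zi in z]                     # "wind profile"
    d = [sum(K[i][j] * w_true[j] for j in range(n)) for i in range(n)]  # observations
    w_est = tikhonov_invert(K, d)
    print(f"max recovery error: {max(abs(a - b) for a, b in zip(w_est, w_true)):.2e}")
```

With noisy data the regularization parameter would be raised to trade resolution against noise amplification; here the data are noise-free, so a tiny value suffices.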
Sitompul, Monica Angelina
2015-01-01
Determination of the iodine value by titration has been conducted for several samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO). The analysis gave iodine values of 0.16 g I2/100 g for Hydrogenated Palm Kernel Oil (A), 0.20 g I2/100 g for Hydrogenated Palm Kernel Oil (B), and 0.24 g I2/100 g for Hydrogenated Palm Kernel Oil (C); and for Refined Bleached Deodorized Palm Kernel Oil (A) = 17.51 g I2/100 g, Refined Bleached Deodorized Palm Kernel ...
van Beurden, M.C.; Setija, Irwan
2017-01-01
We present two adapted formulations, one tailored to isotropic media and one for general anisotropic media, of the normal vector field framework previously introduced to improve convergence near arbitrarily shaped material interfaces in spectral simulation methods for periodic scattering geometries.
CSIR Research Space (South Africa)
Joubert, S
2006-05-01
Full Text Available [CSIR Material Science and Manufacturing slides, "Transversely Isotropic Cylinder": the recoverable content is the equations of motion in cylindrical coordinates (r, φ, z) for displacements (u, v, w) and density ρ: ∂σ_rr/∂r + (1/r)∂σ_rφ/∂φ + ∂σ_rz/∂z + (σ_rr − σ_φφ)/r = ρü; ∂σ_rφ/∂r + (1/r)∂σ_φφ/∂φ + ∂σ_φz/∂z + (2/r)σ_rφ = ρv̈; ∂σ_rz/∂r + (1/r)∂σ_φz/∂φ + ∂σ_zz/∂z + (1/r)σ_rz = ρẅ]
Djebbi, Ramzi; Plessix, René -É douard; Alkhalifah, Tariq Ali
2016-01-01
In anisotropic media, several parameters govern the propagation of the compressional waves. To correctly invert surface recorded seismic data in anisotropic media, a multi-parameter inversion is required. However, a tradeoff between parameters
Photon beam convolution using polyenergetic energy deposition kernels
International Nuclear Information System (INIS)
Hoban, P.W.; Murray, D.C.; Round, W.H.
1994-01-01
In photon beam convolution calculations where polyenergetic energy deposition kernels (EDKs) are used, the primary photon energy spectrum should be correctly accounted for in Monte Carlo generation of EDKs. This requires the probability of interaction, determined by the linear attenuation coefficient, μ, to be taken into account when primary photon interactions are forced to occur at the EDK origin. The use of primary and scattered EDKs generated with a fixed photon spectrum can give rise to an error in the dose calculation due to neglecting the effects of beam hardening with depth. The proportion of primary photon energy that is transferred to secondary electrons increases with depth of interaction, due to the increase in the ratio μ_ab/μ as the beam hardens. Convolution depth-dose curves calculated using polyenergetic EDKs generated for the primary photon spectra which exist at depths of 0, 20 and 40 cm in water show a fall-off which is too steep when compared with EGS4 Monte Carlo results. A beam hardening correction factor applied to primary and scattered 0 cm EDKs, based on the ratio of kerma to terma at each depth, gives primary, scattered and total dose in good agreement with Monte Carlo results. (Author)
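The depth dependence of μ_ab/μ and the kerma-to-terma correction factor can be reproduced with a toy two-component spectrum. All coefficients below are hypothetical, chosen only so that the softer component attenuates faster than the harder one; they are not the paper's data.

```python
import math

# Hypothetical two-component photon spectrum: (weight, mu, mu_ab) per line.
# mu = linear attenuation coefficient, mu_ab = energy-absorption coefficient (1/cm).
SPECTRUM = [
    (0.6, 0.25, 0.08),   # softer component: attenuates faster
    (0.4, 0.05, 0.03),   # harder component: attenuates slower
]

def terma(depth):
    """Total energy released per unit mass ~ mu-weighted attenuated fluence."""
    return sum(w * mu * math.exp(-mu * depth) for w, mu, _ in SPECTRUM)

def kerma(depth):
    """Kinetic energy transferred to electrons ~ mu_ab-weighted attenuated fluence."""
    return sum(w * mu_ab * math.exp(-mu * depth) for w, mu, mu_ab in SPECTRUM)

if __name__ == "__main__":
    for d in (0.0, 20.0, 40.0):
        ratio = kerma(d) / terma(d)   # beam-hardening correction factor
        print(f"depth {d:4.0f} cm  kerma/terma = {ratio:.3f}")
```

As the soft component dies out with depth, the kerma/terma ratio rises, which is the beam-hardening effect the correction factor compensates for.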
Weak convergence to isotropic complex symmetric α-stable random measure.
Wang, Jun; Li, Yunmeng; Sang, Liheng
2017-01-01
In this paper, we prove that an isotropic complex symmetric α-stable random measure can be approximated by a complex process constructed from integrals based on a Poisson process with random intensity.
Metrical relationships in a standard triangle in an isotropic plane
Kolar-Šuper, R.; Kolar-Begović, Z.; Volenec, V.; Beban-Brkić, J.
2005-01-01
Each allowable triangle of an isotropic plane can be set in a standard position, in which it is possible to prove geometric properties analytically in a simplified and easier way by means of the algebraic theory developed in this paper.
Efficient anisotropic wavefield extrapolation using effective isotropic models
Alkhalifah, Tariq Ali; Ma, X.; Waheed, Umair bin; Zuberi, Mohammad
2013-01-01
Isotropic wavefield extrapolation is more efficient than anisotropic extrapolation, and this is especially true when the anisotropy of the medium is tilted (from the vertical). We use the kinematics of the wavefield, appropriately represented
Isotropic 2D quadrangle meshing with size and orientation control
Pellenard, Bertrand; Alliez, Pierre; Morvan, Jean-Marie
2011-01-01
We propose an approach for automatically generating isotropic 2D quadrangle meshes from arbitrary domains with a fine control over sizing and orientation of the elements. At the heart of our algorithm is an optimization procedure that, from a coarse
Scanning anisotropy parameters in horizontal transversely isotropic media
Masmoudi, Nabil; Stovas, Alexey; Alkhalifah, Tariq Ali
2016-01-01
in reservoir characterisation, specifically in terms of fracture delineation. We propose a travel-time-based approach to estimate the anellipticity parameter η and the symmetry axis azimuth ϕ of a horizontal transversely isotropic medium, given an inhomogeneous
Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM
Directory of Open Access Journals (Sweden)
Chenchao Zhao
2018-01-01
Full Text Available Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
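The eigenmode-series form of the hyperspherical heat kernel is easiest to see on the circle S^1, the lowest-dimensional hypersphere, where the series is elementary. The sketch below illustrates the series idea only; it is not the paper's high-dimensional Gegenbauer expansion.

```python
import math

def circle_heat_kernel(theta, t, nmax=200):
    """Exact heat kernel on the unit circle S^1 as an eigenmode series:
    K_t(theta) = (1/(2*pi)) * (1 + 2 * sum_n exp(-n^2 t) cos(n theta)).
    For small t it approaches the Gaussian (parametrix) heuristic."""
    s = 1.0
    for n in range(1, nmax + 1):
        s += 2.0 * math.exp(-n * n * t) * math.cos(n * theta)
    return s / (2.0 * math.pi)

def similarity(x, y, t=0.3):
    """Heat-kernel similarity of two points on the circle (angles in radians),
    usable as a positive-definite SVM kernel on S^1."""
    return circle_heat_kernel(x - y, t)

if __name__ == "__main__":
    # The kernel is a probability density in theta: it integrates to 1.
    N = 2000
    mass = sum(circle_heat_kernel(2.0 * math.pi * k / N, 0.3) for k in range(N)) * (2.0 * math.pi / N)
    print(f"total mass {mass:.6f}")                    # ≈ 1.000000
    print(f"K(0)  = {circle_heat_kernel(0.0, 0.3):.4f}")
    print(f"K(pi) = {circle_heat_kernel(math.pi, 0.3):.6f}")
```

At t = 0.3 the peak value already matches the flat-space Gaussian 1/sqrt(4*pi*t) closely, while the strictly positive value at the antipode is what the truncated parametrix approximation misses.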
Guo, Qi; Shen, Shu-Ting
2016-04-29
There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with non-flux boundary conditions. The reproducing kernel method possesses significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularities. Therefore, study of the application of reproducing kernels would be advantageous. The aim is to apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with other methods at present. A two-dimensional reproducing kernel function in space is constructed and applied in computing the solution of the two-dimensional cardiac tissue model by means of the difference method through time and the reproducing kernel method through space. Compared with other methods, this method holds several advantages such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in the solutions of the two-dimensional cardiac tissue model.
Crack Tip Creep Deformation Behavior in Transversely Isotropic Materials
International Nuclear Information System (INIS)
Ma, Young Wha; Yoon, Kee Bong
2009-01-01
Theoretical mechanics analysis and finite element simulation were performed to investigate creep deformation behavior at the crack tip of transversely isotropic materials under small-scale creep (SSC) conditions. The mechanical behavior of the material was assumed to be elastic-2nd-stage creep, in which the elastic modulus (E), Poisson's ratio (ν) and creep stress exponent (n) were isotropic and only the creep coefficient was transversely isotropic. Based on the mechanics analysis of the material behavior, a constitutive equation for transversely isotropic creep behavior was formulated and an equivalent creep coefficient was proposed under plane strain conditions. Creep deformation behavior at the crack tip was investigated through finite element analysis. The results of the finite element analysis showed that creep deformation in transversely isotropic materials is dominant at the rear of the crack tip. This result was more obvious when the load was applied along a principal axis of anisotropy. Based on the results of the mechanics analysis and the finite element simulation, a corrected estimation scheme for the creep zone size was proposed in order to evaluate the creep deformation behavior at the crack tip of transversely isotropic creeping materials
Multiple scattering processes: inverse and direct
International Nuclear Information System (INIS)
Kagiwada, H.H.; Kalaba, R.; Ueno, S.
1975-01-01
The purpose of the work is to formulate inverse problems in radiative transfer, to introduce the functions b and h as parameters of internal intensity in homogeneous slabs, and to derive initial value problems to replace the more traditional boundary value problems and integral equations of multiple scattering with high computational efficiency. The discussion covers multiple scattering processes in a one-dimensional medium; isotropic scattering in homogeneous slabs illuminated by parallel rays of radiation; the theory of functions b and h in homogeneous slabs illuminated by isotropic sources of radiation either at the top or at the bottom; inverse and direct problems of multiple scattering in slabs including internal sources; multiple scattering in inhomogeneous media, with particular reference to inverse problems for estimation of layers and total thickness of inhomogeneous slabs and to multiple scattering problems with Lambert's law and specular reflectors underlying slabs; and anisotropic scattering with reduction of the number of relevant arguments through axially symmetric fields and expansion in Legendre functions. Gaussian quadrature data for a seven point formula, a FORTRAN program for computing the functions b and h, and tables of these functions supplement the text
Sigma set scattering equations in nuclear reaction theory
International Nuclear Information System (INIS)
Kowalski, K.L.; Picklesimer, A.
1982-01-01
The practical applications of partially summed versions of the Rosenberg equations involving only special subsets (sigma sets) of the physical amplitudes are investigated with special attention to the Pauli principle. The requisite properties of the transformations from the pair labels to the set of partitions labeling the sigma set of asymptotic channels are established. New, well-defined, scattering integral equations for the antisymmetrized transition operators are found which possess much less coupling among the physically distinct channels than hitherto expected for equations with kernels of equal complexity. In several cases of physical interest in nuclear physics, a single connected-kernel equation is obtained for the relevant antisymmetrized elastic scattering amplitude
Aflatoxin contamination of developing corn kernels.
Amer, M A
2005-01-01
Preharvest contamination of corn with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. Stage of growth and location of kernels on corn ears were found to be among the important factors in the process of kernel infection with A. flavus & A. parasiticus. The results showed a positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein content were reduced in the case of both pathogens. Shoot system length, seedling fresh weight and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected: their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease. On the other hand, total phenolic compounds were increased. Histopathological studies indicated that A. flavus & A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus & A. parasiticus and aflatoxin production.
Analog forecasting with dynamics-adapted kernels
Zhao, Zhizhen; Giannakis, Dimitrios
2016-09-01
Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
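The weighted-ensemble idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name `analog_forecast`, the Gaussian similarity kernel, and the `bandwidth` parameter are assumptions, and the dynamics-adapted features (delay coordinates, directional dependence) are omitted.

```python
import numpy as np

def analog_forecast(history, future, query, bandwidth=1.0):
    """Kernel-weighted analog forecast (a minimal sketch).

    history : (n, d) array of past states in the historical record
    future  : (n,) array of the observable one step ahead of each state
    query   : (d,) current initial data
    """
    # Gaussian similarity kernel between the query and each historical state
    d2 = np.sum((history - query) ** 2, axis=1)
    w = np.exp(-d2 / bandwidth**2)
    w /= w.sum()
    # weighted ensemble of analogs instead of the single closest analog
    return np.dot(w, future)
```

Sending `bandwidth` to zero recovers the conventional single-analog (nearest-neighbour) forecast of Lorenz.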
Directory of Open Access Journals (Sweden)
Xin Zhao
2017-01-01
Full Text Available Fungal infection in maize kernels is a major concern worldwide due to toxic metabolites such as mycotoxins, so it is necessary to develop appropriate techniques for early detection of fungal infection in maize kernels. Thirty-six sterilised maize kernels were inoculated each day with Aspergillus parasiticus from one to seven days, and seven groups (D1, D2, D3, D4, D5, D6, D7) were defined based on the incubation time. Another 36 sterilised kernels without fungal inoculation were taken as the control (DC). Hyperspectral images of all kernels were acquired within the spectral range of 921-2529 nm. Background, labels and bad pixels were removed using principal component analysis (PCA) and masking. Separability computation for discrimination of fungal contamination levels indicated that a model based on data from the germ region of individual kernels performed better than one based on the whole kernels. Moreover, samples with a two-day interval were separable. Thus, four groups, DC, D1-2 (D1 and D2), D3-4 (D3 and D4), and D5-7 (D5, D6, and D7), were defined for subsequent classification. Two separate sample sets were prepared to verify the influence of germ orientation on the classification model, that is, germ up versus a 1:1 mixture of germ up and germ down. Two smoothing preprocessing methods (Savitzky-Golay smoothing and moving average smoothing) and three scatter-correction methods (normalization, standard normal variate, and multiplicative scatter correction) were compared according to the performance of the classification model built with support vector machines (SVM). The best model for kernels with germ up showed promising results, with accuracies of 97.92% and 91.67% for the calibration and validation data sets, respectively, while the accuracies of the best model for the mixed kernels were 95.83% and 84.38%. Moreover, five wavelengths (1145, 1408, 1935, 2103, and 2383 nm) were selected as the key
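Of the scatter-correction methods compared above, the standard normal variate (SNV) transform is simple enough to sketch. This is a generic SNV implementation (per-spectrum centering and scaling), not the authors' code:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate scatter correction.

    Each spectrum (row) is centered to zero mean and scaled to unit
    standard deviation, reducing multiplicative scatter effects.

    spectra : (n_samples, n_bands) array of reflectance spectra
    """
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, ddof=1, keepdims=True)
    return (spectra - mean) / std
```

The corrected spectra would then feed the SVM classifier in place of the raw reflectance values.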
OS X and iOS Kernel Programming
Halvorsen, Ole Henry
2011-01-01
OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i
The Classification of Diabetes Mellitus Using Kernel k-means
Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.
2018-01-01
Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from k-means; it uses kernel learning and is therefore able to handle data that are not linearly separable, which distinguishes it from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means has good performance and performs much better than SOM.
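The mechanics of kernel k-means can be sketched directly from a precomputed Gram matrix: squared feature-space distances to each cluster mean are expressed entirely through kernel values. This is a minimal generic sketch, not the study's implementation; the deterministic alternating initialization is an illustrative choice.

```python
import numpy as np

def kernel_kmeans(K, n_clusters, n_iter=50):
    """Kernel k-means on a precomputed Gram matrix K (a minimal sketch).

    Distances to cluster means are computed through kernel values only,
    so data that are not linearly separable can be clustered without an
    explicit feature mapping.
    """
    n = K.shape[0]
    labels = np.arange(n) % n_clusters      # simple deterministic initialization
    for _ in range(n_iter):
        dist = np.full((n, n_clusters), np.inf)
        for k in range(n_clusters):
            mask = labels == k
            nk = mask.sum()
            if nk == 0:
                continue
            # ||phi(x_i) - m_k||^2 up to the K_ii term common to all clusters
            dist[:, k] = (-2.0 * K[:, mask].sum(axis=1) / nk
                          + K[np.ix_(mask, mask)].sum() / nk ** 2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
```

With a radial basis function kernel for `K`, this recovers the usual behaviour of handling clusters that plain k-means cannot separate linearly.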
Object classification and detection with context kernel descriptors
DEFF Research Database (Denmark)
Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping
2014-01-01
Context information is important in object representation. By embedding context cue of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial...... consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...
Evaluation of a scattering correction method for high energy tomography
Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel
2018-01-01
One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of photons scattered by the object and the detector. Scattered photons are deflected from their original path after interacting with the object. This additional contribution results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is seen as an overestimation of the measured intensity and thus an underestimation of absorption, which produces artifacts such as cupping, shading and streaks on the reconstructed images. Moreover, the scattered radiation biases quantitative tomographic reconstruction (for example, atomic number and density measurement with the dual-energy technique). The effect can be significant, and difficult to correct, in the MeV energy range with large objects, due to the higher Scatter to Primary Ratio (SPR). Additionally, incident high-energy photons scattered by the Compton effect are more forward-directed and hence more likely to reach the detector, and in the MeV range the contribution of photons produced by pair production and bremsstrahlung also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses continuously thickness-adapted kernels: analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps used to correct the projections. This approach has proved efficient in producing better sampling of the kernels with respect to the object thickness. The technique is applicable over a wide range of imaging conditions and, since no extra hardware is required, it offers a major advantage especially in those cases where
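The scatter kernel superposition idea can be sketched in 1-D: each pixel spreads scatter according to a kernel chosen by its estimated thickness bin, and the primary signal is recovered by fixed-point iteration. The discrete thickness binning, the kernel shapes, and the iteration count here are illustrative assumptions, not the continuously thickness-adapted parameterization used by the authors.

```python
import numpy as np

def sks_correct(projection, kernels, thickness, n_iter=5):
    """Scatter kernel superposition correction (1-D sketch).

    projection : measured intensity profile (primary + scatter)
    kernels    : dict mapping a thickness bin to a scatter kernel
    thickness  : estimated thickness bin at each pixel (same shape)
    """
    primary = projection.copy()
    for _ in range(n_iter):
        scatter = np.zeros_like(projection)
        for t, kern in kernels.items():
            # each pixel spreads scatter with the kernel matched to its bin
            mask = thickness == t
            scatter += np.convolve(primary * mask, kern, mode="same")
        # subtract the scatter estimate and re-estimate from the new primary
        primary = projection - scatter
    return primary
```

The iteration converges when the scatter kernels are small relative to the primary, which is the regime in which such subtractive corrections are typically applied.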
THERMAL: A routine designed to calculate neutron thermal scattering
International Nuclear Information System (INIS)
Cullen, D.E.
1995-01-01
THERMAL is designed to calculate neutron thermal scattering that is isotropic in the center-of-mass system. At low energy, thermal motion of the target is included; at high energies the target nuclei are assumed to be stationary. The point of transition between low and high energies has been defined to ensure a smooth transition. It is assumed that at low energy the elastic cross section is constant in the center-of-mass system; at high energy the cross section can be of any form. The routine can be used for all energies where the elastic scattering is isotropic in the center-of-mass system, which in most materials holds up to fairly high energy
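The high-energy (stationary-target) branch described above amounts to sampling a cosine uniformly in the centre-of-mass system and converting to the laboratory outgoing energy. This sketch assumes non-relativistic two-body elastic kinematics; it is an illustration of the physics, not the THERMAL source itself.

```python
import numpy as np

def scatter_energy(E, A, rng):
    """Outgoing energy after elastic scattering that is isotropic in the
    centre-of-mass system, for a stationary target of mass ratio A."""
    mu_cm = rng.uniform(-1.0, 1.0)                     # isotropic CM cosine
    return E * (A * A + 2.0 * A * mu_cm + 1.0) / (A + 1.0) ** 2
```

The outgoing energy lies between E*((A-1)/(A+1))^2 and E, and averaging over the isotropic cosine gives a mean energy ratio of (A^2+1)/(A+1)^2.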
Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method
Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing
2017-01-01
Abstract. Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach’s feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method. PMID:28464120
Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu
2017-12-15
Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the observation that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than without it. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.
Kernel abortion in maize. II. Distribution of 14C among kernel carbohydrates
International Nuclear Information System (INIS)
Hanft, J.M.; Jones, R.J.
1986-01-01
This study was designed to compare the uptake and distribution of 14C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [14C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [14C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared with kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of 14C in endosperm fructose, glucose, and sucrose
Fluidization calculation on nuclear fuel kernel coating
International Nuclear Information System (INIS)
Sukarsono; Wardaya; Indra-Suryawan
1996-01-01
The fluidization of nuclear fuel kernel coating was calculated. The bottom of the reactor was in the form of a cone; on top of the cone there was a cylinder. The diameter of the cylinder used for fluidization was 2 cm, and the diameter at the upper part of the cylinder was 3 cm. Fluidization took place in the cone and the first cylinder. The maximum and minimum gas velocities for varied kernel diameters, and the porosity and bed height for varied gas stream velocities, were calculated. The calculation was done with a BASIC program
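The abstract does not state which correlation was used. As an illustration, the minimum fluidization velocity of a coated-particle bed is often estimated with the Wen-Yu correlation; the property values in the usage example (a dense 500-µm kernel fluidized in a room-temperature gas) are assumptions, not the report's data.

```python
import math

def u_mf_wen_yu(d_p, rho_p, rho_g, mu_g, g=9.81):
    """Minimum fluidization velocity from the Wen-Yu correlation (a sketch).

    d_p   : particle diameter [m]
    rho_p : particle density [kg/m^3]
    rho_g : gas density [kg/m^3]
    mu_g  : gas dynamic viscosity [Pa s]
    """
    # Archimedes number balances buoyant weight against viscous forces
    Ar = rho_g * (rho_p - rho_g) * g * d_p ** 3 / mu_g ** 2
    # Wen-Yu: Re_mf = sqrt(33.7^2 + 0.0408 Ar) - 33.7
    Re_mf = math.sqrt(33.7 ** 2 + 0.0408 * Ar) - 33.7
    return Re_mf * mu_g / (rho_g * d_p)
```

For example, `u_mf_wen_yu(500e-6, 10500.0, 1.2, 1.8e-5)` gives the minimum fluidization velocity for an assumed heavy 500-µm kernel in air-like gas; larger kernels require higher gas velocities, consistent with the varied-diameter calculations described above.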
Reduced multiple empirical kernel learning machine.
Wang, Zhe; Lu, MingZhe; Gao, Daqi
2015-02-01
Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL has high time and space complexity in contrast to single kernel learning, which is undesirable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, meaning that the dot product of two vectors in the original feature space equals that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings a simpler computation and needs less storage space, especially in testing. Finally, the experimental results show that RMEKLM is efficient and effective in terms of both complexity and classification. The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3
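The core of empirical kernel mapping, explicit vectors whose dot products reproduce the Gram matrix, can be sketched via an eigendecomposition. This is a generic EKM illustration; the paper's Gauss-elimination reduction and the multiple-kernel machinery are not reproduced here.

```python
import numpy as np

def empirical_kernel_map(K, tol=1e-10):
    """Empirical kernel mapping: explicit finite-dimensional vectors whose
    pairwise dot products reproduce the Gram matrix K (a minimal sketch)."""
    vals, vecs = np.linalg.eigh(K)
    keep = vals > tol                      # drop numerically null directions
    # rows of Phi are the mapped training samples: Phi Phi^T == K
    Phi = vecs[:, keep] * np.sqrt(vals[keep])
    return Phi
```

Because the mapped vectors live in at most n dimensions (n training samples), any linear learner can then be trained on `Phi` directly, which is the practical appeal of the EKM form.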
Comparative Analysis of Kernel Methods for Statistical Shape Learning
National Research Council Canada - National Science Library
Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen
2006-01-01
.... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...
Variable kernel density estimation in high-dimensional feature spaces
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2017-02-01
Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
Influence of differently processed mango seed kernel meal on ...
African Journals Online (AJOL)
Influence of differently processed mango seed kernel meal on performance response of west African ... and TD (consisting of spear grass and parboiled mango seed kernel meal with concentrate diet in a ratio of 35:30:35). ...
On methods to increase the security of the Linux kernel
International Nuclear Information System (INIS)
Matvejchikov, I.V.
2014-01-01
Methods to increase the security of the Linux kernel for the implementation of imposed protection tools have been examined. The methods of incorporation into various subsystems of the kernel on the x86 architecture have been described [ru]
Linear and kernel methods for multi- and hypervariate change detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Canty, Morton J.
2010-01-01
. Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual...... formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution......, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...
Kernel methods in orthogonalization of multi- and hypervariate data
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2009-01-01
A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis...... via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution also known as the kernel trick these inner products between the mappings...... are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...
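The Q-mode/dual formulation can be sketched for the simplest case, kernel PCA computed from a Gram matrix alone (kernel MAF adds an autocorrelation structure that is not shown here). This is a generic illustration, not the paper's code:

```python
import numpy as np

def kernel_pca(K, n_components):
    """Kernel PCA from a Gram matrix only (Q-mode/dual formulation sketch).

    The data enter only through inner products in K, so replacing K by a
    nonlinear kernel function implicitly maps the data to feature space.
    """
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering in feature space
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    # projections of the training samples onto the principal axes
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

With the linear kernel K = X X^T this reduces to ordinary PCA scores, which is a convenient sanity check before substituting a nonlinear kernel.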
International Nuclear Information System (INIS)
Russell, K.R.; Saxner, M.; Ahnesjoe, A.; Montelius, A.; Grusell, E.; Dahlgren, C.V.
2000-01-01
The implementation of two algorithms for calculating dose distributions for radiation therapy treatment planning of intermediate energy proton beams is described. A pencil kernel algorithm and a depth penetration algorithm have been incorporated into a commercial three-dimensional treatment planning system (Helax-TMS, Helax AB, Sweden) to allow conformal planning techniques using irregularly shaped fields, proton range modulation, range modification and dose calculation for non-coplanar beams. The pencil kernel algorithm is developed from the Fermi-Eyges formalism and Moliere multiple-scattering theory with range straggling corrections applied. The depth penetration algorithm is based on the energy loss in the continuous slowing down approximation with simple correction factors applied to the beam penumbra region and has been implemented for fast, interactive treatment planning. Modelling of the effects of air gaps and range modifying device thickness and position are implicit to both algorithms. Measured and calculated dose values are compared for a therapeutic proton beam in both homogeneous and heterogeneous phantoms of varying complexity. Both algorithms model the beam penumbra as a function of depth in a homogeneous phantom with acceptable accuracy. Results show that the pencil kernel algorithm is required for modelling the dose perturbation effects from scattering in heterogeneous media. (author)
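For a depth-independent scattering power T, the Fermi-Eyges lateral variance reduces to a closed form. This is a textbook simplification for illustration only, not the planning-system algorithm, which uses depth-dependent scattering power from Molière theory plus range-straggling corrections:

```python
import numpy as np

def pencil_sigma(z, T):
    """Lateral RMS spread of a pencil beam from Fermi-Eyges theory with a
    constant scattering power T: sigma^2(z) = int_0^z T (z - z')^2 dz' = T z^3 / 3."""
    return np.sqrt(T * z ** 3 / 3.0)

def pencil_dose_profile(x, z, T):
    """Normalized Gaussian lateral profile of the pencil kernel at depth z."""
    s = pencil_sigma(z, T)
    return np.exp(-x ** 2 / (2.0 * s ** 2)) / (np.sqrt(2.0 * np.pi) * s)
```

A broad-field dose is then the superposition of such Gaussian pencils across the field aperture, which is how pencil kernel algorithms model the penumbra as a function of depth.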
Sparse Event Modeling with Hierarchical Bayesian Kernel Methods
2016-01-05
The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on...several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function
Relationship between attenuation coefficients and dose-spread kernels
International Nuclear Information System (INIS)
Boyer, A.L.
1988-01-01
Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods
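The convolution relation itself, and the energy-conservation check that a dose-spread kernel should satisfy, can be sketched in 1-D. This is an illustration of the stated principle, not the paper's derivation:

```python
import numpy as np

def dose_from_kernel(terma, kernel):
    """Dose as the superposition (convolution) of a dose-spread kernel with
    the primary energy released per unit mass (terma), here in 1-D.

    A kernel that sums to 1 conserves energy: away from the array edges,
    dose.sum() equals terma.sum().
    """
    return np.convolve(terma, kernel, mode="same")
```

The attenuation-coefficient relations discussed above function as exactly this kind of check: the kernel's integral (zeroth moment) is constrained by conservation of the energy removed from the primary beam.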
Fabrication of Uranium Oxycarbide Kernels for HTR Fuel
International Nuclear Information System (INIS)
Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric
2010-01-01
Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and have also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small-scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing the capacity of the current fabrication line for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.
Consistent Estimation of Pricing Kernels from Noisy Price Data
Vladislav Kargin
2003-01-01
If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.
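The constrained least squares step can be sketched with a projected-gradient solver for the non-negativity constraint. The payoff matrix `X` and price vector `p` are generic notation, and the entropy relaxation is not reproduced; this is a minimal sketch of the constrained estimator, not the paper's algorithm:

```python
import numpy as np

def pricing_kernel_nnls(X, p, n_iter=5000, lr=None):
    """Constrained least squares estimate of a non-negative pricing kernel m,
    minimizing ||X m - p||^2 subject to m >= 0 (projected gradient sketch).

    X : (n_assets, n_states) payoff matrix
    p : (n_assets,) observed (noisy) prices
    """
    if lr is None:
        lr = 1.0 / np.linalg.norm(X, 2) ** 2   # step below the Lipschitz bound
    m = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ m - p)
        m = np.maximum(m - lr * grad, 0.0)     # project onto the constraint m >= 0
    return m
```

Imposing m >= 0 is exactly the assumption that makes the inverse problem well-posed in the abstract above: it rules out arbitrage-admitting solutions that an unconstrained least squares fit could return.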
International Nuclear Information System (INIS)
Lee, Yoon Hee; Cho, Bum Hee; Cho, Nam Zin
2016-01-01
As a type of accident-tolerant fuel, fully ceramic microencapsulated (FCM) fuel was proposed after the Fukushima accident in Japan. The FCM fuel consists of tristructural isotropic particles randomly dispersed in a silicon carbide (SiC) matrix. For a fuel element with such high heterogeneity, we have proposed a two-temperature homogenized model using the particle transport Monte Carlo method for the heat conduction problem. This model distinguishes between fuel-kernel and SiC matrix temperatures. Moreover, the obtained temperature profiles are more realistic than those of other models. In Part I of the paper, homogenized parameters for the FCM fuel in which tristructural isotropic particles are randomly dispersed in the fine lattice stochastic structure are obtained by (1) matching steady-state analytic solutions of the model with the results of particle transport Monte Carlo method for heat conduction problems, and (2) preserving total enthalpies in fuel kernels and SiC matrix. The homogenized parameters have two desirable properties: (1) they are insensitive to boundary conditions such as coolant bulk temperatures and thickness of cladding, and (2) they are independent of operating power density. By performing the Monte Carlo calculations with the temperature-dependent thermal properties of the constituent materials of the FCM fuel, temperature-dependent homogenized parameters are obtained
Angle gathers in wave-equation imaging for transversely isotropic media
Alkhalifah, Tariq Ali; Fomel, Sergey B.
2010-01-01
In recent years, wave-equation imaged data are often presented in common-image angle-domain gathers as a decomposition in the scattering angle at the reflector, which provide natural access to analysing migration velocities and amplitudes. In the case of anisotropic media, the importance of angle gathers is enhanced by the need to estimate multiple anisotropic parameters for a proper representation of the medium. We extract angle gathers for each downward-continuation step by converting offset-frequency planes into angle-frequency planes simultaneously with applying the imaging condition in a transversely isotropic medium with a vertical symmetry axis (VTI). The analytic equations, though cumbersome, are exact within the framework of the acoustic approximation. They are also easily programmable and show that angle-gather mapping in the case of anisotropic media differs from its isotropic counterpart, with the difference depending mainly on the strength of anisotropy. Synthetic examples demonstrate the importance of including anisotropy in the angle-gather generation, as the mapping of energy is otherwise negatively altered. In the case of a tilted axis of symmetry (TTI), the same VTI formulation is applicable but requires a rotation of the wavenumbers. © 2010 European Association of Geoscientists & Engineers.
The Galactic Isotropic γ-ray Background and Implications for Dark Matter
Campbell, Sheldon S.; Kwa, Anna; Kaplinghat, Manoj
2018-06-01
We present an analysis of the radial angular profile of the galacto-isotropic (GI) γ-ray flux, that is, the statistically uniform flux in angular annuli centred on the Galactic centre. Two different approaches are used to measure the GI flux profile in 85 months of Fermi-LAT data: the BDS statistical method, which identifies spatial correlations, and a new Poisson ordered-pixel method, which identifies non-Poisson contributions. Both methods produce similar GI flux profiles. The GI flux profile is well-described by an existing model of bremsstrahlung, π0 production, inverse Compton scattering, and the isotropic background. Discrepancies with data in our full-sky model are not present in the GI component, and are therefore due to mis-modelling of the non-GI emission. Dark matter annihilation constraints based solely on the observed GI profile are close to the thermal WIMP cross section below 100 GeV, for fixed models of the dark matter density profile and astrophysical γ-ray foregrounds. Refined measurements of the GI profile are expected to improve these constraints by a factor of a few.
Maruyama, T.; Kaito, T.; Onose, S.; Shibahara, I.
1995-08-01
Thirteen kinds of isotropic graphites with different density and maximum grain size were irradiated in the experimental fast reactor "JOYO" to fluences from 2.11 to 2.86 × 10^26 n/m^2 (E > 0.1 MeV) at temperatures from 549 to 597°C. Postirradiation examination was carried out on the dimensional changes, elastic modulus, and thermal conductivity of these materials. Dimensional change results indicate that the graphites irradiated at lower fluences showed shrinkage upon neutron irradiation followed by increase with increasing neutron fluences, irrespective of differences in material parameters. The Young's modulus and Poisson's ratio increased by two to three times the unirradiated values. The large scatter found in Poisson's ratio of unirradiated materials became very small and a linear dependence on density was obtained after irradiation. The thermal conductivity decreased to one-fifth to one-tenth of unirradiated values, with a negligible change in specific heat. The results of postirradiation examination indicated that the changes in physical properties of high density, isotropic graphites were mainly dominated by the irradiation condition rather than their material parameters. Namely, the effects of irradiation induced defects on physical properties of heavily neutron-irradiated graphites are much larger than that of defects associated with as-fabricated specimens.
Constraints on light WIMP candidates from the isotropic diffuse gamma-ray emission
International Nuclear Information System (INIS)
Arina, Chiara; Tytgat, Michel H.G.
2011-01-01
Motivated by the measurements reported by direct detection experiments, most notably DAMA, CDMS-II, CoGeNT and Xenon10/100, we study further the constraints that might be set on some light dark matter candidates, M_DM ∼ a few GeV, using the Fermi-LAT data on the isotropic gamma-ray diffuse emission. In particular, we consider a Dirac fermion singlet interacting through a new Z' gauge boson, and a scalar singlet S interacting through the Higgs portal. Both candidates are WIMPs (Weakly Interacting Massive Particles), i.e. they have an annihilation cross-section in the pbarn range. Both may also have a spin-independent elastic cross section on nucleons in the range required by direct detection experiments. Although both are generic WIMP candidates, they have different interactions with Standard Model particles, so their phenomenology regarding the isotropic diffuse gamma-ray emission is quite distinct. In the case of the scalar singlet, the one-to-one correspondence between its annihilation cross-section and its spin-independent elastic scattering cross-section makes it possible to express the constraints from the Fermi-LAT data in the direct detection exclusion plot, σ_n^0 - M_DM. Depending on the astrophysics, we argue that it is possible to exclude the singlet scalar dark matter candidate at 95% confidence level. The constraints on the Dirac singlet interacting through a Z' are comparatively weaker
An efficient cost function for the optimization of an n-layered isotropic cloaked cylinder
International Nuclear Information System (INIS)
Paul, Jason V; Collins, Peter J; Coutu, Ronald A Jr
2013-01-01
In this paper, we present an efficient cost function for optimizing n-layered isotropic cloaked cylinders. Cost function efficiency is achieved by extracting the expression for the angle-independent scatterer contribution of an associated Green's function. Since this cost function does not depend on angle, it need not account for every bistatic angle, which makes it more efficient than other cost functions. With this general and efficient cost function, isotropic cloaked cylinders can be optimized over many layers and material parameters. To demonstrate this, optimized cloaked cylinders made of 10, 20 and 30 equal-thickness layers are presented for TE and TM incidence. Furthermore, we study the effect layer thickness has on optimized cloaks by optimizing a 10-layer cloaked cylinder over the material parameters and individual layer thicknesses. The optimized material parameters in this effort do not exhibit the dual nature that is evident in the ideal transformation optics design. This indicates that the inevitable field penetration and the subsequent PEC boundary condition at the cylinder must be taken into account for an optimal cloaked cylinder design. Furthermore, a more effective cloaked cylinder can be designed by optimizing both layer thickness and material parameters than by adding layers alone. (paper)
Angle gathers in wave-equation imaging for transversely isotropic media
Alkhalifah, Tariq Ali
2010-11-12
In recent years, wave-equation imaged data are often presented in common-image angle-domain gathers as a decomposition in the scattering angle at the reflector, which provide natural access to analysing migration velocities and amplitudes. In the case of anisotropic media, the importance of angle gathers is enhanced by the need to estimate multiple anisotropic parameters properly in order to represent the medium. We extract angle gathers for each downward-continuation step by converting offset-frequency planes into angle-frequency planes simultaneously with applying the imaging condition in a transversely isotropic medium with a vertical symmetry axis (VTI). The analytic equations, though cumbersome, are exact within the framework of the acoustic approximation. They are also easily programmable and show that angle gather mapping in the case of anisotropic media differs from its isotropic counterpart, with the difference depending mainly on the strength of anisotropy. Synthetic examples demonstrate the importance of including anisotropy in the angle gather generation, as the mapping of the energy is otherwise distorted. In the case of a tilted axis of symmetry (TTI), the same VTI formulation is applicable but requires a rotation of the wavenumbers. © 2010 European Association of Geoscientists & Engineers.
Visualization and computer graphics on isotropically emissive volumetric displays.
Mora, Benjamin; Maciejewski, Ross; Chen, Min; Ebert, David S
2009-01-01
The availability of commodity volumetric displays provides ordinary users with a new means of visualizing 3D data. Many of these displays are in the class of isotropically emissive light devices, which are designed to directly illuminate voxels in a 3D frame buffer, producing X-ray-like visualizations. While this technology can offer intuitive insight into a 3D object, the visualizations are perceptually different from what a computer graphics or visualization system would render on a 2D screen. This paper formalizes rendering on isotropically emissive displays and introduces a novel technique that emulates traditional rendering effects on isotropically emissive volumetric displays, delivering results that are much closer to what is traditionally rendered on regular 2D screens. Such a technique can significantly broaden the capability and usage of isotropically emissive volumetric displays. Our method takes a 3D dataset or object as the input, creates an intermediate light field, and outputs a special 3D volume dataset called a lumi-volume. This lumi-volume encodes approximated rendering effects in a form suitable for display with accumulative integrals along unobtrusive rays. When a lumi-volume is fed directly into an isotropically emissive volumetric display, it creates a 3D visualization with surface shading effects that are familiar to the users. The key to this technique is an algorithm for creating a 3D lumi-volume from a 4D light field. In this paper, we discuss a number of technical issues, including transparency effects due to the dimension reduction and sampling rates for light fields and lumi-volumes. We show the effectiveness and usability of this technique with a selection of experimental results captured from an isotropically emissive volumetric display, and we demonstrate its potential capability and scalability with computer-simulated high-resolution results.
Quantum logic in dagger kernel categories
Heunen, C.; Jacobs, B.P.F.
2009-01-01
This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial
Quantum logic in dagger kernel categories
Heunen, C.; Jacobs, B.P.F.; Coecke, B.; Panangaden, P.; Selinger, P.
2011-01-01
This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial
Symbol recognition with kernel density matching.
Zhang, Wan; Wenyin, Liu; Zhang, Kun
2006-12-01
We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
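The pipeline in this record, representing symbols as 2D kernel densities and comparing them with the Kullback-Leibler divergence, can be sketched as follows. The grid, bandwidth, and toy "strokes" below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def kde_2d(points, grid, bandwidth=0.1):
    # Evaluate an isotropic Gaussian KDE at a flat array of (x, y) grid locations.
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    dens = np.exp(-d2 / (2 * bandwidth**2)).sum(1)
    return dens / dens.sum()  # normalize to a discrete probability mass

def kl_divergence(p, q, eps=1e-12):
    # D_KL(p || q) on a common grid; eps guards against log(0).
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

# Toy "symbols": two point clouds sampled along different strokes.
xs = np.linspace(0, 1, 50)
horizontal = np.c_[xs, np.zeros_like(xs)]   # horizontal stroke
diagonal = np.c_[xs, xs]                    # diagonal stroke
gx, gy = np.meshgrid(np.linspace(-0.2, 1.2, 40), np.linspace(-0.2, 1.2, 40))
grid = np.c_[gx.ravel(), gy.ravel()]

p = kde_2d(horizontal, grid)
q = kde_2d(diagonal, grid)
print(kl_divergence(p, p), kl_divergence(p, q))  # self-divergence is 0, cross-divergence > 0
```

The KL divergence is asymmetric, so a practical matcher would typically symmetrize it or fix a convention for which symbol is the reference.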
Flexible Scheduling in Multimedia Kernels: An Overview
Jansen, P.G.; Scholten, Johan; Laan, Rene; Chow, W.S.
1999-01-01
Current Hard Real-Time (HRT) kernels have their timely behaviour guaranteed at the cost of a rather restrictive use of the available resources. This makes current HRT scheduling techniques inadequate for use in a multimedia environment where we can make a considerable profit by a better and more
Reproducing kernel Hilbert spaces of Gaussian priors
Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.
2008-01-01
We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described
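As a concrete illustration (not taken from the review itself), sample paths of a Gaussian prior with a squared-exponential covariance can be drawn directly from a Cholesky factor of the Gram matrix; the grid, length scale, and jitter are arbitrary choices:

```python
import numpy as np

def gp_prior_samples(xs, n_samples=3, length_scale=0.5, seed=0):
    # Zero-mean Gaussian process with squared-exponential covariance; the RKHS
    # attached to this kernel is what governs posterior contraction rates.
    K = np.exp(-(xs[:, None] - xs[None, :]) ** 2 / (2 * length_scale**2))
    K += 1e-6 * np.eye(len(xs))  # jitter: the RBF Gram matrix is ill-conditioned
    L = np.linalg.cholesky(K)
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal((len(xs), n_samples))

xs = np.linspace(0, 1, 50)
paths = gp_prior_samples(xs)
print(paths.shape)  # (50, 3): three sample paths on a 50-point grid
```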
A synthesis of empirical plant dispersal kernels
Czech Academy of Sciences Publication Activity Database
Bullock, J. M.; González, L. M.; Tamme, R.; Götzenberger, Lars; White, S. M.; Pärtel, M.; Hooftman, D. A. P.
2017-01-01
Vol. 105, No. 1 (2017), pp. 6-19 ISSN 0022-0477 Institutional support: RVO:67985939 Keywords: dispersal kernel * dispersal mode * probability density function Subject RIV: EH - Ecology, Behaviour OBOR OECD: Ecology Impact factor: 5.813, year: 2016
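A dispersal kernel is a probability density over dispersal distance. For illustration only (the synthesis compares many fitted forms), one common choice is the negative exponential, which is straightforward to sample:

```python
import random

def sample_dispersal_distance(mean_distance, rng):
    # Negative-exponential dispersal kernel p(r) = (1/a) exp(-r/a) with mean a.
    # The functional form here is an assumption for illustration, not from the paper.
    return rng.expovariate(1.0 / mean_distance)

rng = random.Random(42)
dists = [sample_dispersal_distance(10.0, rng) for _ in range(100_000)]
print(sum(dists) / len(dists))  # sample mean should be close to 10
```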
Analytic continuation of weighted Bergman kernels
Czech Academy of Sciences Publication Activity Database
Engliš, Miroslav
2010-01-01
Vol. 94, No. 6 (2010), pp. 622-650 ISSN 0021-7824 R&D Projects: GA AV ČR IAA100190802 Keywords: Bergman kernel * analytic continuation * Toeplitz operator Subject RIV: BA - General Mathematics Impact factor: 1.450, year: 2010 http://www.sciencedirect.com/science/article/pii/S0021782410000942
On convergence of kernel learning estimators
Norkin, V.I.; Keyzer, M.A.
2009-01-01
The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKHS). The objective (risk) functional depends on functions from this RKHS and takes the form of a mathematical expectation (integral) of a nonnegative integrand (loss function) over a probability
Analytic properties of the Virasoro modular kernel
Energy Technology Data Exchange (ETDEWEB)
Nemkov, Nikita [Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Institute for Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); National University of Science and Technology MISIS, The Laboratory of Superconducting metamaterials, Moscow (Russian Federation)
2017-06-15
On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block. (orig.)
Kernel based subspace projection of hyperspectral images
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten
In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
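The core idea, representing the value function as a kernel expansion over visited states and growing it with TD updates, can be sketched minimally. This is KTD(0) with a Gaussian kernel on a toy chain task; the step size, discount, and task are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

def rbf(x, y, sigma=0.5):
    # Gaussian (strictly positive definite) kernel between two state vectors.
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2))

class KernelTD:
    # Minimal KTD(0) sketch: the value estimate is a kernel expansion whose
    # centers are the visited states and whose weights are scaled TD errors.
    def __init__(self, eta=0.2, gamma=0.9):
        self.centers, self.alphas = [], []
        self.eta, self.gamma = eta, gamma

    def value(self, x):
        return sum(a * rbf(c, x) for c, a in zip(self.centers, self.alphas))

    def update(self, x, r, x_next):
        delta = r + self.gamma * self.value(x_next) - self.value(x)  # TD error
        self.centers.append(np.asarray(x, float))
        self.alphas.append(self.eta * delta)

# Chain task: states 0..4, reward 1 only on the transition into state 4.
agent = KernelTD()
for _ in range(200):
    for s in range(4):
        r = 1.0 if s + 1 == 4 else 0.0
        agent.update(np.array([s / 4]), r, np.array([(s + 1) / 4]))
values = [agent.value(np.array([s / 4])) for s in range(5)]
print(values)  # values should grow toward the rewarded end of the chain
```

A practical implementation would sparsify the growing dictionary of centers (e.g. by novelty criteria); the sketch omits that for brevity.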
Enhanced gluten properties in soft kernel durum wheat
Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...
Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...
African Journals Online (AJOL)
Estimated error of ± 0.18 and ± 0.2 are envisaged while applying the models for predicting palm kernel and sesame oil colours respectively. Keywords: Palm kernel, Sesame, Palm kernel, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38 ...
Stable Kernel Representations as Nonlinear Left Coprime Factorizations
Paice, A.D.B.; Schaft, A.J. van der
1994-01-01
A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel
7 CFR 981.60 - Determination of kernel weight.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...
21 CFR 176.350 - Tamarind seed kernel powder.
2010-04-01
... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...
End-use quality of soft kernel durum wheat
Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...
Heat kernel analysis for Bessel operators on symmetric cones
DEFF Research Database (Denmark)
Möllers, Jan
2014-01-01
. The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...
A Fast and Simple Graph Kernel for RDF
de Vries, G.K.D.; de Rooij, S.
2013-01-01
In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster
7 CFR 981.61 - Redetermination of kernel weight.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...
Single pass kernel k-means clustering method
Indian Academy of Sciences (India)
paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clus- ..... able at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.
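The record above is fragmentary, but the underlying technique is standard: kernel k-means computes all feature-space distances purely from the Gram matrix. A minimal (plain, not single-pass) sketch with an RBF kernel and toy data as assumptions:

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    # Pairwise RBF Gram matrix for the data set X.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def kernel_kmeans(K, k, n_iter=20):
    # Greedy farthest-point seeding, then Lloyd iterations with distances
    # ||phi(x_i) - m_c||^2 computed entirely from the Gram matrix K.
    n = K.shape[0]
    anchors = [0]
    while len(anchors) < k:
        anchors.append(int(np.argmin(K[:, anchors].max(axis=1))))
    labels = np.argmax(K[:, anchors], axis=1)
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            idx = labels == c
            if idx.any():
                m = idx.sum()
                dist[:, c] = (np.diag(K) - 2 * K[:, idx].sum(1) / m
                              + K[np.ix_(idx, idx)].sum() / m**2)
        new = dist.argmin(1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# Two well-separated blobs should land in different clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
labels = kernel_kmeans(rbf_gram(X), k=2)
print(labels)
```

The "single pass" variant of the paper avoids recomputing all distances every sweep; the sketch keeps the textbook iteration for clarity.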
Isotropic quantum walks on lattices and the Weyl equation
D'Ariano, Giacomo Mauro; Erba, Marco; Perinotti, Paolo
2017-12-01
We present a thorough classification of the isotropic quantum walks on lattices of dimension d = 1, 2, 3 with a coin system of dimension s = 2. For d = 3 there exist two isotropic walks, namely, the Weyl quantum walks presented in the work of D'Ariano and Perinotti [G. M. D'Ariano and P. Perinotti, Phys. Rev. A 90, 062106 (2014), 10.1103/PhysRevA.90.062106], resulting in the derivation of the Weyl equation from informational principles. The present analysis, via a crucial use of isotropy, is significantly shorter and avoids a superfluous technical assumption, making the result completely general.
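For background on the objects being classified, one step of a generic 1D quantum walk with a two-dimensional coin (a coined Hadamard walk, not one of the Weyl walks of the paper) alternates a unitary coin with a coin-conditioned shift:

```python
import numpy as np

def coined_walk_step(psi, coin):
    # One step of a 1D two-component (s = 2) quantum walk:
    # apply the coin unitary at each site, then shift the components oppositely.
    psi = psi @ coin.T
    out = np.zeros_like(psi)
    out[1:, 0] = psi[:-1, 0]   # component 0 moves one site right
    out[:-1, 1] = psi[1:, 1]   # component 1 moves one site left
    return out

n = 101
psi = np.zeros((n, 2), complex)
psi[n // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # balanced initial coin state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for _ in range(40):
    psi = coined_walk_step(psi, hadamard)
print(np.sum(np.abs(psi) ** 2))  # unitarity: total probability stays 1
```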
3D geometrically isotropic metamaterial for telecom wavelengths
DEFF Research Database (Denmark)
Malureanu, Radu; Andryieuski, Andrei; Lavrinenko, Andrei
2009-01-01
of the unit cell is not infinitely small, certain geometrical constraints have to be fulfilled to obtain an isotropic response of the material [3]. These conditions and the metal behaviour close to the plasma frequency increase the design complexity. Our unit cell is composed of two main parts. The first part...... is obtained in a certain bandwidth. The proposed unit cell has the cubic point group of symmetry and being repeatedly placed in space can effectively reveal isotropic optical properties. We use the CST commercial software to characterise the “cube-in-cage” structure. Reflection and transmission spectra...
Exposure buildup factors for a cobalt-60 point isotropic source for single and two layer slabs
International Nuclear Information System (INIS)
Chakarova, R.
1992-01-01
Exposure buildup factors for point isotropic cobalt-60 sources are calculated by the Monte Carlo method with statistical errors ranging from 1.5 to 7% for 1-5 mean free paths (mfp) thick water and iron single slabs and for 1 and 2 mfp iron layers followed by water layers 1-5 mfp thick. The computations take into account Compton scattering. The Monte Carlo data for single slab geometries are approximated by a Geometric Progression formula. Kalos's formula, using the calculated single-slab buildup factors, may be applied to reproduce the data for two-layered slabs. The presented results and discussion may help in choosing the manner in which the radiation field of gamma irradiation units is described. (author)
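The Geometric Progression fit mentioned above has a standard functional shape (as in the ANSI/ANS-6.4.3 buildup-factor compilations). The coefficients below are illustrative placeholders, not the fitted values from this work:

```python
import math

def gp_buildup(x, b, c, a, xk, d):
    # Geometric Progression (GP) form for point-isotropic buildup factors;
    # x is the depth in mean free paths, (b, c, a, xk, d) are fitted coefficients.
    K = c * x**a + d * (math.tanh(x / xk - 2) - math.tanh(-2)) / (1 - math.tanh(-2))
    if abs(K - 1) < 1e-9:
        return 1 + (b - 1) * x
    return 1 + (b - 1) * (K**x - 1) / (K - 1)

# Illustrative (not fitted) coefficients; note the GP property B(1) = b.
for x in (1, 2, 5):
    print(x, round(gp_buildup(x, b=2.1, c=1.3, a=-0.05, xk=14.0, d=-0.05), 3))
```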
Software correction of scatter coincidence in positron CT
International Nuclear Information System (INIS)
Endo, M.; Iinuma, T.A.
1984-01-01
This paper describes a software correction of scatter coincidence in positron CT which is based on an estimation of scatter projections from true projections by an integral transform. Kernels for the integral transform are projected distributions of scatter coincidences for a line source at different positions in a water phantom and are calculated by Klein-Nishina's formula. True projections of any composite object can be determined from measured projections by iterative applications of the integral transform. The correction method was tested in computer simulations and phantom experiments with Positologica. The results showed that effects of scatter coincidence are not negligible in the quantitation of images, but the correction reduces them significantly. (orig.)
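The iterative estimate-and-subtract scheme can be sketched in 1D. The stationary convolution kernel below is a simplification (the paper's kernels are Klein-Nishina calculations for line sources at different positions, i.e. position-dependent), and the kernel values and phantom are assumptions:

```python
import numpy as np

def estimate_scatter(true_proj, kernel):
    # Scatter estimate as a convolution of the (unknown) true projection
    # with a precomputed single-line-source scatter kernel.
    return np.convolve(true_proj, kernel, mode="same")

def correct_scatter(measured, kernel, n_iter=10):
    # Iteratively solve measured = true + scatter(true) for the true projection.
    true = measured.copy()
    for _ in range(n_iter):
        true = measured - estimate_scatter(true, kernel)
        np.clip(true, 0, None, out=true)  # projections cannot go negative
    return true

# Synthetic check: a known "true" profile plus simulated scatter.
kernel = np.array([0.02, 0.05, 0.1, 0.05, 0.02])  # broad, low-amplitude tails
true = np.zeros(64)
true[28:36] = 100.0
measured = true + np.convolve(true, kernel, mode="same")
recovered = correct_scatter(measured, kernel)
print(np.abs(recovered - true).max())  # small residual after a few iterations
```

The fixed-point iteration converges here because the kernel's total scatter fraction (its sum, 0.24) is below one.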
Scuba: scalable kernel-based gene prioritization.
Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio
2018-01-25
The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large amount of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba integrates also a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba .
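Scuba's margin-distribution optimization is beyond a short sketch, but the integration step it builds on, combining per-source Gram matrices and scoring candidates by similarity to known disease genes, looks schematically like this (uniform weights stand in for the learned ones; the toy data are assumptions):

```python
import numpy as np

def combine_kernels(grams, weights=None):
    # Multiple kernel integration: a convex combination of per-source Gram
    # matrices. Scuba learns the weights; uniform weights are a stand-in here.
    grams = np.asarray(grams, float)
    if weights is None:
        weights = np.full(len(grams), 1.0 / len(grams))
    return np.tensordot(weights, grams, axes=1)

def rank_candidates(K, seed_idx, candidate_idx):
    # Score each candidate gene by its mean kernel similarity to the seed
    # (known disease) genes, then sort candidates by decreasing score.
    scores = K[np.ix_(candidate_idx, seed_idx)].mean(axis=1)
    order = np.argsort(-scores)
    return [candidate_idx[i] for i in order]

# Toy data: two data sources over 6 genes; genes 0-1 are known disease genes.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(6, 4)), rng.normal(size=(6, 3))
K = combine_kernels([X1 @ X1.T, X2 @ X2.T])
ranked = rank_candidates(K, [0, 1], [2, 3, 4, 5])
print(ranked)  # candidates ordered by similarity to the seed genes
```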
International Nuclear Information System (INIS)
Vogt, A; Soar, G.; Vermaseren, J.A.M.
2010-01-01
We have studied the physical evolution kernels for nine non-singlet observables in deep-inelastic scattering (DIS), semi-inclusive e^+e^- annihilation and the Drell-Yan (DY) process, and for the flavour-singlet case of the photon- and heavy-top Higgs-exchange structure functions (F_2, F_φ) in DIS. All known contributions to these kernels show only a single-logarithmic large-x enhancement at all powers of (1-x). Conjecturing that this behaviour persists to (all) higher orders, we have predicted the highest three (DY: two) double logarithms of the higher-order non-singlet coefficient functions and of the four-loop singlet splitting functions. The coefficient-function predictions can be written as exponentiations of 1/N-suppressed contributions in Mellin-N space which, however, are less predictive than the well-known exponentiation of the ln^k N terms. (orig.)
Quantum scattering at low energies
DEFF Research Database (Denmark)
Derezinski, Jan; Skibsted, Erik
For a class of negative slowly decaying potentials, including with , we study the quantum mechanical scattering theory in the low-energy regime. Using modifiers of the Isozaki--Kitada type we show that scattering theory is well behaved on the {\\it whole} continuous spectrum of the Hamiltonian......, including the energy . We show that the --matrices are well-defined and strongly continuous down to the zero energy threshold. Similarly, we prove that the wave matrices and generalized eigenfunctions are norm continuous down to the zero energy if we use appropriate weighted spaces. These results are used...... from positive energies to the limiting energy . This change corresponds to the behaviour of the classical orbits. Under stronger conditions we extract the leading term of the asymptotics of the kernel of at its singularities; this leading term defines a Fourier integral operator in the sense...
Neutron cross sections of cryogenic materials: a synthetic kernel for molecular solids
International Nuclear Information System (INIS)
Granada, J.R.; Gillette, V.H.; Petriw, S.; Cantargi, F.; Pepe, M.E.; Sbaffoni, M.M.
2004-01-01
A new synthetic scattering function aimed at the description of the interaction of thermal neutrons with molecular solids has been developed. At low incident neutron energies, both lattice modes and molecular rotations are specifically accounted for, through an expansion of the scattering law in a few phonon terms. Simple representations of the molecular dynamical modes are used, in order to produce a fairly accurate description of neutron scattering kernels and cross sections with a minimum set of input data. As the neutron energies become much larger than those corresponding to the characteristic Debye temperature and to the rotational energies of the molecular solid, the 'phonon formulation' transforms into the traditional description for molecular gases. (orig.)
Kernel based orthogonalization for change detection in hyperspectral images
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via...... analysis all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and unlike kernel PCA, kernel MNF...
A laser optical method for detecting corn kernel defects
Energy Technology Data Exchange (ETDEWEB)
Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.
1984-01-01
An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy; while surface-split kernels were detected with about 80% accuracy. (author)
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. The previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of the regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws
Directory of Open Access Journals (Sweden)
Mohammed D. ABDULMALIK
2008-06-01
Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements for the 64-bit edition of Windows Vista. We also point out some weak areas (flaws) that can be attacked by malicious code, leading to compromise of the kernel.
Difference between standard and quasi-conformal BFKL kernels
International Nuclear Information System (INIS)
Fadin, V.S.; Fiore, R.; Papa, A.
2012-01-01
As it was recently shown, the colour singlet BFKL kernel, taken in Möbius representation in the space of impact parameters, can be written in quasi-conformal shape, which is unbelievably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculated the difference between standard and quasi-conformal BFKL kernels in momentum space and discovered that it is rather simple. Therefore we come to the conclusion that the simplicity of the quasi-conformal kernel is caused mainly by using the impact parameter space.
Lagrangian statistics of particle pairs in homogeneous isotropic turbulence
Biferale, L.; Boffeta, G.; Celani, A.; Devenish, B.J.; Lanotte, A.; Toschi, F.
2005-01-01
We present a detailed investigation of the particle pair separation process in homogeneous isotropic turbulence. We use data from direct numerical simulations up to R_λ ≈ 280 following the evolution of about two million passive tracers advected by the flow over a time span of about three decades. We
Geometry of the isotropic oscillator driven by the conformal mode
Energy Technology Data Exchange (ETDEWEB)
Galajinsky, Anton [Tomsk Polytechnic University, School of Physics, Tomsk (Russian Federation)
2018-01-15
Geometrization of a Lagrangian conservative system typically amounts to reformulating its equations of motion as the geodesic equations in a properly chosen curved spacetime. The conventional methods include the Jacobi metric and the Eisenhart lift. In this work, a modification of the Eisenhart lift is proposed which describes the isotropic oscillator in arbitrary dimension driven by the one-dimensional conformal mode. (orig.)
Seeing is believing : communication performance under isotropic teleconferencing conditions
Werkhoven, P.J.; Schraagen, J.M.C.; Punte, P.A.J.
2001-01-01
The visual component of conversational media such as videoconferencing systems communicates important non-verbal information such as facial expressions, gestures, posture and gaze. Unlike the other cues, selective gaze depends critically on the configuration of cameras and monitors. Under isotropic
A simple mechanical model for the isotropic harmonic oscillator
International Nuclear Information System (INIS)
Nita, Gelu M
2010-01-01
A constrained elastic pendulum is proposed as a simple mechanical model for the isotropic harmonic oscillator. The conceptual and mathematical simplicity of this model recommends it as an effective pedagogical tool in teaching basic physics concepts at advanced high school and introductory undergraduate course levels.
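Though the record's point is the pedagogical mechanical analogy, the system being modelled is easy to integrate numerically. A leapfrog sketch of the 2D isotropic oscillator, checked against the closed form (all parameters arbitrary):

```python
import math

def simulate_isotropic_oscillator(x0, v0, omega=2.0, dt=1e-3, steps=5000):
    # Leapfrog (kick-drift-kick) integration of x'' = -omega^2 x in 2D;
    # "isotropic" means the same omega applies on each axis.
    x, v = list(x0), list(v0)
    for _ in range(steps):
        for i in range(2):
            v[i] += -omega**2 * x[i] * dt / 2
            x[i] += v[i] * dt
            v[i] += -omega**2 * x[i] * dt / 2
    return x, v

# Closed-form check: x(t) = x0 cos(wt) + (v0/w) sin(wt) on each axis independently.
x, v = simulate_isotropic_oscillator([1.0, 0.0], [0.0, 1.0])
t, w = 5.0, 2.0  # total time = steps * dt
exact = [math.cos(w * t), math.sin(w * t) / w]
print(x, exact)  # numerical and analytic positions agree closely
```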
Homogenization and isotropization of an inflationary cosmological model
International Nuclear Information System (INIS)
Barrow, J.D.; Groen, Oe.; Oslo Univ.
1986-01-01
A member of the class of anisotropic and inhomogeneous cosmological models constructed by Wainwright and Goode is investigated. It is shown to describe a universe containing a scalar field which is minimally coupled to gravitation and a positive cosmological constant. It is shown that this cosmological model evolves exponentially rapidly towards the homogeneous and isotropic de Sitter universe model. (orig.)
Isotropic gates in large gamma detector arrays versus angular distributions
International Nuclear Information System (INIS)
Iacob, V.E.; Duchene, G.
1997-01-01
The quality of the angular distribution information extracted from high-fold gamma-gamma coincidence events is analyzed. It is shown that a correct quasi-isotropic gate setting, available at the modern large gamma-ray detector arrays, essentially preserves the quality of the angular information. (orig.)
Higher gradient expansion for linear isotropic peridynamic materials
Czech Academy of Sciences Publication Activity Database
Šilhavý, Miroslav
2017-01-01
Roč. 22, č. 6 (2017), s. 1483-1493 ISSN 1081-2865 Institutional support: RVO:67985840 Keywords : peridynamics * higher-grade theories * non-local elastic-material model * representation theorems for isotropic functions Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 2.953, year: 2016 http://journals.sagepub.com/doi/10.1177/1081286516637235
Direct method of analysis of an isotropic rectangular plate
African Journals Online (AJOL)
eobe
This work presents a static analysis of an isotropic rectangular plate. The energy method according to Ritz is used to obtain the total potential energy of the plate, and the results are compared with those obtained by previous research on rectangular plate analysis.
Transformation optics, isotropic chiral media and non-Riemannian geometry
International Nuclear Information System (INIS)
Horsley, S A R
2011-01-01
The geometrical interpretation of electromagnetism in transparent media (transformation optics) is extended to include chiral media that are isotropic but inhomogeneous. It is found that such media may be described by introducing the non-Riemannian geometrical property of torsion into the Maxwell equations, and it is shown how such an interpretation may be applied to the design of optical devices.
Isotropic cosmic expansion and the Rubin-Ford effect
International Nuclear Information System (INIS)
Fall, S.M.; Jones, B.J.T.
1976-01-01
It is shown that the Rubin-Ford data (Astrophys. J. Lett. 183:L111 (1973)), often taken as evidence for large scale anisotropic cosmic expansion, probably only reflect the inhomogeneous distribution of galaxies in the region of the sample. The data presented are consistent with isotropic expansion, an unperturbed galaxy velocity field, and hence a low density Universe. (author)
About statistical process contribution to elastic diffraction scattering
International Nuclear Information System (INIS)
Ismanov, E.I.; Dzhuraev, Sh. Kh.; Paluanov, B.K.
1999-01-01
The experimental data on angular distributions show two basic properties. The first is the presence of backward and forward peaks. The second is a nearly isotropic angular distribution near 90 degrees, which shows a strong energy dependence. Different models for the partial amplitudes a_dl of diffraction statistical scattering, in particular models with Gaussian and exponential density distributions, were considered. The experimental data on pp-scattering were analyzed using the examined models.
Rayleigh scattering under light-atom coherent interaction
Takamizawa, Akifumi; Shimoda, Koichi
2012-01-01
Semi-classical calculation of the oscillating dipole induced in a two-level atom indicates that spherical radiation from the dipole under coherent interaction, i.e., Rayleigh scattering, has a power level comparable to that of spontaneous emission resulting from an incoherent process. Whereas spontaneous emission is nearly isotropic and generally has random polarization, Rayleigh scattering is strongly anisotropic and polarized in association with the incident light. In the case where the Rabi frequen...
Percolation-enhanced nonlinear scattering from semicontinuous metal films
Breit, M.; von Plessen, G.; Feldmann, J.; Podolskiy, V. A.; Sarychev, A. K.; Shalaev, V. M.; Gresillon, S.; Rivoal, J. C.; Gadenne, P.
2001-03-01
Strongly enhanced second-harmonic generation (SHG), which is characterized by nearly isotropic distribution, is observed for gold-glass films near the percolation threshold. The diffuse-like SHG scattering, which can be thought of as nonlinear critical opalescence, is in sharp contrast with highly collimated linear reflection and transmission from these nanostructured semicontinuous metal films. Our observations, which can be explained by giant fluctuations of local nonlinear sources for SHG, verify recent predictions of percolation-enhanced nonlinear scattering.
Uniqueness in inverse elastic scattering with finitely many incident waves
International Nuclear Information System (INIS)
Elschner, Johannes; Yamamoto, Masahiro
2009-01-01
We consider the third and fourth exterior boundary value problems of linear isotropic elasticity and present uniqueness results for the corresponding inverse scattering problems with polyhedral-type obstacles and a finite number of incident plane elastic waves. Our approach is based on a reflection principle for the Navier equation. (orig.)
Small-angle neutron scattering in materials science - an introduction
International Nuclear Information System (INIS)
Fratzl, P.
1996-01-01
The basic principles of the application of small-angle neutron scattering to materials research are summarized. The text focuses on the classical methods of data evaluation for isotropic and for anisotropic materials. Some examples of applications to the study of alloys, porous materials, composites and other complex materials are given. (author) 9 figs., 38 refs
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize, and hence compress, the input (or feature) space. Unlike sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, based on a simple online vector quantization method. An analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence and at lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
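The QKLMS idea sketched in this abstract — learn online with a growing kernel expansion, but merge any input that falls within a quantization radius of an existing center into that center — can be illustrated as follows. This is a minimal sketch, not the authors' implementation; the Gaussian kernel and the parameter names `eta`, `epsilon`, `sigma` are assumptions.

```python
import numpy as np

def gaussian_kernel(x, c, sigma):
    # Gaussian radial basis function between input x and center c
    return float(np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2)))

class QKLMS:
    """Minimal quantized kernel LMS sketch (parameter names are assumed)."""

    def __init__(self, eta=0.5, epsilon=0.1, sigma=0.5):
        self.eta = eta          # learning rate
        self.epsilon = epsilon  # quantization size of the input space
        self.sigma = sigma      # kernel width
        self.centers = []       # retained centers (the "codebook")
        self.alphas = []        # corresponding coefficients

    def predict(self, x):
        return sum(a * gaussian_kernel(x, c, self.sigma)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, y):
        e = y - self.predict(x)  # prediction error on the new sample
        if self.centers:
            d = [float(np.linalg.norm(x - c)) for c in self.centers]
            j = int(np.argmin(d))
            if d[j] <= self.epsilon:
                # quantization: "redundant" input updates the closest center
                self.alphas[j] += self.eta * e
                return
        # otherwise the input becomes a new center
        self.centers.append(np.array(x, dtype=float))
        self.alphas.append(self.eta * e)
```

Because a new center is admitted only when it is more than `epsilon` away from all existing centers, the network size is bounded by the packing number of the input domain rather than growing linearly with the data.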
Kernel-based tests for joint independence
DEFF Research Database (Denmark)
Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard
2018-01-01
We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two-variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test …
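The empirical dHSIC statistic described above admits a compact numerical sketch. The Gaussian kernel, its bandwidth, and the restriction to 1-D samples below are illustrative assumptions; the permutation/bootstrap/Gamma tests built on top of the statistic are omitted.

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    # Gram matrix of a Gaussian (characteristic) kernel for 1-D samples
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def dhsic(samples, sigma=1.0):
    """Empirical dHSIC of d equally sized 1-D samples (biased V-statistic)."""
    grams = np.stack([rbf_gram(np.asarray(s, dtype=float), sigma)
                      for s in samples])
    # squared RKHS distance between the embedded joint distribution
    # and the embedded product of the marginals
    term1 = np.prod(grams, axis=0).mean()
    term2 = float(np.prod([K.mean() for K in grams]))
    term3 = 2.0 * np.prod([K.mean(axis=1) for K in grams], axis=0).mean()
    return float(term1 + term2 - term3)
```

Under joint independence the statistic concentrates near zero at rate O(1/n), while dependence pushes it away from zero; the permutation and bootstrap tests calibrate exactly this gap.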
Wilson Dslash Kernel From Lattice QCD Optimization
Energy Technology Data Exchange (ETDEWEB)
Joo, Balint [Jefferson Lab, Newport News, VA; Smelyanskiy, Mikhail [Parallel Computing Lab, Intel Corporation, California, USA; Kalamkar, Dhiraj D. [Parallel Computing Lab, Intel Corporation, India; Vaidyanathan, Karthikeyan [Parallel Computing Lab, Intel Corporation, India
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in theoretical nuclear and high-energy physics. LQCD is traditionally one of the first applications ported to new high-performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal for illustrating several optimization techniques. In this chapter we detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the techniques give excellent performance on the regular Xeon architecture as well.
International Nuclear Information System (INIS)
Hategan, Cornel; Comisel, Horia; Ionescu, Remus A.
2004-01-01
Quasiresonant scattering consists of a single-channel resonance coupled by direct-interaction transitions to competing reaction channels. A description of quasiresonant scattering in terms of generalized reduced K-, R- and S-matrices is developed in this work. The quasiresonance's decay width is, due to channel coupling, smaller than the width of the ancestral single-channel resonance (direct compression of the resonance). (author)
Donne, A. J. H.
1994-01-01
Thomson scattering is a very powerful diagnostic which is applied at nearly every magnetic confinement device. Depending on the experimental conditions different plasma parameters can be diagnosed. When the wave vector is much larger than the plasma Debye length, the total scattered power is
A Kernel for Protein Secondary Structure Prediction
Guermeur , Yann; Lifchitz , Alain; Vert , Régis
2004-01-01
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=10338&mode=toc; International audience; Multi-class support vector machines have already proved efficient in protein secondary structure prediction as ensemble methods, to combine the outputs of sets of classifiers based on different principles. In this chapter, their implementation as basic prediction methods, processing the primary structure or the profile of multiple alignments, is investigated. A kernel devoted to the task is in...
Scalar contribution to the BFKL kernel
International Nuclear Information System (INIS)
Gerasimov, R. E.; Fadin, V. S.
2010-01-01
The contribution of scalar particles to the kernel of the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation is calculated. A large cancellation between the virtual and real parts of this contribution, analogous to the cancellation in the quark contribution in QCD, is observed. The reason for this cancellation is identified; it has a common nature for particles of any spin. Understanding this reason permits one to obtain the total contribution without the complicated calculations that are necessary for finding the separate pieces.
Weighted Bergman Kernels for Logarithmic Weights
Czech Academy of Sciences Publication Activity Database
Engliš, Miroslav
2010-01-01
Roč. 6, č. 3 (2010), s. 781-813 ISSN 1558-8599 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * Toeplitz operator * logarithmic weight * pseudodifferential operator Subject RIV: BA - General Mathematics Impact factor: 0.462, year: 2010 http://www.intlpress.com/site/pub/pages/journals/items/pamq/content/vols/0006/0003/a008/
Heat kernels and zeta functions on fractals
International Nuclear Information System (INIS)
Dunne, Gerald V
2012-01-01
On fractals, spectral functions such as heat kernels and zeta functions exhibit novel features, very different from their behaviour on regular smooth manifolds, and these can have important physical consequences for both classical and quantum physics in systems having fractal properties. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’. (paper)
International Nuclear Information System (INIS)
Ruehrnschopf and, Ernst-Peter; Klingenbeck, Klaus
2011-01-01
The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper in which a general framework for scatter compensation was presented, under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness they present an overview of measurement-based methods, but the main topic is the theoretically more demanding models: analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigation are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detectors, truncated data, and dual-source CT.
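The beam-scatter-kernel superposition approach mentioned in this abstract can be sketched, under the simplifying assumption of a spatially invariant kernel (which the review explicitly relaxes via kernel deformation), as an FFT-based 2-D convolution of the primary projection image with a scatter kernel:

```python
import numpy as np

def scatter_estimate_2d(primary, kernel):
    """Scatter projection modelled as primary (*) kernel via zero-padded FFT.

    Assumes a spatially invariant scatter kernel; real kernels deform with
    object thickness and position, as discussed in the review."""
    ph, pw = primary.shape
    kh, kw = kernel.shape
    H, W = ph + kh - 1, pw + kw - 1          # full linear-convolution size
    spec = np.fft.rfft2(primary, (H, W)) * np.fft.rfft2(kernel, (H, W))
    full = np.fft.irfft2(spec, (H, W))
    # crop back to the primary image grid, centred on the kernel origin
    return full[kh // 2:kh // 2 + ph, kw // 2:kw // 2 + pw]
```

Zero-padding to the full linear-convolution size avoids the wrap-around artefacts of a circular FFT convolution, which matters because scatter kernels have long tails.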
Exploiting graph kernels for high performance biomedical relation extraction.
Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri
2018-01-30
Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high-performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM
Identification of Fusarium damaged wheat kernels using image analysis
Directory of Open Access Journals (Sweden)
Ondřej Jirsa
2011-01-01
Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour-intensive and, owing to its subjective nature, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85%. The shape descriptors themselves were not specific enough to distinguish individual kernels.
Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.
Kwak, Nojun
2016-05-20
Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be maintained as additional training samples are introduced. In this paper, an incremental version of NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement a kernel version of that method. The effectiveness of INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to the problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
A new method by steering kernel-based Richardson–Lucy algorithm for neutron imaging restoration
International Nuclear Information System (INIS)
Qiao, Shuang; Wang, Qiao; Sun, Jia-ning; Huang, Ji-peng
2014-01-01
Motivated by industrial applications, neutron radiography has become a powerful tool for non-destructive investigation. However, as a result of the combined effects of neutron flux, beam collimation, the limited spatial resolution of the detector, scattering, etc., images made with neutrons are severely degraded by blur and noise. To deal with this, we present a novel restoration method that integrates steering kernel regression into the Richardson–Lucy approach and is capable of suppressing noise while efficiently restoring details of the blurred imaging result. Experimental results show that, compared with other methods, the proposed method improves restoration quality both visually and quantitatively.
Kernel based subspace projection of near infrared hyperspectral images of maize kernels
DEFF Research Database (Denmark)
Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben
2009-01-01
In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line-scanning hyperspectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods, including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability of the data. Therefore we propose to use kernel versions of these methods; the kernel maximum autocorrelation factor transform outperforms the linear methods as well as kernel principal components in producing interesting projections of the data.
International Nuclear Information System (INIS)
Sitenko, A.
1991-01-01
This book emerged out of graduate lectures given by the author at the University of Kiev and is intended as a graduate text. The fundamentals of non-relativistic quantum scattering theory are covered, including some topics, such as the phase-function formalism, separable potentials, and inverse scattering, which are not always covered in textbooks on scattering theory. Criticisms of the text are minor, but the reviewer feels that an inadequate index is provided and that the citing of references in the Russian language is a hindrance in a graduate text.
Isotropic Optical Mouse Placement for Mobile Robot Velocity Estimation
Directory of Open Access Journals (Sweden)
Sungbok Kim
2014-06-01
This paper presents the isotropic placement of multiple optical mice for the velocity estimation of a mobile robot. It is assumed that there may be positional restrictions on the installation of optical mice at the bottom of a mobile robot. First, the velocity kinematics of a mobile robot with an array of optical mice is obtained and the resulting Jacobian matrix is analysed symbolically. Second, the isotropic, anisotropic and singular optical mouse placements are identified, along with the corresponding characteristic lengths. Third, the least-squares estimation of mobile robot velocity from noisy optical mouse velocity measurements is discussed. Finally, simulation results for several different placements of three optical mice are given.
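The least-squares estimation step can be sketched as follows, assuming the usual planar rigid-body model in which a mouse at body-frame position (x, y) reads the local velocity (vx − ω·y, vy + ω·x); the function and symbol names are illustrative, not the paper's.

```python
import numpy as np

def mouse_jacobian(positions):
    """Stacked Jacobian mapping robot velocity (vx, vy, w) to mouse readings."""
    rows = []
    for x, y in positions:
        rows.append([1.0, 0.0, -y])  # local x-velocity seen by this mouse
        rows.append([0.0, 1.0, x])   # local y-velocity seen by this mouse
    return np.array(rows)

def estimate_velocity(positions, readings):
    """Least-squares robot velocity (vx, vy, w) from noisy mouse readings."""
    A = mouse_jacobian(positions)
    m = np.asarray(readings, dtype=float).ravel()
    sol, *_ = np.linalg.lstsq(A, m, rcond=None)
    return sol
```

Loosely speaking, a placement is isotropic when A^T A becomes a scalar multiple of the identity (after scaling the rotational column by a characteristic length), so that measurement noise inflates all three velocity components equally.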
Study of open systems with molecules in isotropic liquids
Kondo, Yasushi; Matsuzaki, Masayuki
2018-05-01
We are interested in the dynamics of a system in an environment, i.e., an open system. Phenomena such as crossover from Markovian to non-Markovian relaxation and thermal equilibration are of interest. Open systems have been studied experimentally with ultracold atoms, trapped ions, optics, and cold electric circuits, because well-isolated systems can be prepared in these settings and the effects of environments can thus be controlled. We point out that some molecules dissolved in isotropic liquids are also well isolated and can therefore be employed for studying open systems in nuclear magnetic resonance (NMR) experiments. First, we provide a short review of related open-system phenomena that helps readers understand our motivation. We then present two experiments as examples of our approach with molecules in isotropic liquids. Crossover from Markovian to non-Markovian relaxation was realized in one NMR experiment, while relaxation-like phenomena were observed in approximately isolated systems in the other.
Self-confinement of finite dust clusters in isotropic plasmas.
Miloshevsky, G V; Hassanein, A
2012-05-01
Finite two-dimensional dust clusters are systems of a small number of charged grains. The self-confinement of dust clusters in isotropic plasmas is studied using the particle-in-cell method. Energetically favorable configurations of grains in plasma are found that are due to the kinetic effects of plasma ions and electrons. The self-confinement phenomenon is attributed to a change in the plasma composition within a dust cluster, resulting in grain attraction mediated by plasma ions. This is a self-consistent state of a dust cluster in which the grains' mutual repulsion is compensated by the reduced charge and floating potential on the grains, overlapping ion clouds, and depleted electrons within the cluster. A common potential well is formed, trapping dust clusters in the confined state. These results provide both valuable insights and a different perspective on the classical view of the formation of boundary-free dust clusters in isotropic plasmas.
International Nuclear Information System (INIS)
Sanchez, Richard.
1975-11-01
The Integral Transform Method for the neutron transport equation has been developed in recent years by Asaoka and others. The method uses Fourier transform techniques to solve isotropic one-dimensional transport problems in homogeneous media. The method has been extended to linearly anisotropic transport in one-dimensional homogeneous media. Series expansions were also obtained using Hembd techniques for the new anisotropic matrix elements in cylindrical geometry. Carlvik's spatial-spherical harmonics method was generalized to solve the same problem. By applying a relation between the isotropic and anisotropic one-dimensional kernels, it was demonstrated that anisotropic matrix elements can be calculated as a linear combination of a few isotropic matrix elements. In practice this means that the anisotropic problem of order N can be solved with the N+2 isotropic matrix for plane and spherical geometries, and with the N+1 isotropic matrix for cylindrical geometry. A method of solving linearly anisotropic one-dimensional transport problems in homogeneous media was defined by applying observations of Mika and Stankiewicz: isotropic matrix elements were computed by Hembd series and anisotropic matrix elements were then calculated from recursive relations. The method has been applied to albedo and critical problems in cylindrical geometries. Finally, a number of results were computed with 12-digit accuracy for use as benchmarks [fr
Isotropic gates and large gamma detector arrays versus angular distributions
International Nuclear Information System (INIS)
Iacob, V.E.; Duchene, G.
1997-01-01
Angular information extracted from in-beam γ-ray measurements is of great importance for γ-ray multipolarity and nuclear spin assignments. Large Ge detector arrays are now available, allowing the measurement of extremely weak γ rays in almost 4π sr solid angle (e.g., the EUROGAM detector array). Given the high detector efficiency, it is common for the mean suppressed coincidence multiplicity to reach values as high as 4 to 6. It is thus possible to gate on particular γ rays in order to enhance the relative statistics of a definite reaction channel and/or a definite decay path in the level scheme of the selected residual nucleus. Compared to angular correlations, conditioned angular distribution spectra exhibit larger statistics, because the gate-setting γ ray may be observed by all the detectors in the array, somewhat relaxing the geometrical restrictions of angular correlations. Since in-beam γ-ray emission is anisotropic, one might worry that gating as described above, being based on an anisotropic γ ray, would perturb the angular distributions in the unfolded events. As our work proves, there is no reason to worry if the energy gate covers the whole solid angle in an ideal 4π sr detector, i.e., if the gate is isotropic. In real quasi-4π sr detector arrays the corresponding quasi-isotropic gate preserves the angular properties of the unfolded data, too. However, extraction of precise angular distribution coefficients, especially a_4, requires consideration of the deviation of the quasi-isotropic gate from the (ideal) isotropic gate.
Liquid crystalline states of surfactant solutions of isotropic micelles
International Nuclear Information System (INIS)
Bagdassarian, C.; Gelbart, W.M.; Ben-Shaul, A.
1988-01-01
We consider micellar solutions whose surfactant molecules prefer strongly to form small, globular aggregates in the absence of intermicellar interactions. At sufficiently high volume fraction of surfactant, the isotropic phase of essentially spherical micelles is shown to be unstable with respect to an orientationally ordered (nematic) state of rodlike aggregates. This behavior is relevant to the phase diagrams reported for important classes of aqueous amphiphilic solutions
Monopole-fermion systems in the complex isotropic tetrad formalism
International Nuclear Information System (INIS)
Gal'tsov, D.V.; Ershov, A.A.
1988-01-01
The interaction of fermions of arbitrary isospin with regular magnetic monopoles and dyons of the group SU(2), and also with point gravitating monopoles and dyons of the Wu-Yang type described by the Reissner-Nordstrom metric, is studied using the Newman-Penrose complex isotropic tetrad formalism. Formulas for the bound-state spectrum and explicit expressions for the zero modes are obtained, and the Rubakov-Callan effect for black holes is discussed.
Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila
2018-05-07
Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified by the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, always yields 1.0 for self-similarity, and can be used directly with support vector machines (SVMs) in classification problems, in contrast to the normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big-data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than the local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram-based mismatch kernels, the hidden Markov model based SAM and Fisher kernels, and the protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel.
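As a toy illustration of the idea, not the published LZW-Kernel formula, one can extract the code words that LZW compression would add to its dictionary and compare two sequences by a cosine-style overlap of their code-word sets. Like the real kernel, this is symmetric, positive, and yields 1.0 for self-similarity.

```python
import math

def lzw_codewords(s):
    """Set of phrases entered into the LZW dictionary while compressing s."""
    words = set(s)  # initial dictionary: the single symbols occurring in s
    w = ""
    for ch in s:
        wc = w + ch
        if wc in words:
            w = wc          # extend the current phrase
        else:
            words.add(wc)   # a new code word enters the dictionary
            w = ch
    return words

def lzw_similarity(a, b):
    """Cosine-style overlap of two code-word sets (illustrative only)."""
    wa, wb = lzw_codewords(a), lzw_codewords(b)
    return len(wa & wb) / math.sqrt(len(wa) * len(wb))
```

Like the LZW-Kernel itself, the dictionary extraction is a single pass over the sequence, which is what makes compression-based similarity measures attractive for large-scale analysis.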
Wave functions, evolution equations and evolution kernels from light-ray operators of QCD
International Nuclear Information System (INIS)
Mueller, D.; Robaschik, D.; Geyer, B.; Dittes, F.M.; Horejsi, J.
1994-01-01
The widely used nonperturbative wave functions and distribution functions of QCD are determined as matrix elements of light-ray operators. These operators appear as the large-momentum limit of non-local hadron operators or as summed-up local operators in light-cone expansions. Nonforward one-particle matrix elements of such operators lead to new distribution amplitudes describing both hadrons simultaneously. These distribution functions depend on two scaling variables, in addition to other variables. They are applied to the description of exclusive virtual Compton scattering in the Bjorken region near the forward direction and to the two-meson production process. The evolution equations for these distribution amplitudes are derived on the basis of the renormalization group equation of the considered operators. It follows that the evolution kernels are also determined by the anomalous dimensions of these operators. Relations between different evolution kernels (especially the Altarelli-Parisi and the Brodsky-Lepage kernels) are derived and explicitly checked against the existing two-loop calculations of QCD. The technical basis of these results is the support and analyticity properties of the anomalous dimensions of light-ray operators, obtained with the help of the α-representation of Green's functions. (orig.)
The Isotropic Radio Background and Annihilating Dark Matter
Energy Technology Data Exchange (ETDEWEB)
Hooper, Dan [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Belikov, Alexander V. [Institut d' Astrophysique (France); Jeltema, Tesla E. [Univ. of California, Santa Cruz, CA (United States); Linden, Tim [Univ. of California, Santa Cruz, CA (United States); Profumo, Stefano [Univ. of California, Santa Cruz, CA (United States); Slatyer, Tracy R. [Princeton Univ., Princeton, NJ (United States)
2012-11-01
Observations by ARCADE-2 and other telescopes sensitive to low frequency radiation have revealed the presence of an isotropic radio background with a hard spectral index. The intensity of this observed background is found to exceed the flux predicted from astrophysical sources by a factor of approximately 5-6. In this article, we consider the possibility that annihilating dark matter particles provide the primary contribution to the observed isotropic radio background through the emission of synchrotron radiation from electron and positron annihilation products. For reasonable estimates of the magnetic fields present in clusters and galaxies, we find that dark matter could potentially account for the observed radio excess, but only if it annihilates mostly to electrons and/or muons, and only if it possesses a mass in the range of approximately 5-50 GeV. For such models, the annihilation cross section required to normalize the synchrotron signal to the observed excess is sigma v ~ (0.4-30) x 10^-26 cm^3/s, similar to the value predicted for a simple thermal relic (sigma v ~ 3 x 10^-26 cm^3/s). We find that in any scenario in which dark matter annihilations are responsible for the observed excess radio emission, a significant fraction of the isotropic gamma ray background observed by Fermi must result from dark matter as well.
Superfluid ³He in globally isotropic random media
Ikeda, Ryusuke; Aoyama, Kazushi
2009-02-01
Recent theoretical and experimental studies of superfluid ³He in aerogels with a global anisotropy created, e.g., by an external stress have definitely shown that the A-like phase with equal-spin pairing in such aerogel samples is in the Anderson-Brinkman-Morel (ABM) (or axial) pairing state. In this paper, the A-like phase of superfluid ³He in globally isotropic aerogel is studied in detail by assuming a weakly disordered system in which singular topological defects are absent. Through calculation of the free energy, a disordered ABM state is found to be the best candidate for the pairing state of the globally isotropic A-like phase. Further, it is found through a one-loop renormalization-group calculation that the coreless continuous vortices (or vortex Skyrmions) are irrelevant to the long-distance behavior of disorder-induced textures, and that superfluidity is maintained in spite of the lack of conventional superfluid long-range order. Therefore, the globally isotropic A-like phase at weak disorder is, as in the case of a globally stretched anisotropy, a glass phase with ABM pairing that shows superfluidity.
Isotropic transmission of magnon spin information without a magnetic field.
Haldar, Arabinda; Tian, Chang; Adeyeye, Adekunle Olusola
2017-07-01
Spin-wave devices (SWDs), which use collective excitations of electronic spins as a carrier of information, are rapidly emerging as potential candidates for post-semiconductor non-charge-based technology. Isotropic in-plane propagation of coherent spin waves (magnons), which requires the magnetization to be out of plane, is desirable in an SWD. However, because low-damping perpendicular magnetic materials are not available, the well-known in-plane ferrimagnet yttrium iron garnet (YIG) is usually used with a large out-of-plane bias magnetic field, which tends to offset the benefits of isotropic spin waves. We experimentally demonstrate an SWD that eliminates the requirement of an external magnetic field to obtain perpendicular magnetization in an otherwise in-plane ferromagnet, Ni₈₀Fe₂₀ or permalloy (Py), a typical choice for spin-wave microconduits. Perpendicular anisotropy in Py, as established by magnetic hysteresis measurements, was induced by an exchange-coupled Co/Pd multilayer. Isotropic propagation of magnon spin information has been experimentally shown in microconduits with three channels patterned at arbitrary angles.
Depth migration in transversely isotropic media with explicit operators
Energy Technology Data Exchange (ETDEWEB)
Uzcategui, Omar [Colorado School of Mines, Golden, CO (United States)
1994-12-01
The author presents and analyzes three approaches to calculating explicit two-dimensional (2D) depth-extrapolation filters for all propagation modes (P, SV, and SH) in transversely isotropic media with vertical and tilted axis of symmetry. These extrapolation filters are used to do 2D poststack depth migration, and also, just as for isotropic media, these 2D filters are used in the McClellan transformation to do poststack 3D depth migration. Furthermore, the same explicit filters can also be used to do depth-extrapolation of prestack data. The explicit filters are derived by generalizations of three different approaches: the modified Taylor series, least-squares, and minimax methods initially developed for isotropic media. The examples here show that the least-squares and minimax methods produce filters with accurate extrapolation (measured in the ability to position steep reflectors) for a wider range of propagation angles than that obtained using the modified Taylor series method. However, for low propagation angles, the modified Taylor series method has smaller amplitude and phase errors than those produced by the least-squares and minimax methods. These results suggest that to get accurate amplitude estimation, modified Taylor series filters would be somewhat preferred in areas with low dips. In areas with larger dips, the least-squares and minimax methods would give a distinctly better delineation of the subsurface structures.
Lattice Boltzmann model for three-dimensional decaying homogeneous isotropic turbulence
International Nuclear Information System (INIS)
Xu Hui; Tao Wenquan; Zhang Yan
2009-01-01
We implement a lattice Boltzmann method (LBM) for decaying homogeneous isotropic turbulence based on an analogous Galerkin filter and focus on the fundamental statistical isotropy property. The regularized method is constructed on an orthogonal Hermite polynomial space. For decaying homogeneous isotropic turbulence, the regularized method reproduces the isotropy property very well. Numerical studies demonstrate that the novel regularized LBM is a promising approximation of turbulent fluid flows, which paves the way for coupling various turbulence models with the LBM.
Kernel based eigenvalue-decomposition methods for analysing ham
DEFF Research Database (Denmark)
Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming
2010-01-01
… methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel-based versions of these transformations. This meant implementing the kernel-based methods and developing new theory, since kernel-based MAF and MNF are not yet described in the literature. The traditional methods only … have two factors that are useful for segmentation, and none of them can be used to segment the two types of meat. The kernel-based methods have many useful factors and are able to capture the subtle differences in the images. This is illustrated in Figure 1. A comparison of the most … useful factor of PCA and of kernel-based PCA, respectively, is shown in Figure 2. The factor of the kernel-based PCA turned out to be able to segment the two types of meat, and in general that factor is much more distinct than the traditional one. After the orthogonal transformation, a simple thresholding …
Four-particle scattering with three-particle interactions
International Nuclear Information System (INIS)
Adhikari, S.K.
1979-01-01
The four-particle scattering formalism proposed independently by Alessandrini, by Mitra et al., by Rosenberg, and by Takahashi and Mishima is extended to include a possible three-particle interaction. The kernel of the new equations contains both two- and three-body connected parts and becomes four-body connected after one iteration. By contrast, the kernel of the original equations, in the absence of three-particle interactions, has no two-body connected part. We also write scattering equations for the transition operators connecting the two-body fragmentation channels; these are generalizations of the Sloan equations in the presence of three-particle interactions. We indicate how to include approximately the effect of a weak three-particle interaction in a practical four-particle scattering calculation.
Low energy neutron scattering for energy dependent cross sections. General considerations
Energy Technology Data Exchange (ETDEWEB)
Rothenstein, W; Dagan, R [Technion-Israel Inst. of Tech., Haifa (Israel). Dept. of Mechanical Engineering
1996-12-01
We consider in this paper some aspects related to neutron scattering at low energies by nuclei which are subject to thermal agitation. The scattering is determined by a temperature dependent joint scattering kernel, or the corresponding joint probability density, which is a function of two variables, the neutron energy after scattering, and the cosine of the angle of scattering, for a specified energy and direction of motion of the neutron, before the interaction takes place. This joint probability density is easy to calculate, when the nucleus which causes the scattering of the neutron is at rest. It can be expressed by a delta function, since there is a one to one correspondence between the neutron energy change, and the cosine of the scattering angle. If the thermal motion of the target nucleus is taken into account, the calculation is rather more complicated. The delta function relation between the cosine of the angle of scattering and the neutron energy change is now averaged over the spectrum of velocities of the target nucleus, and becomes a joint kernel depending on both these variables. This function has a simple form, if the target nucleus behaves as an ideal gas, which has a scattering cross section independent of energy. An energy dependent scattering cross section complicates the treatment further. An analytic expression is no longer obtained for the ideal gas temperature dependent joint scattering kernel as a function of the neutron energy after the interaction and the cosine of the scattering angle. Instead the kernel is expressed by an inverse Fourier Transform of a complex integrand, which is averaged over the velocity spectrum of the target nucleus. (Abstract Truncated)
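For the simplest case mentioned above, a target nucleus at rest, the one-to-one correspondence between neutron energy change and scattering cosine follows from elementary two-body kinematics. A minimal sketch (standard textbook formulas, not the paper's energy-dependent treatment; A is the target-to-neutron mass ratio, energies in arbitrary units):

```python
import math

def outgoing_energy(E, A, mu_cm):
    """Neutron energy after elastic scattering off a free nucleus at rest.
    A is the target-to-neutron mass ratio, mu_cm the centre-of-mass
    scattering cosine; standard two-body kinematics."""
    return E * (A * A + 2.0 * A * mu_cm + 1.0) / (A + 1.0) ** 2

def lab_cosine(A, mu_cm):
    """Laboratory scattering cosine corresponding to mu_cm (target at rest)."""
    return (1.0 + A * mu_cm) / math.sqrt(A * A + 2.0 * A * mu_cm + 1.0)
```

For each mu_cm there is exactly one outgoing energy, which is the one-to-one correspondence expressed by the delta function; averaging this relation over the target-velocity spectrum is what turns the delta function into the temperature-dependent joint kernel discussed above.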
International Nuclear Information System (INIS)
Stirling, W.G.; Perry, S.C.
1996-01-01
We outline the theoretical and experimental background to neutron scattering studies of critical phenomena at magnetic and structural phase transitions. The displacive phase transition of SrTiO₃ is discussed, along with examples from recent work on magnetic materials from the rare-earth (Ho, Dy) and actinide (NpAs, NpSb, USb) classes. The impact of synchrotron X-ray scattering is discussed in conclusion. (author) 13 figs., 18 refs
Classification of maize kernels using NIR hyperspectral imaging
DEFF Research Database (Denmark)
Williams, Paul; Kucheryavskiy, Sergey V.
2016-01-01
NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual … and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on a production scale.
Embedded real-time operating system micro kernel design
Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng
2005-12-01
Embedded systems usually require a real-time character. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section processing, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here specifies the position, definition, function and principle of the micro kernel. The kernel runs on an ATMEL AT89C51 microcontroller platform. Simulation results show that the designed micro kernel is stable and reliable and responds quickly while operating in an application system.
An SVM model with hybrid kernels for hydrological time series
Wang, C.; Wang, H.; Zhao, X.; Xie, Q.
2017-12-01
Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as the hydrologic response to climate/weather. When using an SVM, the choice of the kernel function plays a key role. Conventional SVM models mostly use a single type of kernel function, e.g., the radial basis kernel function. Given that several featured kernel functions are available, each with its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to the SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of the radial basis kernel and the polynomial kernel for the forecast of monthly flowrate at two gaging stations using the SVM approach. The results indicate significant improvement in the accuracy of the predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such a hybrid kernel approach for SVM applications.
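A hybrid kernel of this kind is straightforward to set up, since a weighted sum of valid kernels with non-negative weights is itself a valid kernel. A minimal scikit-learn sketch (the weight and kernel parameters are illustrative, not the paper's values, and the toy series merely stands in for a monthly flow record):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def hybrid_kernel(X, Y, w=0.6, gamma=0.5, degree=2):
    """Convex combination of an RBF kernel and a polynomial kernel."""
    return w * rbf_kernel(X, Y, gamma=gamma) + (1.0 - w) * polynomial_kernel(X, Y, degree=degree)

# Toy stand-in for a monthly flow record: predict the next month's value
# from the previous three months.
rng = np.random.default_rng(0)
t = np.arange(60)
flow = 10.0 + 3.0 * np.sin(2.0 * np.pi * t / 12.0) + rng.normal(0.0, 0.3, 60)
X = np.array([flow[i:i + 3] for i in range(57)])
y = flow[3:]

model = SVR(kernel=hybrid_kernel)   # scikit-learn accepts a callable kernel
model.fit(X[:45], y[:45])
pred = model.predict(X[45:])
```

In practice the combination weight would be tuned together with the other hyperparameters, e.g. by cross-validation.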
Influence of wheat kernel physical properties on the pulverizing process.
Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula
2014-10-01
The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ kg⁻¹ to 159 kJ kg⁻¹. On the basis of the data obtained, many significant correlations (p < 0.05) were found between kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained on the basis of the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined on the basis of the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.
Dose point kernels for beta-emitting radioisotopes
International Nuclear Information System (INIS)
Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.
1986-01-01
Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. That treatment neglects fluctuations in energy deposition, an effect which was later incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed, providing a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been obtained through an extension of the Bethe-Bacher approximation and tested against the exact expression. Simplified expressions for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for ³²P with experimental data indicates good agreement, with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables
Hadamard Kernel SVM with applications for breast cancer outcome predictions.
Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong
2017-12-21
Breast cancer is one of the leading causes of death for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM, for its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention. But how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (the Hadamard kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of Area under the ROC Curve (AUC) values on a number of real-world data sets adopted to test the performance of the different methods. Hadamard-kernel SVM is effective for breast cancer prediction, in terms of both prognosis and diagnosis, and may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.
Parameter optimization in the regularized kernel minimum noise fraction transformation
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2012-01-01
Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by earlier work, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.
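The refined grid search can be sketched generically (a sketch only: the `score` callable stands in for the model SNR of the regularized kernel MNF, and the shrink factor, grid size and number of passes are illustrative):

```python
import numpy as np

def refined_grid_search(score, bounds, steps=3, n=5):
    """Maximize score((p0, p1)) over a log-spaced grid that is re-centred
    and narrowed around the best point found in each pass, as in a
    multi-step refined grid search.  bounds: (low, high) per parameter."""
    best, best_val = None, -np.inf
    for _ in range(steps):
        axes = [np.logspace(np.log10(lo), np.log10(hi), n) for lo, hi in bounds]
        for p0 in axes[0]:
            for p1 in axes[1]:
                val = score((p0, p1))
                if val > best_val:
                    best, best_val = (p0, p1), val
        # narrow the search interval around the current optimum
        bounds = [(p / 3.0, p * 3.0) for p in best]
    return best
```

With, say, a kernel scale and a regularization parameter as the two coordinates, 2-4 passes usually locate the maximizer of a smooth, unimodal criterion to within a few percent on a log scale.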
Analysis of Advanced Fuel Kernel Technology
International Nuclear Information System (INIS)
Oh, Seung Chul; Jeong, Kyung Chai; Kim, Yeon Ku; Kim, Young Min; Kim, Woong Ki; Lee, Young Woo; Cho, Moon Sung
2010-03-01
The reference fuel for prismatic reactor concepts is based on use of an LEU UCO TRISO fissile particle. This fuel form was selected in the early 1980s for large high-temperature gas-cooled reactor (HTGR) concepts using LEU, and the selection was reconfirmed for modular designs in the mid-1980s. Limited existing irradiation data on LEU UCO TRISO fuel indicate the need for a substantial improvement in performance with regard to in-pile gaseous fission product release. Existing accident testing data on LEU UCO TRISO fuel are extremely limited, but it is generally expected that performance would be similar to that of LEU UO₂ TRISO fuel if performance under irradiation were successfully improved. Initial HTGR fuel technology was based on carbide fuel forms. In the early 1980s, as HTGR technology was transitioning from high-enriched uranium (HEU) fuel to LEU fuel, an initial effort focused on an LEU prismatic design for large HTGRs resulted in the selection of UCO kernels for the fissile particles and thorium oxide (ThO₂) for the fertile particles. The primary reason for selection of the UCO kernel over UO₂ was reduced CO pressure, allowing higher burnup for equivalent coating thicknesses and reduced potential for kernel migration, an important failure mechanism in earlier fuels. A subsequent assessment in the mid-1980s considering modular HTGR concepts again reached agreement on UCO for the fissile particle in a prismatic design. In the early 1990s, plant cost-reduction studies led to a decision to change the fertile material from thorium to natural uranium, primarily because of the lower long-term decay heat level of the natural uranium particles. Ongoing economic optimization in combination with the anticipated capabilities of the UCO particles resulted in a peak fissile particle burnup projection of 26% FIMA in steam cycle and gas turbine concepts
Perruisseau-Carrier, A; Bahlouli, N; Bierry, G; Vernet, P; Facca, S; Liverneaux, P
2017-12-01
Augmented reality could help the identification of nerve structures in brachial plexus surgery. The goal of this study was to determine which law of mechanical behavior was better suited, by comparing the results of Hooke's isotropic linear elastic law to those of Ogden's isotropic hyperelastic law, applied to a biomechanical model of the brachial plexus. A finite element model was created using the ABAQUS® software from a 3D model of the brachial plexus acquired by segmentation and meshing of MRI images at 0°, 45° and 135° of shoulder abduction of a healthy subject. The offset between the reconstructed model and the deformed model was evaluated quantitatively by the Hausdorff distance and qualitatively by the identification of 3 anatomical landmarks. In every case the Hausdorff distance was shorter with Ogden's law than with Hooke's law. Qualitatively, the model deformed by Ogden's law followed the concavity of the reconstructed model, whereas the model deformed by Hooke's law remained convex. In conclusion, the results of this study demonstrate that Ogden's isotropic hyperelastic mechanical model was better suited to modeling the deformations of the brachial plexus. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
New developments in the C_N method
International Nuclear Information System (INIS)
Grandjean, Paul; Kavenoky, Alain.
1975-01-01
The most recent developments of the C_N method used for solving transport equations are presented: treatment of the Rayleigh scattering kernel in plane geometry and of cylindrical problems with an isotropic scattering law. [fr]
Learning Rotation for Kernel Correlation Filter
Hamdi, Abdullah
2017-08-11
Kernel Correlation Filters have shown a very promising scheme for visual tracking in terms of speed and accuracy on several benchmarks. However, the approach suffers from problems that affect its performance, such as occlusion, rotation and scale change. This paper tries to tackle the problem of rotation by reformulating the optimization problem for learning the correlation filter. This modification (RKCF) includes learning a rotation filter that utilizes the circulant structure of the HOG feature to estimate rotation from one frame to another and enhance the detection of KCF. Hence it gains a boost in overall accuracy on many of the OTB50 dataset videos with minimal additional computation.
Research of Performance Linux Kernel File Systems
Directory of Open Access Journals (Sweden)
Andrey Vladimirovich Ostroukh
2015-10-01
Full Text Available The article describes the most common Linux kernel file systems. The research was carried out on a personal computer whose characteristics are given in the article; the study was performed on a typical workstation running GNU/Linux with those characteristics. The necessary software for measuring file-system performance was installed on the computer. Based on the results, conclusions are drawn and recommendations proposed for the use of the file systems, identifying the best ways to store data.
Fixed kernel regression for voltammogram feature extraction
International Nuclear Information System (INIS)
Acevedo Rodriguez, F J; López-Sastre, R J; Gil-Jiménez, P; Maldonado Bascón, S; Ruiz-Reyes, N
2009-01-01
Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals
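One plausible reading of fixed kernel regression as a feature extractor (a sketch under stated assumptions, not the authors' formulation): project the voltammogram onto a small set of Gaussian kernels at fixed, evenly spaced positions and keep the least-squares coefficients as the features; the number of kernels and their width are illustrative choices.

```python
import numpy as np

def fixed_kernel_features(signal, n_kernels=8, width=None):
    """Project a 1-D signal onto Gaussian kernels at fixed, evenly spaced
    centres and return the least-squares coefficients as features."""
    signal = np.asarray(signal, dtype=float)
    x = np.linspace(0.0, 1.0, len(signal))        # normalized abscissa
    centers = np.linspace(0.0, 1.0, n_kernels)
    if width is None:
        width = 1.0 / n_kernels
    # design matrix: one column per fixed kernel
    Phi = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)
    coef, *_ = np.linalg.lstsq(Phi, signal, rcond=None)
    return coef
```

The resulting low-dimensional coefficient vector can then be fed to any classifier in place of the raw voltammogram.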
Reciprocity relation for multichannel coupling kernels
International Nuclear Information System (INIS)
Cotanch, S.R.; Satchler, G.R.
1981-01-01
Assuming time-reversal invariance of the many-body Hamiltonian, it is proven that the kernels in a general coupled-channels formulation are symmetric, to within a specified spin-dependent phase, under the interchange of channel labels and coordinates. The theorem is valid for both Hermitian and suitably chosen non-Hermitian Hamiltonians which contain complex effective interactions. While of direct practical consequence for nuclear rearrangement reactions, the reciprocity relation is also appropriate for other areas of physics which involve coupled-channels analysis
Wheat kernel dimensions: how do they contribute to kernel weight at ...
Indian Academy of Sciences (India)
2011-12-02
… yield components, is greatly influenced by kernel dimensions (KD), such as … six linkage gaps, and it covered 3010.70 cM of the whole genome with an … Ersoz E. et al. 2009 The genetic architecture of maize flowering.
DEFF Research Database (Denmark)
Arenas-Garcia, J.; Petersen, K.; Camps-Valls, G.
2013-01-01
… correlation analysis (CCA), and orthonormalized PLS (OPLS), as well as their nonlinear extensions derived by means of the theory of reproducing kernel Hilbert spaces (RKHSs). We also review their connections to other methods for classification and statistical dependence estimation and introduce some recent …
Numerical modelling of multiple scattering between two elastical particles
DEFF Research Database (Denmark)
Bjørnø, Irina; Jensen, Leif Bjørnø
1998-01-01
Multiple acoustical signal interactions with sediment particles in the vicinity of the seabed may significantly change the course of sediment concentration profiles determined by inversion from acoustical backscattering measurements. The scattering properties of high concentrations of sediments in suspension have been studied extensively since Foldy's formulation of his theory for isotropic scattering by randomly distributed scatterers. However, a number of important problems related to multiple scattering are still far from finding their solutions. A particular, but still unsolved, problem is the question of proximity thresholds for the influence of multiple scattering in terms of particle properties such as volume fraction, average distance between particles or other related parameters. A few available experimental data indicate a significance of multiple scattering in suspensions where the concentration …
Resonance scattering of Rayleigh waves by a mass defect
International Nuclear Information System (INIS)
Croitoru, M.; Grecu, D.
1978-06-01
The resonance scattering of an incident Rayleigh wave by a mass defect extending over a small cylindrical region situated in the surface of a semi-infinite isotropic, elastic medium is investigated by means of the Green's function method. The form of the differential cross-section for the scattering into different channels exhibits a strong resonance phenomenon at two frequencies. The expression of the resonance frequencies as well as of the corresponding widths depends on the relative change in mass density. The main assumption that the wavelengths of incoming and scattered wave are large compared to the defect dimension implies a large relative mass-density change. (author)
Intra-connected three-dimensionally isotropic bulk negative index photonic metamaterial
International Nuclear Information System (INIS)
Guney, Durdu; Koschny, Thomas; Soukoulis, Costas
2010-01-01
Isotropic negative index metamaterials (NIMs) are highly desired, particularly for the realization of ultra-high-resolution lenses. However, existing isotropic NIMs function only two-dimensionally and cannot be miniaturized beyond microwaves. Direct laser writing processes can be a paradigm shift toward the fabrication of three-dimensionally (3D) isotropic bulk optical metamaterials, but only at the expense of an additional design constraint, namely connectivity. Here, we demonstrate with a proof-of-principle design that the connectivity requirement does not preclude fully isotropic left-handed behavior. This is an important step towards the realization of bulk 3D isotropic NIMs at optical wavelengths.
International Nuclear Information System (INIS)
Markovic, M. I.; Radunovic, J. B.
1976-01-01
Determination of the spatial distribution of the neutron flux in water, the moderator most frequently used in thermal reactors, requires the dependence of the microscopic scattering kernels on the cosine of the thermal-neutron scattering angle when solving the Boltzmann equation. Since the spatial orientation of the water molecules influences this dependence, it is necessary to perform orientation averaging of the rotation-vibrational intermediate scattering function for water molecules. The calculations described in this paper and the results obtained showed that the methods of orientation averaging do not influence the anisotropy of thermal-neutron scattering on water molecules, but do influence the inelastic scattering.
Kernel learning at the first level of inference.
Cawley, Gavin C; Talbot, Nicola L C
2014-05-01
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
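The idea can be caricatured for an LS-SVM-style regressor (a toy sketch, not the authors' procedure: the coefficients here have a closed ridge-regression form for each candidate kernel width, and the width is chosen by minimizing the training criterion plus a regularizer acting on the kernel parameter, so that only the two regularization constants remain for model selection):

```python
import numpy as np

def rbf(X, gamma):
    """RBF Gram matrix for row-vector samples X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def lssvm_coefficients(X, y, gamma, lam):
    """Closed-form LS-SVM / kernel-ridge coefficients for a fixed width."""
    K = rbf(X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(y)), y), K

def first_level_kernel_learning(X, y, lam=1e-2, nu=1e-3):
    """Select the kernel width gamma at the first level of inference by
    minimizing J = ||y - K a||^2 + lam * a'Ka + nu * (log10 gamma)^2,
    with the coefficients a given in closed form for each gamma."""
    best = None
    for lg in np.linspace(-3.0, 3.0, 61):
        gamma = 10.0 ** lg
        alpha, K = lssvm_coefficients(X, y, gamma, lam)
        r = y - K @ alpha
        J = r @ r + lam * alpha @ K @ alpha + nu * lg ** 2
        if best is None or J < best[0]:
            best = (J, gamma, alpha)
    return best[1], best[2]
```

In the paper the kernel parameters are optimized jointly with the coefficients (e.g. by gradient methods, possibly over many ARD parameters); the scan above only illustrates the first-level selection principle with a single width.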
Han, Quan-Fu; Liu, Yue-Lin; Zhang, Ying; Ding, Fang; Lu, Guang-Hong
2018-04-01
The solubility and bubble formation of hydrogen (H) in tungsten (W) are crucial factors for the application of W as a plasma-facing component in a fusion environment, but the data and mechanisms are presently scattered, indicating that some important factors might have been neglected. High-energy neutron irradiation of W inevitably causes local strain, which may change the solubility of H in W. Here, we performed first-principles calculations to predict the H solution behaviour under isotropic strain combined with temperature effects in W, and found that the H solubility in the interstitial lattice can be promoted/impeded by isotropic tensile/compressive strain over the temperature range 300-1800 K. The calculated H solubility is in good agreement with experiment. Taken together with our previous results on anisotropic strain, all strain states except isotropic compression (both isotropic tension and anisotropic tension/compression) enhance H solution. This reveals an important physical implication for H accumulation and bubble formation in W: strain can enhance H solubility, resulting in preliminary nucleation of an H bubble; the bubble further causes local strain of the W lattice around it, which in turn raises the H solubility in the strained region and promotes continuous growth of the H bubble via a chain-reaction effect. This result can also explain H bubble formation even when no radiation damage is produced in W exposed to low-energy H plasma.
Time-dependent scattering in resonance lines
International Nuclear Information System (INIS)
Kunasz, P.B.
1983-01-01
A numerical finite-difference method is presented for the problem of time-dependent line transfer in a finite slab in which material density is sufficiently low that the time of flight between scatterings greatly exceeds the relaxation time of the upper state of the scattering transition. The medium is assumed to scatter photons isotropically, with complete frequency redistribution. Numerical solutions are presented for a homogeneous, time-independent slab illuminated by an externally imposed radiation field which enters the slab at t = 0. Graphical results illustrate relaxation to steady state of trapped internal radiation, emergent energy, and emergent profiles. A review of the literature is also given in which the time-dependent line transfer problem is discussed in the context of recent analytical work
The Kernel Estimation in Biosystems Engineering
Directory of Open Access Journals (Sweden)
Esperanza Ayuga Téllez
2008-04-01
Full Text Available In many fields of biosystems engineering, it is common to find works in which statistical information is analysed that violates the basic hypotheses necessary for conventional forecasting methods. For those situations, it is necessary to find alternative methods that allow the statistical analysis in spite of those infringements. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are used rather than an "a priori" assumption about the global shape of the target function (e.g., linear or quadratic). In this paper a few basic decision rules are stated for the application of the non-parametric estimation method. These statistical rules set up the first step towards building a user-method interface for the consistent application of kernel estimation by non-expert users. To reach this aim, univariate and multivariate estimation methods and density functions were analysed, as well as regression estimators. In some cases the models to be applied in different situations, based on simulations, were defined. Different biosystems-engineering applications of kernel estimation are also analysed in this review.
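The univariate density case described above can be sketched in a few lines: a Gaussian kernel centred on each observation, with the bandwidth chosen by Silverman's rule of thumb, a common default when no global shape is assumed for the target density. The function names are illustrative:

```python
import numpy as np

def gaussian_kde(data, grid):
    """Univariate kernel density estimate with a Gaussian kernel.

    The bandwidth h follows Silverman's rule of thumb; each grid point's
    density is the average of kernels centred on the observations."""
    data = np.asarray(data, dtype=float)
    n = data.size
    h = 1.06 * data.std(ddof=1) * n ** (-1 / 5)   # Silverman's rule
    u = (grid[:, None] - data[None, :]) / h
    kernels = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return kernels.sum(axis=1) / (n * h)
```

Only local data (through the kernel weights) determine the estimate at each point, which is what makes the approach robust to violations of global parametric assumptions.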
Consistent Valuation across Curves Using Pricing Kernels
Directory of Open Access Journals (Sweden)
Andrea Macrina
2018-03-01
Full Text Available The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets. As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised and single-curve models extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.
Aligning Biomolecular Networks Using Modular Graph Kernels
Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant
Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
Pareto-path multitask multiple kernel learning.
Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C
2015-01-01
A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.
Formal truncations of connected kernel equations
International Nuclear Information System (INIS)
Dixon, R.M.
1977-01-01
The Connected Kernel Equations (CKE) of Alt, Grassberger and Sandhas (AGS); Kouri, Levin and Tobocman (KLT); and Bencze, Redish and Sloan (BRS) are compared against reaction-theory criteria after formal channel-space and/or operator truncations have been introduced. The Channel Coupling Class concept is used to study the structure of these CKEs. The related wave-function formalisms of Sandhas, of L'Huillier, Redish and Tandy, and of Kouri, Krueger and Levin are also presented. New N-body connected kernel equations which are generalizations of the Lovelace three-body equations are derived. A method for systematically constructing fewer-body models from the N-body BRS and generalized Lovelace (GL) equations is developed. The formally truncated AGS, BRS, KLT and GL equations are analyzed by employing the criteria of reciprocity and two-cluster unitarity. Reciprocity considerations suggest that formal truncations of the BRS, KLT and GL equations can lead to reciprocity-violating results. This study suggests that atomic problems should employ three-cluster connected truncations and that two-cluster connected truncations should be a useful starting point for nuclear systems.
Scientific Computing Kernels on the Cell Processor
Energy Technology Data Exchange (ETDEWEB)
Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine
2007-04-04
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
Delimiting areas of endemism through kernel interpolation.
Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J
2015-01-01
We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of the species distributions, with areas of influence defined by the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
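One illustrative reading of the centroid-plus-kernel idea (a sketch under assumed conventions, not the authors' implementation) is to centre a kernel on each species' range centroid, with a width tied to the centroid-to-farthest-occurrence distance, and to read areas of endemism off the peaks of the accumulated surface:

```python
import numpy as np

def endemism_surface(species_points, grid_xy):
    """GIE-like surface (illustrative sketch).

    species_points: dict mapping species name -> (k, 2) array of x, y
    occurrences. For each species we take the centroid of its occurrences
    and a radius equal to the distance from the centroid to the farthest
    occurrence (its area of influence), then accumulate a Gaussian kernel
    centred on the centroid with that radius as bandwidth."""
    surface = np.zeros(len(grid_xy))
    for pts in species_points.values():
        pts = np.asarray(pts, dtype=float)
        c = pts.mean(axis=0)                       # centroid of the range
        r = np.linalg.norm(pts - c, axis=1).max()  # area-of-influence radius
        h = max(r, 1e-9)                           # guard degenerate ranges
        d2 = ((grid_xy - c) ** 2).sum(axis=1)
        surface += np.exp(-0.5 * d2 / h ** 2)
    return surface  # high values = overlapping ranges = candidate endemic area
```

Because the kernels are continuous, the resulting areas have the fuzzy edges the abstract describes, with no dependence on a grid-cell discretisation.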
International Nuclear Information System (INIS)
Marleau, G.; Debos, E.
1998-01-01
One of the main problems encountered in cell calculations is that of spatial homogenization, where one associates with a heterogeneous cell a homogeneous set of cross sections. The homogenization process is in fact trivial when a totally reflected cell without leakage is fully homogenized, since it involves only a flux-volume weighting of the isotropic cross sections. When anisotropic leakage models are considered, in addition to homogenizing the isotropic cross sections, the anisotropic scattering cross section must also be considered. The simple option, which consists of using the same homogenization procedure for both the isotropic and anisotropic components of the scattering cross section, leads to inconsistencies between the homogeneous and homogenized transport equations. Here we present a method for homogenizing the anisotropic scattering cross sections that resolves these inconsistencies. (author)
On a hierarchical construction of the anisotropic LTSN solution from the isotropic LTSN solution
International Nuclear Information System (INIS)
Foletto, Taline; Segatto, Cynthia F.; Bodmann, Bardo E.; Vilhena, Marco T.
2015-01-01
In this work, we present a recursive scheme targeting the hierarchical construction of the anisotropic LTSN solution from the isotropic LTSN solution. The main idea relies on the decomposition of the associated anisotropic LTSN matrix as a sum of two matrices, one containing the isotropic part and the other the anisotropic part of the problem. The matrix containing the anisotropic part is treated as the source of the isotropic problem. This problem is solved by decomposing the angular flux as a truncated series of intermediate functions, which are substituted into the split isotropic equation to construct a set of recursive isotropic problems that are readily solved by the classical isotropic LTSN method. We apply this methodology to problems with homogeneous and heterogeneous anisotropic regions. Numerical results are presented and compared with the classical anisotropic LTSN solution. (author)
Scanning anisotropy parameters in horizontal transversely isotropic media
Masmoudi, Nabil
2016-10-12
The horizontal transversely isotropic model, with arbitrary symmetry axis orientation, is the simplest effective representative that explains the azimuthal behaviour of seismic data. Estimating the anisotropy parameters of this model is important in reservoir characterisation, specifically in terms of fracture delineation. We propose a travel-time-based approach to estimate the anellipticity parameter η and the symmetry axis azimuth ϕ of a horizontal transversely isotropic medium, given an inhomogeneous elliptic background model (which might be obtained from velocity analysis and well velocities). This is accomplished through a Taylor's series expansion of the travel-time solution (of the eikonal equation) as a function of parameter η and azimuth angle ϕ. The accuracy of the travel time expansion is enhanced by the use of Shanks transform. This results in an accurate approximation of the solution of the non-linear eikonal equation and provides a mechanism to scan simultaneously for the best fitting effective parameters η and ϕ, without the need for repetitive modelling of travel times. The analysis of the travel time sensitivity to parameters η and ϕ reveals that travel times are more sensitive to η than to the symmetry axis azimuth ϕ. Thus, η is better constrained from travel times than the azimuth. Moreover, the two-parameter scan in the homogeneous case shows that errors in the background model affect the estimation of η and ϕ differently. While a gradual increase in errors in the background model leads to increasing errors in η, inaccuracies in ϕ, on the other hand, depend on the background model errors. We also propose a layer-stripping method valid for a stack of arbitrary oriented symmetry axis horizontal transversely isotropic layers to convert the effective parameters to the interval layer values.
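The Shanks transform mentioned above is a standard sequence-acceleration device: given three consecutive partial sums it extrapolates to the limit, sharpening a truncated expansion without extra terms. A minimal sketch (a generic implementation, not the paper's travel-time code) applied to a slowly converging series:

```python
import numpy as np

def shanks(seq):
    """One application of the Shanks transform to a sequence of partial
    sums: S_n = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2 A_n).
    Returns a shorter, typically faster-converging sequence."""
    a = np.asarray(seq, dtype=float)
    num = a[2:] * a[:-2] - a[1:-1] ** 2
    den = a[2:] + a[:-2] - 2 * a[1:-1]
    return num / den

# Demonstration: partial sums of the alternating harmonic series -> ln 2
partial = np.cumsum([(-1) ** k / (k + 1) for k in range(10)])
accelerated = shanks(partial)
```

In the paper's setting the same idea is applied to the truncated Taylor expansion of the travel time in η and ϕ, improving its accuracy before the two-parameter scan.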
International Nuclear Information System (INIS)
Botto, D.J.; Pratt, R.H.
1979-05-01
The current status of Compton scattering, both experimental observations and the theoretical predictions, is examined. Classes of experiments are distinguished and the results obtained are summarized. The validity of the incoherent scattering function approximation and the impulse approximation is discussed. These simple theoretical approaches are compared with predictions of the nonrelativistic dipole formula of Gavrila and with the relativistic results of Whittingham. It is noted that the A⁻²-based approximations fail to predict resonances and an infrared divergence, both of which have been observed. It appears that at present the various available theoretical approaches differ significantly in their predictions and that further and more systematic work is required.
Effects of isotropic alpha populations on tokamak ballooning stability
International Nuclear Information System (INIS)
Spong, D.A.; Sigmar, D.J.; Tsang, K.T.; Ramos, J.J.; Hastings, D.E.; Cooper, W.A.
1986-12-01
Fusion product alpha populations can significantly influence tokamak stability due to coupling between the trapped alpha precessional drift and the kinetic ballooning mode frequency. Careful, quantitative evaluations of these effects are necessary in burning plasma devices such as the Tokamak Fusion Test Reactor and the Joint European Torus, and we have continued systematic development of such a kinetic stability model. In this model we have considered a range of different forms for the alpha distribution function and the tokamak equilibrium. Both Maxwellian and slowing-down models have been used for the alpha energy dependence while deeply trapped and, more recently, isotropic pitch angle dependences have been examined
Anisotropy in "isotropic diffusion" measurements due to nongaussian diffusion
DEFF Research Database (Denmark)
Jespersen, Sune Nørhøj; Olesen, Jonas Lynge; Ianuş, Andrada
2017-01-01
Designing novel diffusion-weighted NMR and MRI pulse sequences aiming to probe tissue microstructure with techniques extending beyond the conventional Stejskal-Tanner family is currently of broad interest. One such technique, multidimensional diffusion MRI, has been recently proposed to afford...... model-free decomposition of diffusion signal kurtosis into terms originating from either ensemble variance of isotropic diffusivity or microscopic diffusion anisotropy. This ability rests on the assumption that diffusion can be described as a sum of multiple Gaussian compartments, but this is often...
Isotropic 2D quadrangle meshing with size and orientation control
Pellenard, Bertrand
2011-12-01
We propose an approach for automatically generating isotropic 2D quadrangle meshes from arbitrary domains with a fine control over sizing and orientation of the elements. At the heart of our algorithm is an optimization procedure that, from a coarse initial tiling of the 2D domain, enforces each of the desirable mesh quality criteria (size, shape, orientation, degree, regularity) one at a time, in an order designed not to undo previous enhancements. Our experiments demonstrate how well our resulting quadrangle meshes conform to a wide range of input sizing and orientation fields.
Observation of transverse patterns in an isotropic microchip laser
International Nuclear Information System (INIS)
Chen, Y.F.; Lan, Y.P.
2003-01-01
An isotropic microchip laser is used to study the characteristics of high-order wave functions in a two-dimensional (2D) quantum harmonic oscillator based on the identical functional forms. With a doughnut pump profile, the spontaneous transverse modes are found to, generally, be elliptic and hyperbolic transverse modes. Theoretical analyses reveal that the elliptic transverse modes are analogous to the coherent states of a 2D harmonic oscillator; the formation of hyperbolic transverse modes is a spontaneous mode locking between two identical Hermite-Gaussian modes
Extracting Feature Model Changes from the Linux Kernel Using FMDiff
Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.
2014-01-01
The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically
Replacement Value of Palm Kernel Meal for Maize on Carcass ...
African Journals Online (AJOL)
This study was conducted to evaluate the effect of replacing maize with palm kernel meal on nutrient composition, fatty acid profile and sensory qualities of the meat of turkeys fed the dietary treatments. Six dietary treatments were formulated using palm kernel meal to replace maize at 0, 20, 40, 60, 80 and 100 percent.
Effect of Palm Kernel Cake Replacement and Enzyme ...
African Journals Online (AJOL)
A feeding trial which lasted for twelve weeks was conducted to study the performance of finisher pigs fed five different levels of palm kernel cake replacement for maize (0%, 40%, 40%, 60%, 60%) in a maize-palm kernel cake based ration with or without enzyme supplementation. It was a completely randomized design ...
Capturing option anomalies with a variance-dependent pricing kernel
Christoffersen, P.; Heston, S.; Jacobs, K.
2013-01-01
We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is
Nonlinear Forecasting With Many Predictors Using Kernel Ridge Regression
DEFF Research Database (Denmark)
Exterkate, Peter; Groenen, Patrick J.F.; Heij, Christiaan
This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predi...
Commutators of Integral Operators with Variable Kernels on Hardy ...
Indian Academy of Sciences (India)
Pu Zhang, Kai Zhao. Proceedings – Mathematical Sciences, Volume 115, Issue 4, November 2005, pp. 399-410. Keywords: singular and fractional integrals; variable kernel; commutator; Hardy space.
Discrete non-parametric kernel estimation for global sensitivity analysis
International Nuclear Information System (INIS)
Senga Kiessé, Tristan; Ventura, Anne
2016-01-01
This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently, only the continuous kernel approach had been applied as a metamodeling approach within the sensitivity-analysis framework, for both discrete and continuous input variables. Discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of the sensitivity indices is also presented with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of the ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of the sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.
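To make the distinction between continuous and discrete kernels concrete, a classic discrete kernel for categorical data is the Aitchison-Aitken kernel, which puts weight 1-λ on the observed category and spreads λ over the others. The sketch below estimates a probability mass function with it; this is an illustration of discrete kernel smoothing in general, not the estimator of the paper:

```python
import numpy as np

def aitchison_aitken_pmf(sample, categories, lam):
    """Discrete kernel estimate of a probability mass function.

    Aitchison-Aitken kernel: weight (1 - lam) on the observed category,
    lam / (c - 1) on each of the other c - 1 categories. lam = 0 recovers
    the raw empirical frequencies; larger lam smooths more."""
    c = len(categories)
    sample = np.asarray(sample)
    pmf = np.zeros(c)
    for j, cat in enumerate(categories):
        match = (sample == cat)
        pmf[j] = ((1 - lam) * match + (lam / (c - 1)) * ~match).mean()
    return pmf
```

Because each kernel sums to one over the categories, the estimate is itself a valid probability mass function for any admissible λ.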
Kernel Function Tuning for Single-Layer Neural Networks
Czech Academy of Sciences Publication Activity Database
Vidnerová, Petra; Neruda, Roman
Accepted 28.11.2017 (2018). ISSN 2278-0149. R&D Projects: GA ČR GA15-18108S. Institutional support: RVO:67985807. Keywords: single-layer neural networks; kernel methods; kernel function; optimisation. Subject RIV: IN - Informatics, Computer Science. http://www.ijmerr.com/
Geodesic exponential kernels: When Curvature and Linearity Conflict
DEFF Research Database (Denmark)
Feragen, Aase; Lauze, François; Hauberg, Søren
2015-01-01
manifold, the geodesic Gaussian kernel is only positive definite if the Riemannian manifold is Euclidean. This implies that any attempt to design geodesic Gaussian kernels on curved Riemannian manifolds is futile. However, we show that for spaces with conditionally negative definite distances the geodesic...
Denoising by semi-supervised kernel PCA preimaging
DEFF Research Database (Denmark)
Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai
2014-01-01
Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-imag...
Design and construction of palm kernel cracking and separation ...
African Journals Online (AJOL)
Design and construction of palm kernel cracking and separation machines. JO Nordiana, K ...
Kernel Methods for Machine Learning with Life Science Applications
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie
Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick straightforward extensions of classical linear algorithms are enabled as long as the data only appear a...
Genetic relationship between plant growth, shoot and kernel sizes in ...
African Journals Online (AJOL)
Maize (Zea mays L.) ear vascular tissue transports nutrients that contribute to grain yield. To assess kernel heritabilities that govern ear development and plant growth, field studies were conducted to determine the combining abilities of parents that differed for kernel-size, grain-filling rates and shoot-size. Thirty two hybrids ...
Boundary singularity of Poisson and harmonic Bergman kernels
Czech Academy of Sciences Publication Activity Database
Engliš, Miroslav
2015-01-01
Roč. 429, č. 1 (2015), s. 233-272 ISSN 0022-247X R&D Projects: GA AV ČR IAA100190802 Institutional support: RVO:67985840 Keywords : harmonic Bergman kernel * Poisson kernel * pseudodifferential boundary operators Subject RIV: BA - General Mathematics Impact factor: 1.014, year: 2015 http://www.sciencedirect.com/science/article/pii/S0022247X15003170
Oven-drying reduces ruminal starch degradation in maize kernels
Ali, M.; Cone, J.W.; Hendriks, W.H.; Struik, P.C.
2014-01-01
The degradation of starch largely determines the feeding value of maize (Zea mays L.) for dairy cows. Normally, maize kernels are dried and ground before chemical analysis and determining degradation characteristics, whereas cows eat and digest fresh material. Drying the moist maize kernels
Real time kernel performance monitoring with SystemTap
CERN. Geneva
2018-01-01
SystemTap is a dynamic method of monitoring and tracing the operation of a running Linux kernel. In this talk I will present a few practical use cases where SystemTap allowed me to turn otherwise complex userland monitoring tasks into simple kernel probes.
Resolvent kernel for the Kohn Laplacian on Heisenberg groups
Directory of Open Access Journals (Sweden)
Neur Eddine Askour
2002-07-01
Full Text Available We present a formula that relates the Kohn Laplacian on Heisenberg groups and the magnetic Laplacian. Then we obtain the resolvent kernel for the Kohn Laplacian and find its spectral density. We conclude by obtaining the Green kernel for fractional powers of the Kohn Laplacian.
Reproducing Kernels and Coherent States on Julia Sets
Energy Technology Data Exchange (ETDEWEB)
Thirulogasanthar, K., E-mail: santhar@cs.concordia.ca; Krzyzak, A. [Concordia University, Department of Computer Science and Software Engineering (Canada)], E-mail: krzyzak@cs.concordia.ca; Honnouvo, G. [Concordia University, Department of Mathematics and Statistics (Canada)], E-mail: g_honnouvo@yahoo.fr
2007-11-15
We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems.
Reproducing Kernels and Coherent States on Julia Sets
International Nuclear Information System (INIS)
Thirulogasanthar, K.; Krzyzak, A.; Honnouvo, G.
2007-01-01
We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems
A multi-scale kernel bundle for LDDMM
DEFF Research Database (Denmark)
Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard
2011-01-01
The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations...
Comparison of Kernel Equating and Item Response Theory Equating Methods
Meng, Yu
2012-01-01
The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…
An analysis of 1-D smoothed particle hydrodynamics kernels
International Nuclear Information System (INIS)
Fulk, D.A.; Quinn, D.W.
1996-01-01
In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measures of merit, demonstrating their general usefulness as well as the relative quality of the individual kernels. In general, bell-shaped kernels were found to perform better than other shapes. 12 refs., 16 figs., 7 tabs
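The bell-shaped kernels favored above can be illustrated concretely. Below is a minimal sketch of the 1-D cubic spline kernel (the standard Monaghan form); this particular kernel and the numerical normalization check are illustrative assumptions, not taken from the paper's set of 20:

```python
import numpy as np

def cubic_spline_kernel_1d(x, h):
    """Bell-shaped cubic spline kernel (Monaghan form) in one dimension."""
    q = np.abs(np.asarray(x, dtype=float)) / h
    sigma = 2.0 / (3.0 * h)  # 1-D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

h = 0.1
x = np.linspace(-2.0 * h, 2.0 * h, 4001)
dx = x[1] - x[0]
# an SPH kernel must integrate to unity over its compact support [-2h, 2h]
integral = float(np.sum(cubic_spline_kernel_1d(x, h)) * dx)
```

A unit integral (here checked numerically) is the basic consistency property any candidate kernel must satisfy before the accuracy measures discussed in the abstract even apply.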
Optimal Bandwidth Selection in Observed-Score Kernel Equating
Häggström, Jenny; Wiberg, Marie
2014-01-01
The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…
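To make the role of the bandwidth tangible, here is a minimal sketch of Gaussian-kernel continuization of two discrete score distributions and the resulting equating function. It is a simplification of the full kernel equating framework (it omits the mean/variance-preserving linear adjustment), and the score distributions are hypothetical:

```python
import numpy as np
from math import erf

_phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / np.sqrt(2.0))))  # standard normal CDF

def continuize(scores, probs, h, grid):
    """Gaussian-kernel continuization of a discrete score distribution's CDF.
    Simplified: omits the mean/variance-preserving adjustment of full KE."""
    grid = np.asarray(grid, dtype=float)
    cdf = np.zeros_like(grid)
    for xj, pj in zip(scores, probs):
        cdf += pj * _phi((grid - xj) / h)
    return cdf

scores = np.arange(6)                                   # possible test scores 0..5
p_x = np.array([0.05, 0.15, 0.30, 0.30, 0.15, 0.05])    # form X score probabilities
p_y = np.array([0.10, 0.20, 0.30, 0.25, 0.10, 0.05])    # form Y score probabilities
grid = np.linspace(-4.0, 9.0, 1301)
f_x = continuize(scores, p_x, 0.6, grid)                # bandwidth h = 0.6
f_y = continuize(scores, p_y, 0.6, grid)
equated = np.interp(f_x, f_y, grid)                     # e(x) = F_Y^{-1}(F_X(x))
```

Both double smoothing and the penalty method discussed in the abstract are criteria for choosing `h`; a small `h` keeps the continuized CDF close to the discrete steps, while a large `h` oversmooths, which directly shifts the equated scores.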
Computing an element in the lexicographic kernel of a game
Faigle, U.; Kern, Walter; Kuipers, Jeroen
The lexicographic kernel of a game lexicographically maximizes the surpluses $s_{ij}$ (rather than the excesses, as the nucleolus would). We show that an element in the lexicographic kernel can be computed efficiently, provided we can efficiently compute the surpluses $s_{ij}(x)$ corresponding to a
Computing an element in the lexicographic kernel of a game
Faigle, U.; Kern, Walter; Kuipers, J.
2002-01-01
The lexicographic kernel of a game lexicographically maximizes the surpluses $s_{ij}$ (rather than the excesses, as the nucleolus would). We show that an element in the lexicographic kernel can be computed efficiently, provided we can efficiently compute the surpluses $s_{ij}(x)$ corresponding to a
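The surplus $s_{ij}(x)$ that the result above hinges on can be computed by brute force for small games: it is the maximum excess $e(S,x)=v(S)-x(S)$ over coalitions containing $i$ but not $j$. The three-player majority game below is a hypothetical example, not from the paper:

```python
from itertools import combinations

def surplus(v, x, i, j, players):
    """s_ij(x): maximum excess e(S, x) = v(S) - x(S) over all coalitions S
    that contain player i but exclude player j."""
    others = [k for k in players if k not in (i, j)]
    best = float('-inf')
    for r in range(len(others) + 1):
        for rest in combinations(others, r):
            S = frozenset(rest) | {i}
            best = max(best, v(S) - sum(x[k] for k in S))
    return best

# toy 3-player majority game: any coalition of 2+ players earns 1
v = lambda S: 1.0 if len(S) >= 2 else 0.0
players = (0, 1, 2)
x = {0: 1.0 / 3.0, 1: 1.0 / 3.0, 2: 1.0 / 3.0}
s01 = surplus(v, x, 0, 1, players)   # attained by S = {0, 2}
```

The point of the paper is precisely that once such $s_{ij}(x)$ oracles are available (efficiently, rather than by this exponential enumeration), an element of the lexicographic kernel can be found efficiently.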
Covariant meson-baryon scattering with chiral and large Nc constraints
International Nuclear Information System (INIS)
Lutz, M.F.M.; Kolomeitsev, E.E.
2001-05-01
We give a review of recent progress on the application of the relativistic chiral SU(3) Lagrangian to meson-baryon scattering. It is shown that a combined chiral and 1/N_c expansion of the Bethe-Salpeter interaction kernel leads to a good description of the kaon-nucleon, antikaon-nucleon and pion-nucleon scattering data typically up to laboratory momenta of p_lab ≅ 500 MeV. We solve the covariant coupled channel Bethe-Salpeter equation with the interaction kernel truncated to chiral order Q^3, where we include only those terms which are leading in the large-N_c limit of QCD. (orig.)
3-D waveform tomography sensitivity kernels for anisotropic media
Djebbi, Ramzi
2014-01-01
The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to efficiently handle the expensive computational cost for 3-D anisotropic models. Ray tracing also provides the ray direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed a maximum sensitivity for diving waves, which makes those parameters a relevant choice in wave equation tomography. The δ parameter kernel showed zero sensitivity; therefore it can serve as a secondary parameter to fit the amplitude in the acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration-velocity-analysis-based kernels are introduced to fix the depth ambiguity with reflections and compute sensitivity maps in the deeper parts of the model.
Anatomically-aided PET reconstruction using the kernel method.
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi
2016-09-21
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally, the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
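The core idea above, representing the image as x = Kα with a kernel matrix K built from anatomical features and running ML-EM on the coefficients α, can be sketched in a toy 1-D setting. Everything below (the blur system matrix, the anatomical image, the Gaussian kernel on intensity differences) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

n_pix, n_det = 30, 30
idx = np.arange(n_pix)

# toy 1-D system matrix (Gaussian blur) and a two-level tracer distribution
A = np.exp(-0.5 * ((np.arange(n_det)[:, None] - idx[None, :]) / 1.5) ** 2)
A /= A.sum(axis=0, keepdims=True)
x_true = np.where((idx >= 10) & (idx < 20), 10.0, 1.0)
y = A @ x_true                                  # noise-free data, for clarity

# kernel matrix from a hypothetical anatomical image sharing the lesion
# boundary: Gaussian kernel on anatomical intensity differences, no segmentation
anat = np.where((idx >= 10) & (idx < 20), 1.0, 0.0)
K = np.exp(-((anat[:, None] - anat[None, :]) ** 2) / 0.5)
K /= K.sum(axis=1, keepdims=True)               # row-normalized

def kl_fit(x):
    """Poisson-likelihood (KL) distance between data y and forward projection."""
    yhat = A @ x
    return float(np.sum(y * np.log(y / yhat) + yhat - y))

# kernelized ML-EM: image is x = K @ alpha, the EM update acts on alpha
alpha = np.ones(n_pix)
kl_before = kl_fit(K @ alpha)
sens = K.T @ (A.T @ np.ones(n_det))
for _ in range(200):
    x = K @ alpha
    alpha *= (K.T @ (A.T @ (y / (A @ x)))) / sens
x_hat = K @ alpha
kl_after = kl_fit(x_hat)
```

Because this is ordinary ML-EM applied to the composite system AK, the data fit improves monotonically and the estimate stays nonnegative, while the kernel regularizes the image through the anatomical similarity structure.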
Open Problem: Kernel methods on manifolds and metric spaces
DEFF Research Database (Denmark)
Feragen, Aasa; Hauberg, Søren
2016-01-01
Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong linear properties. This negative result hints that radial kernels are perhaps not suitable over geodesic metric spaces after all. Here, however, we present evidence that large intervals of bandwidths exist where geodesic exponential kernels have high probability of being positive definite over finite datasets, while still having significant predictive power. From this we formulate conjectures on the probability of a positive definite kernel matrix for a finite random sample, depending on the geometry of the data space and the spread of the sample.
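The empirical question raised above, whether a geodesic exponential kernel yields a positive definite Gram matrix on a finite sample, can be probed directly. A minimal sketch on the simplest curved space, the circle with arc-length distance (the manifold, sample size, and bandwidth are illustrative choices):

```python
import numpy as np

def geodesic_gaussian_gram(thetas, sigma):
    """Gram matrix of the geodesic exponential (Gaussian) kernel on the
    circle S^1, using arc length as the geodesic metric."""
    t = np.asarray(thetas, dtype=float)
    diff = np.abs(t[:, None] - t[None, :])
    d = np.minimum(diff, 2.0 * np.pi - diff)     # geodesic distance on the circle
    return np.exp(-d**2 / (2.0 * sigma**2))

rng = np.random.default_rng(0)
thetas = rng.uniform(0.0, 2.0 * np.pi, 40)
K = geodesic_gaussian_gram(thetas, sigma=0.5)
min_eig = float(np.linalg.eigvalsh(K).min())     # sign probes positive definiteness
```

Sweeping `sigma` and recording the sign of `min_eig` over many random samples is exactly the kind of experiment that would give evidence for or against the conjectured bandwidth intervals.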
Compactly Supported Basis Functions as Support Vector Kernels for Classification.
Wittek, Peter; Tan, Chew Lim
2011-10-01
Wavelet kernels have been introduced for both support vector regression and classification. Most of these wavelet kernels do not use the inner product of the embedding space, but use wavelets in a similar fashion to radial basis function kernels. Wavelet analysis is typically carried out on data with a temporal or spatial relation between consecutive data points. We argue that it is possible to order the features of a general data set so that consecutive features are statistically related to each other, thus enabling us to interpret the vector representation of an object as a series of equally or randomly spaced observations of a hypothetical continuous signal. By approximating the signal with compactly supported basis functions and employing the inner product of the embedding L2 space, we gain a new family of wavelet kernels. Empirical results show a clear advantage in favor of these kernels.
Church, Cody; Mawko, George; Archambault, John Paul; Lewandowski, Robert; Liu, David; Kehoe, Sharon; Boyd, Daniel; Abraham, Robert; Syme, Alasdair
2018-02-01
Radiopaque microspheres may provide intraprocedural and postprocedural feedback during transarterial radioembolization (TARE). Furthermore, the potential to use higher resolution x-ray imaging techniques as opposed to nuclear medicine imaging suggests that significant improvements in the accuracy and precision of radiation dosimetry calculations could be realized for this type of therapy. This study investigates the absorbed dose kernel for novel radiopaque microspheres, including contributions of both short- and long-lived contaminant radionuclides, while concurrently quantifying the self-shielding of the glass network. Monte Carlo simulations using EGSnrc were performed to determine the dose kernels for all monoenergetic electron emissions and all beta spectra for radionuclides reported in a neutron activation study of the microspheres. Simulations were benchmarked against an accepted 90Y dose point kernel. Self-shielding was quantified for the microspheres by simulating an isotropically emitting, uniformly distributed source, in glass and in water. The ratio of the absorbed doses was scored as a function of distance from a microsphere. The absorbed dose kernel for the microspheres was calculated for (a) two bead formulations following (b) two different durations of neutron activation, at (c) various time points following activation. Self-shielding varies with time post-removal from the reactor. At early time points, it is less pronounced due to the higher energies of the emissions. It is on the order of 0.4-2.8% at a radial distance of 5.43 mm as the microsphere diameter increases from 10 to 50 μm, during the time that the microspheres would be administered to a patient. At long time points, self-shielding is more pronounced and can reach values in excess of 20% near the end of the range of the emissions. Absorbed dose kernels for 90Y, 90mY, 85mSr, 85Sr, 87mSr, 89Sr, 70Ga, 72Ga, and 31Si are presented and used to determine an overall kernel for the
Improved modeling of clinical data with kernel methods.
Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart
2012-02-01
Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems.
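The clinical kernel described above can be sketched directly from its definition: each variable contributes a similarity in [0, 1] (range-normalized for continuous/ordinal variables, exact match for nominal ones), and the kernel is their average. The patient variables and ranges below are hypothetical:

```python
import numpy as np

def clinical_kernel(a, b, kinds, ranges):
    """Clinical kernel: the average of per-variable similarities, each in [0, 1].
    A continuous/ordinal variable with observed range r contributes
    (r - |a - b|)/r; a nominal variable contributes 1 if equal, else 0."""
    sims = []
    for av, bv, kind, r in zip(a, b, kinds, ranges):
        if kind == 'nominal':
            sims.append(1.0 if av == bv else 0.0)
        else:
            sims.append((r - abs(av - bv)) / r)
    return float(np.mean(sims))

# hypothetical patients: (age, parity, menopausal status)
kinds = ('continuous', 'ordinal', 'nominal')
ranges = (60.0, 10.0, None)     # observed ranges; the nominal variable needs none
k_same = clinical_kernel((45, 2, 'post'), (45, 2, 'post'), kinds, ranges)
k_diff = clinical_kernel((25, 0, 'pre'), (55, 5, 'post'), kinds, ranges)
```

Dividing by each variable's range is what equalizes the influence of, say, age in years and parity counts, which a plain linear kernel would weight by raw magnitude.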
Large Deformation Constitutive Laws for Isotropic Thermoelastic Materials
Energy Technology Data Exchange (ETDEWEB)
Plohr, Bradley J. [Los Alamos National Laboratory; Plohr, Jeeyeon N. [Los Alamos National Laboratory
2012-07-25
We examine the approximations made in using Hooke's law as a constitutive relation for an isotropic thermoelastic material subjected to large deformation by calculating the stress evolution equation from the free energy. For a general thermoelastic material, we employ the volume-preserving part of the deformation gradient to facilitate volumetric/shear strain decompositions of the free energy, its first derivatives (the Cauchy stress and entropy), and its second derivatives (the specific heat, Grueneisen tensor, and elasticity tensor). Specializing to isotropic materials, we calculate these constitutive quantities more explicitly. For deformations with limited shear strain, but possibly large changes in volume, we show that the differential equations for the stress components involve new terms in addition to the traditional Hooke's law terms. These new terms are of the same order in the shear strain as the objective derivative terms needed for frame indifference; unless the latter terms are negligible, the former cannot be neglected. We also demonstrate that accounting for the new terms requires that the deformation gradient be included as a field variable
Isotropic extensions of the vacuum solutions in general relativity
Energy Technology Data Exchange (ETDEWEB)
Molina, C. [Universidade de Sao Paulo (USP), SP (Brazil); Martin-Moruno, Prado [Victoria University of Wellington (New Zealand); Gonzalez-Diaz, Pedro F. [Consejo Superior de Investigaciones Cientificas, Madrid (Spain)
2012-07-01
Spacetimes described by spherically symmetric solutions of Einstein's equations are of paramount importance both in astrophysical applications and theoretical considerations. Among these, black holes are highlighted. In vacuum, Birkhoff's theorem and its generalizations to non-asymptotically flat cases uniquely fix the metric as the Schwarzschild, Schwarzschild-de Sitter or Schwarzschild-anti-de Sitter geometries, the vacuum solutions of the usual general relativity with zero, positive or negative values for the cosmological constant, respectively. In this work we are mainly interested in black holes in a cosmological environment. Of the two main assumptions of the cosmological principle, homogeneity is lost when compact objects are considered. Nevertheless isotropy is still possible, and we enforce this condition. Within this context, we investigate spatially isotropic solutions close - continuously deformable - to the usual vacuum solutions. We obtain isotropic extensions of the usual spherically symmetric vacuum geometries in general relativity. Exact and perturbative solutions are derived. Maximal extensions are constructed and their causal structures are discussed. The classes of geometries obtained include black holes in compact and non-compact universes, wormholes in the interior region of cosmological horizons, and anti-de Sitter geometries with excess/deficit solid angle. The tools developed here are applicable in more general contexts, with extensions subjected to other constraints. (author)
ISOTROPIC LUMINOSITY INDICATORS IN A COMPLETE AGN SAMPLE
International Nuclear Information System (INIS)
Diamond-Stanic, Aleksandar M.; Rieke, George H.; Rigby, Jane R.
2009-01-01
The [O IV] λ25.89 μm line has been shown to be an accurate indicator of active galactic nucleus (AGN) intrinsic luminosity in that it correlates well with hard (10-200 keV) X-ray emission. We present measurements of [O IV] for 89 Seyfert galaxies from the unbiased revised Shapley-Ames (RSA) sample. The [O IV] luminosity distributions of obscured and unobscured Seyferts are indistinguishable, indicating that their intrinsic AGN luminosities are quite similar and that the RSA sample is well suited for tests of the unified model. In addition, we analyze several commonly used proxies for AGN luminosity, including [O III] λ5007 A, 6 cm radio, and 2-10 keV X-ray emission. We find that the radio luminosity distributions of obscured and unobscured AGNs show no significant difference, indicating that radio luminosity is a useful isotropic luminosity indicator. However, the observed [O III] and 2-10 keV luminosities are systematically smaller for obscured Seyferts, indicating that they are not emitted isotropically.
Asinari, Pietro
2010-10-01
The homogeneous isotropic Boltzmann equation (HIBE) is a fundamental dynamic model for many applications in thermodynamics, econophysics and sociodynamics. Despite recent hardware improvements, the solution of the Boltzmann equation remains extremely challenging from the computational point of view, in particular by deterministic methods (free of stochastic noise). This work aims to improve a deterministic direct method recently proposed [V.V. Aristov, Kluwer Academic Publishers, 2001] for solving the HIBE with a generic collisional kernel and, in particular, for taking care of the late dynamics of the relaxation towards the equilibrium. Essentially (a) the original problem is reformulated in terms of particle kinetic energy (exact particle number and energy conservation during microscopic collisions) and (b) the computation of the relaxation rates is improved by the DVM-like correction, where DVM stands for Discrete Velocity Model (ensuring that the macroscopic conservation laws are exactly satisfied). Both these corrections make it possible to derive very accurate reference solutions for this test case. Moreover this work aims to distribute an open-source program (called HOMISBOLTZ), which can be redistributed and/or modified for dealing with different applications, under the terms of the GNU General Public License. The program has been purposely designed in order to be minimal, not only with regard to the reduced number of lines (less than 1000), but also with regard to the coding style (as simple as possible). Program summary: Program title: HOMISBOLTZ. Catalogue identifier: AEGN_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGN_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License. No. of lines in distributed program, including test data, etc.: 23 340. No. of bytes in distributed program, including test data, etc.: 7 635 236. Distribution format: tar
A method for manufacturing kernels of metallic oxides and the thus obtained kernels
International Nuclear Information System (INIS)
Lelievre Bernard; Feugier, Andre.
1973-01-01
A method is described for manufacturing fissile or fertile metal oxide kernels, consisting in adding at least one chemical compound capable of releasing ammonia to an aqueous solution of actinide nitrates, dispersing the thus obtained solution dropwise in a hot organic phase so as to gelify the drops and transform them into solid particles, then washing, drying and treating said particles so as to transform them into oxide kernels. Such a method is characterized in that the organic phase used in the gel-forming reactions comprises a mixture of two organic liquids, one of which acts as a solvent, whereas the other is a product capable of extracting the metal-salt anions from the drops while the gel-forming reaction is taking place. This can be applied to the so-called high temperature nuclear reactors [fr
Energy Technology Data Exchange (ETDEWEB)
Kotegawa, Hiroshi; Tanaka, Shun-ichi; Sakamoto, Yukio; Nakane, Yoshihiro; Nakashima, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1994-08-01
A comprehensive set of attenuation data of dose equivalent for point isotropic monoenergetic neutron sources up to 400 MeV in infinite shields of water, ordinary concrete and iron has been calculated using the ANISN-JR code and a neutron-photon multigroup macroscopic cross section set, HIL086R. The attenuation factors were fitted to a 4th-order polynomial exponent formula, making them easy to use in point kernel codes. Additional data in finite shielding geometry were also calculated to correct for the infinite-medium effect, giving a maximum correction of 0.23 at distances beyond 400 cm from a 400 MeV neutron source in an iron shield. Effective attenuation lengths for monoenergetic neutrons have been studied in detail. Subsequently, it was shown that the attenuation length is strongly dependent upon the penetration depth, and that the Moyer formula using a single attenuation length introduces large errors into dose estimates behind thick shields for intermediate-energy neutrons up to 400 MeV. Furthermore, it was demonstrated that there is a difference of more than 50% in the attenuation length of iron between calculations with HIL086R and HIL086 because of the self-shielding effect. (author).
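The 4th-order polynomial exponent fit mentioned above can be sketched as follows. The attenuation data here are synthetic stand-ins (not the ANISN-JR/HIL086R results), constructed only to mimic a depth-dependent effective attenuation length:

```python
import numpy as np

# hypothetical smooth attenuation data: a depth-dependent effective
# attenuation length mimics the spectrum hardening noted in the abstract
r = np.linspace(0.0, 400.0, 81)              # shield depth [cm]
lam = 30.0 + 0.02 * r                        # effective attenuation length [cm]
atten = np.exp(-r / lam)                     # dose-equivalent attenuation factor

# point-kernel style fit: ln(attenuation) as a 4th-order polynomial in
# (scaled) depth; scaling keeps the Vandermonde system well conditioned
u = r / r.max()
coeffs = np.polyfit(u, np.log(atten), 4)
fit = np.exp(np.polyval(coeffs, u))
max_rel_err = float(np.max(np.abs(fit - atten) / atten))
```

Because the exponent is a smooth function of depth, a low-order polynomial reproduces it closely; this is precisely why a single-attenuation-length (Moyer-type) formula, which forces the exponent to be linear in depth, fails behind thick shields.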
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.
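The scattering-model-based nonlinear inversion described above can be caricatured with a toy forward model: grid-search the kernel parameters (animal length and tilt) and solve for abundance linearly at each grid node. The cross-section model, frequencies, and parameter values below are all illustrative assumptions, not the paper's DWBA-class models:

```python
import numpy as np

def sigma_bs(freq_khz, length_mm, tilt_deg):
    """Toy backscattering cross-section model; a stand-in for the sophisticated
    scattering models referenced in the abstract."""
    ka = freq_khz * length_mm / 500.0                 # dimensionless size-frequency product
    directivity = np.cos(np.radians(tilt_deg)) ** 2 + 0.1
    return 1e-9 * length_mm ** 2 * ka ** 4 / (1.0 + ka ** 4) * directivity

freqs = np.array([38.0, 70.0, 120.0, 200.0])          # echosounder frequencies [kHz]
sv_meas = 50.0 * sigma_bs(freqs, 20.0, 10.0)          # synthetic "measured" backscatter

# nonlinear inversion: grid-search the kernel parameters, abundance by least squares
best = None
for length in np.linspace(5.0, 40.0, 71):
    for tilt in np.linspace(0.0, 45.0, 46):
        s = sigma_bs(freqs, length, tilt)
        n_hat = float(s @ sv_meas) / float(s @ s)     # least-squares abundance
        resid = float(np.sum((sv_meas - n_hat * s) ** 2))
        if best is None or resid < best[0]:
            best = (resid, n_hat, length, tilt)
```

In this toy model the multi-frequency spectral shape pins down the length, but tilt only rescales all frequencies, so tilt and abundance trade off exactly; that degeneracy is a simple instance of the kernel uncertainty the abstract argues a nonlinear, model-based inversion must confront.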
Learning molecular energies using localized graph kernels
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-01
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
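The random walk kernel on adjacency matrices that GRAPE builds on can be illustrated with the classic direct-product formulation; the geometric-series closed form below is a standard construction and an assumption here, not necessarily the exact variant used by GRAPE:

```python
import numpy as np

def random_walk_kernel(A1, A2, lam):
    """Geometric random-walk graph kernel: k = 1^T (I - lam*W)^{-1} 1, where
    W = A1 (x) A2 is the adjacency matrix of the direct-product graph.
    Requires lam * rho(W) < 1 for the walk series to converge."""
    W = np.kron(A1, A2)
    n = W.shape[0]
    ones = np.ones(n)
    return float(ones @ np.linalg.solve(np.eye(n) - lam * W, ones))

A_tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)   # triangle
A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 2-edge path
k11 = random_walk_kernel(A_tri, A_tri, 0.05)
k12 = random_walk_kernel(A_tri, A_path, 0.05)
```

The kernel counts matched walks of all lengths in the two graphs, discounted geometrically, and is invariant to relabeling the vertices, which is how the graph representation delivers the permutation symmetry the abstract emphasizes.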
Olmsted, Peter D.; Goldbart, Paul M.
1992-10-01
Macroscopic fluid motion can have dramatic consequences near the isotropic-nematic transition in fluids of nematogens. We explore some of these consequences using both deterministic and stochastic descriptions involving coupled hydrodynamic equations of motion for the nematic order parameter and fluid velocity fields. By analyzing the deterministic equations of motion we identify the locally stable states of homogeneous nematic order and strain rate, thus determining the homogeneous nonequilibrium steady states which the fluid may adopt. By examining inhomogeneous steady states we construct the analog of a first-order phase boundary, i.e., a line in the nonequilibrium phase diagram spanned by temperature and applied stress, at which nonequilibrium states may coexist, and which terminates in a nonequilibrium analog of a critical point. From an analysis of the nematic order-parameter discontinuity across the coexistence line, along with properties of the interface between homogeneous states, we extract the analog of classical equilibrium critical behavior near the nonequilibrium critical point. We develop a theory of fluctuations about biaxial nonequilibrium steady states by augmenting the deterministic description with noise terms, to simulate the effect of thermal fluctuations. We use this description to discuss the scattering of polarized light by order-parameter fluctuations near the nonequilibrium critical point and also in weak shear flow near the equilibrium phase transition. We find that fluids of nematogens near an appropriate temperature and strain rate exhibit the analog of critical opalescence, the intensity of which is sensitive to the polarizations of the incident and scattered light, and to the precise form of the critical mode.
International Nuclear Information System (INIS)
Moll, J; Schulte, R T; Fritzen, C-P; Rezk-Salama, C; Klinkert, T; Kolb, A
2011-01-01
Structural health monitoring systems allow a continuous surveillance of the structural integrity of operational systems. As a result, it is possible to reduce time and costs for maintenance without decreasing the level of safety. In this paper, an integrated simulation and visualization environment is presented that enables a detailed study of Lamb wave propagation in isotropic and anisotropic materials. Thus, valuable information about the nature of Lamb wave propagation and its interaction with structural defects becomes available. The well-known spectral finite element method is implemented to enable a time-efficient calculation of the wave propagation problem. The results are displayed in an interactive visualization framework accounting for the human perception that is much more sensitive to motion than to changes in color. In addition, measurements have been conducted experimentally to record the full out-of-plane wave-field using a Laser-Doppler vibrometry setup. An aluminum structure with two synthetic cuts has been investigated, where the elongated defects have a different orientation with respect to the piezoelectric actuator. The resulting wave-field is also displayed interactively showing that the scattered wave-field at the defect is highly directional.
Evolution of hadron beams under intrabeam scattering
International Nuclear Information System (INIS)
Wei, Jie.
1993-01-01
Based on assumptions applicable to many circular accelerators, we simplify into analytical form the growth rates of a hadron beam under Coulomb intrabeam scattering (IBS). Because of the dispersion that correlates the horizontal closed orbit to the momentum, the scaling behavior of the growth rates is drastically different at energies below and above the transition energy. At high energies the rates are approximately independent of the energy. Asymptotically, the horizontal and longitudinal beam amplitudes are linearly related by the average dispersion. At low energies, the beam evolves such that the velocity distribution in the rest frame becomes isotropic in all directions.
Stochastic subset selection for learning with kernel machines.
Rhinelander, Jason; Liu, Xiaoping P
2012-06-01
Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales superlinearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.
Multiple kernel boosting framework based on information measure for classification
International Nuclear Information System (INIS)
Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun
2016-01-01
The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results than single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper the Kullback-Leibler kernel function is derived to develop SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies Adaboost to learn a multiple kernel-based classifier. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optimum Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has a stable behavior and a noticeable accuracy for different data sets.
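A Kullback-Leibler kernel of the kind derived above can be sketched as the exponential of the negative symmetrized KL divergence between two discrete distributions. The form and the bandwidth-like parameter `a` below are a common construction and an assumption, not necessarily the paper's exact derivation:

```python
import numpy as np

def kl_kernel(p, q, a=1.0, eps=1e-12):
    """Symmetrized Kullback-Leibler kernel between discrete distributions:
    k(p, q) = exp(-a * (KL(p||q) + KL(q||p))).  eps guards against zeros."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    skl = float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
    return float(np.exp(-a * skl))

p = [0.7, 0.2, 0.1]
q = [0.1, 0.2, 0.7]
```

Symmetrizing the divergence makes the kernel symmetric, and the exponential maps dissimilarity into a (0, 1] similarity, which is what lets it drop into an SVM or into a boosted multiple-kernel ensemble like KLMKB as one sub-kernel among several.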
Per-Sample Multiple Kernel Approach for Visual Concept Learning
Directory of Open Access Journals (Sweden)
Ling-Yu Duan
2010-01-01
Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach that takes intraclass diversity into account to improve discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples; kernel weights and kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmark datasets of different characteristics: Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL approach.
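The sample-wise weighting idea can be sketched as a row-wise kernel mixture (a hypothetical illustration; the weight layout and function names are our own, and PS-MKL's actual joint learning of weights and classifiers is not shown):

```python
import numpy as np

def rbf(X, Z, gamma):
    # Pairwise Gaussian (RBF) kernel matrix
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def per_sample_combined_kernel(X, Z, gammas, weights):
    """Row-wise kernel mixture: weights[i, m] is the weight of base
    kernel m for training sample i, so each sample uses its own
    kernel combination rather than one uniform mixture."""
    Ks = [rbf(X, Z, g) for g in gammas]   # base kernel matrices
    K = np.zeros_like(Ks[0])
    for m, Km in enumerate(Ks):
        K += weights[:, m:m + 1] * Km     # broadcast over columns
    return K
```

If every row places weight 1 on the same base kernel, the combination collapses to canonical single-kernel learning, which is the degenerate case the abstract argues against.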
Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.
Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong
2014-01-01
Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL) by alternating optimization between the standard SVM solvers, with the local combination of base kernels, and the sample-specific kernel weights. The advantage of alternating optimization, developed from state-of-the-art MKL, is the SVM-tied overall complexity and the simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL the sample-specific character makes the updating of kernel weights a difficult nonconvex quadratic problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be obtained independently by solving their exclusive sample-wise objectives, either by linear programming (for the l1-norm) or in closed form (for the lp-norm). At test time, the kernel weights learnt for the training data are deployed based on the nearest-neighbor rule. Hence, to ensure that these weights generalize to the test data, we introduce neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
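The closed-form lp-norm update mentioned in the abstract follows the standard lp-norm MKL result, eta_m proportional to ||w_m||^(2/(p+1)) normalized so that ||eta||_p = 1; a sketch (the function name is ours, and applying it to one sample's weight vector at a time is our reading of the sample-wise objectives):

```python
import numpy as np

def lp_norm_kernel_weights(wnorms, p):
    """Closed-form lp-norm MKL weight update: given ||w_m|| for each
    base kernel m, return weights eta with ||eta||_p = 1."""
    wnorms = np.asarray(wnorms, dtype=float)
    num = wnorms ** (2.0 / (p + 1.0))
    den = (wnorms ** (2.0 * p / (p + 1.0))).sum() ** (1.0 / p)
    return num / den
```

Larger per-kernel norms get larger weights, and the lp-norm constraint keeps the weight vector on the unit lp-sphere, which is what makes the per-sample update cheap enough to run independently for every sample.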
Deep Restricted Kernel Machines Using Conjugate Feature Duality.
Suykens, Johan A K
2017-08-01
The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels: a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.
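The conjugate feature duality mentioned above can be sketched for a single squared-error term (a schematic rendering under our reading of the abstract, not the paper's full derivation): for a quadratic, Fenchel duality gives

```latex
\frac{e^{2}}{2\lambda} \;=\; \max_{h}\Big( e\,h - \frac{\lambda}{2}\,h^{2} \Big),
\qquad \text{with maximizer } h = \frac{e}{\lambda},
```

so each least-squares error term acquires a paired hidden feature $h$, yielding the visible/hidden representation analogous to a restricted Boltzmann machine energy.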
Training Lp norm multiple kernel learning in the primal.
Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei
2013-10-01
Some multiple kernel learning (MKL) models are usually solved by the alternating optimization method, in which one alternately solves SVMs in the dual and updates kernel weights. Since dual and primal optimization achieve the same aim, it is worth exploring how to perform Lp-norm MKL in the primal. In this paper, we propose an Lp-norm multiple kernel learning algorithm in the primal, where we resort to the alternating optimization method: one cycle solves SVMs in the primal using the preconditioned conjugate gradient method, and the other cycle learns the kernel weights. It is interesting to note that the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited to the manifold regularization framework in the primal, since solving LapSVMs in the primal is much more effective than solving them in the dual. In addition, we carry out a theoretical analysis of multiple kernel learning in the primal in terms of the empirical Rademacher complexity, and find that optimizing the empirical Rademacher complexity yields a particular choice of kernel weights. Experiments on several datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method. Copyright © 2013 Elsevier Ltd. All rights reserved.
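The alternating scheme can be sketched in a few lines (a hypothetical illustration: kernel ridge regression stands in for the paper's primal SVM/LapSVM solve and its preconditioned conjugate gradient step, and the closed-form weight update is the standard lp-norm MKL one):

```python
import numpy as np

def primal_mkl_alternate(Ks, y, p=2.0, lam=1.0, iters=10):
    """Alternate between (a) solving a regularized fit for fixed
    kernel weights and (b) updating the weights in closed form."""
    M, n = len(Ks), len(y)
    eta = np.full(M, M ** (-1.0 / p))       # feasible start, ||eta||_p = 1
    for _ in range(iters):
        K = sum(e * Km for e, Km in zip(eta, Ks))
        # stand-in primal solve (kernel ridge regression)
        alpha = np.linalg.solve(K + lam * np.eye(n), y)
        # ||w_m||^2 = eta_m^2 * alpha^T K_m alpha, then closed-form update
        wn = np.array([e * np.sqrt(max(alpha @ Km @ alpha, 1e-12))
                       for e, Km in zip(eta, Ks)])
        eta = wn ** (2.0 / (p + 1.0))
        eta = eta / ((eta ** p).sum()) ** (1.0 / p)   # renormalize
    return eta, alpha
```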
International Nuclear Information System (INIS)
Leader, Elliot
1991-01-01
With very few unexplained results to challenge conventional ideas, physicists have to look hard for gaps in understanding. An area of physics which offers a lot more than meets the eye is elastic and diffractive scattering, where particles either 'bounce' off each other, emerging unscathed, or just graze past, emerging relatively unscathed. The 'Blois' workshops provide a regular focus for this unspectacular but compelling physics, attracting highly motivated devotees
International Nuclear Information System (INIS)
1991-02-01
This annual report gives an overview of the research carried out in the Laboratory for Neutron Scattering (LNS) of the ETH Zuerich in 1990. Neutron scattering makes it possible to examine in detail the static and dynamic properties of condensed matter. In accordance with the multidisciplinary character of the method, the LNS has for years maintained intensive cooperation with numerous institutes in the areas of biology, chemistry, solid-state physics, crystallography and materials research. In 1990 over 100 scientists from more than 40 research groups, both at home and abroad, took part in the experiments. The number of doctoral students was again gratifying: during their stay at the LNS they were introduced to neutron scattering and were thus able to address central questions of their dissertations using this modern experimental method of solid-state research. In addition to the numerous and interesting structure-determination studies, the scientific programme now increasingly includes topical investigations connected with high-temperature superconductors and materials research
Friedrich, Harald
2016-01-01
This corrected and updated second edition of "Scattering Theory" presents a concise and modern coverage of the subject. In the present treatment, special attention is given to the role played by the long-range behaviour of the projectile-target interaction, and a theory is developed, which is well suited to describe near-threshold bound and continuum states in realistic binary systems such as diatomic molecules or molecular ions. It is motivated by the fact that experimental advances have shifted and broadened the scope of applications where concepts from scattering theory are used, e.g. to the field of ultracold atoms and molecules, which has been experiencing enormous growth in recent years, largely triggered by the successful realization of Bose-Einstein condensates of dilute atomic gases in 1995. The book contains sections on special topics such as near-threshold quantization, quantum reflection, Feshbach resonances and the quantum description of scattering in two dimensions. The level of abstraction is k...
Scaling limit of deeply virtual Compton scattering
Energy Technology Data Exchange (ETDEWEB)
A. Radyushkin
2000-07-01
The author outlines a perturbative QCD approach to the analysis of the deeply virtual Compton scattering process γ*p → γp′ in the limit of vanishing momentum transfer t = (p′ − p)². The DVCS amplitude in this limit exhibits a scaling behavior described by two-argument distributions F(x,y), which specify the fractions of the initial momentum p and of the momentum transfer r ≡ p′ − p carried by the constituents of the nucleon. The kernel R(x,y;ξ,η) governing the evolution of the non-forward distributions F(x,y) has a remarkable property: it produces the GLAPD evolution kernel P(x/ξ) when integrated over y and reduces to the Brodsky-Lepage evolution kernel V(y,η) after the x-integration. This property is used to construct the solution of the one-loop evolution equation for the flavor non-singlet part of the non-forward quark distribution.
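The reduction property described in the abstract can be written schematically (integration limits and normalization conventions as in the paper):

```latex
\int R(x,y;\xi,\eta)\,dy \;=\; P(x/\xi),
\qquad
\int R(x,y;\xi,\eta)\,dx \;=\; V(y,\eta),
```

so the non-forward kernel interpolates between the GLAPD and Brodsky-Lepage limits.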
A new treatment of nonlocality in scattering process
Upadhyay, N. J.; Bhagwat, A.; Jain, B. K.
2018-01-01
Nonlocality in the scattering potential leads to an integro-differential equation, in which nonlocality enters through an integral over the nonlocal potential kernel. The resulting Schrödinger equation is usually handled by approximating the (r, r′) dependence of the nonlocal kernel. The present work proposes a novel method to solve the integro-differential equation. The method, using the mean value theorem of integral calculus, converts the nonhomogeneous term to a homogeneous term. The effective local potential in this equation turns out to be energy independent, but depends on the relative angular momentum. This method is accurate and valid for any form of nonlocality. As illustrative examples, the total and differential cross sections for neutron scattering off 12C, 56Fe and 100Mo nuclei are calculated with this method in the low-energy region (up to 10 MeV) and are found to be in reasonable accord with experiment.
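The key step can be sketched schematically (our notation; the detailed construction follows the paper): by the mean value theorem of integral calculus there exists an $\bar r$ in the range of integration such that

```latex
\int K_{\ell}(r,r')\,u_{\ell}(r')\,dr'
\;=\;
u_{\ell}(\bar r)\int K_{\ell}(r,r')\,dr' ,
```

which converts the nonhomogeneous integral term into a multiplicative (local) one; the effective potential obtained this way is energy independent but carries the relative angular-momentum ($\ell$) dependence noted in the abstract.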
Directory of Open Access Journals (Sweden)
Yoonhee Lee
2016-06-01
As a type of accident-tolerant fuel, fully ceramic microencapsulated (FCM) fuel was proposed after the Fukushima accident in Japan. The FCM fuel consists of tristructural isotropic particles randomly dispersed in a silicon carbide (SiC) matrix. For a fuel element with such high heterogeneity, we have proposed a two-temperature homogenized model using the particle transport Monte Carlo method for the heat conduction problem. This model distinguishes between fuel-kernel and SiC matrix temperatures. Moreover, the obtained temperature profiles are more realistic than those of other models. In Part I of the paper, homogenized parameters for the FCM fuel, in which tristructural isotropic particles are randomly dispersed in the fine lattice stochastic structure, are obtained by (1) matching steady-state analytic solutions of the model with the results of the particle transport Monte Carlo method for heat conduction problems, and (2) preserving total enthalpies in fuel kernels and the SiC matrix. The homogenized parameters have two desirable properties: (1) they are insensitive to boundary conditions such as coolant bulk temperatures and thickness of cladding, and (2) they are independent of operating power density. By performing the Monte Carlo calculations with the temperature-dependent thermal properties of the constituent materials of the FCM fuel, temperature-dependent homogenized parameters are obtained.
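The two-temperature idea, distinct fuel-kernel and matrix temperature fields coupled by an exchange term, can be illustrated with a 1-D steady-state finite-difference sketch (all parameter values and the volumetric-exchange form are our own hypothetical choices, not the paper's homogenized parameters):

```python
import numpy as np

def two_temperature_1d(n=50, L=0.01, kf=4.0, km=20.0, h=1e6, q=1e8, Tb=600.0):
    """1-D steady two-temperature sketch: kernel field Tf and matrix
    field Tm coupled by exchange coefficient h [W/m^3-K]; heat q is
    deposited in the kernels and removed at the matrix boundary Tb:
        -kf Tf'' + h (Tf - Tm) = q
        -km Tm'' - h (Tf - Tm) = 0
    """
    dx = L / (n + 1)
    # second-difference operator; Dirichlet ends absorbed by solving
    # for deviations from the boundary temperature Tb
    D = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2
    I = np.eye(n)
    A = np.block([[-kf * D + h * I, -h * I],
                  [-h * I, -km * D + h * I]])
    b = np.concatenate([np.full(n, q), np.zeros(n)])
    u = np.linalg.solve(A, b)
    Tf, Tm = u[:n] + Tb, u[n:] + Tb
    return Tf, Tm
```

Heat deposited in the kernels flows through the exchange term into the matrix and out at the boundary, so the kernel temperature field sits above the matrix field, which is the distinction the homogenized model is built to preserve.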
International Nuclear Information System (INIS)
Lee, Yoon Hee; Cho, Bum Hee; Cho, Nam Zin
2016-01-01
In Part I of this paper, the two-temperature homogenized model for the fully ceramic microencapsulated fuel, in which tristructural isotropic particles are randomly dispersed in a fine lattice stochastic structure, was discussed. In this model, the fuel-kernel and silicon carbide matrix temperatures are distinguished. Moreover, the obtained temperature profiles are more realistic than those obtained using other models. Using the temperature-dependent thermal conductivities of uranium nitride and the silicon carbide matrix, temperature-dependent homogenized parameters were obtained. In Part II of the paper, coupled with the COREDAX code, a reactor core loaded by fully ceramic microencapsulated fuel in which tristructural isotropic particles are randomly dispersed in the fine lattice stochastic structure is analyzed via a two-temperature homogenized model at steady and transient states. The results are compared with those from harmonic- and volumetric-average thermal conductivity models; i.e., we compare keff eigenvalues, power distributions, and temperature profiles in the hottest single channel at a steady state. At transient states, we compare total power, average energy deposition, and maximum temperatures in the hottest single channel obtained by the different thermal analysis models. The different thermal analysis models and the availability of fuel-kernel temperatures in the two-temperature homogenized model for Doppler temperature feedback lead to significant differences.