WorldWideScience

Sample records for slowing-down kernels

  1. Pulsar slow-down epochs

    International Nuclear Information System (INIS)

    Heintzmann, H.; Novello, M.

    1981-01-01

    The relative importance of magnetospheric currents and low-frequency waves for pulsar braking is assessed, and a model is developed which tries to account for the available pulsar timing data under the unifying assumption that all pulsars have equal masses and magnetic moments and are born as rapid rotators. Four epochs of slow-down are distinguished, dominated by different braking mechanisms. According to the model, no direct relationship exists between 'slow-down age' and the true age of a pulsar, and the model leads to a pulsar birth rate of one event per hundred years. (Author)

  2. A slowing-down problem

    Energy Technology Data Exchange (ETDEWEB)

    Carlvik, I; Pershagen, B

    1958-06-15

    An infinitely long circular cylinder of radius a is surrounded by an infinite moderator. Both media are non-capturing. The cylinder emits neutrons of age zero with a constant source density S. We assume that the ratios of the slowing-down powers and of the diffusion constants are independent of the neutron energy. The slowing-down density is calculated for two cases: a) when the slowing-down power of the cylinder medium is very small, and b) when the cylinder medium is identical with the moderator. The ratios of the slowing-down density at age τ to the source density in the two cases are called ψ_V and ψ_M, respectively. ψ_V and ψ_M are functions of y = a²/4τ. These two functions are calculated and tabulated for y = 0-0.25.
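
    The tabulated functions themselves are not reproduced in this record, but the quantity behind ψ_M can be sketched in Fermi age theory: the slowing-down density from a line source in two dimensions is the Gaussian kernel exp(-r²/4τ)/(4πτ). Integrating that kernel over a uniform cylindrical source (case b, cylinder identical with the moderator) and evaluating on the axis gives the closed form 1 − e^(−y) with y = a²/4τ. The on-axis evaluation point is an assumption for illustration; the report does not state it here.

```python
import math

def psi_m_axis(y, n=2000):
    """Slowing-down density on the cylinder axis divided by the source
    density S, for a cylinder medium identical with the moderator.
    Integrating the 2-D Fermi-age kernel exp(-r^2/4tau)/(4*pi*tau) over
    the cylinder cross-section and substituting u = r^2/(4*tau) reduces
    it to the integral of exp(-u) from 0 to y = a^2/(4*tau)."""
    h = y / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h          # midpoint rule
        total += math.exp(-u) * h
    return total

# The quadrature reproduces the closed form 1 - exp(-y):
for y in (0.05, 0.10, 0.25):
    print(round(psi_m_axis(y), 6), round(1.0 - math.exp(-y), 6))
```

The monotonic growth of ψ_M with y matches the physical picture: the larger the cylinder radius compared with the slowing-down length, the closer the on-axis slowing-down density comes to its infinite-medium value.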

  3. Slowing down bubbles with sound

    Science.gov (United States)

    Poulain, Cedric; Dangla, Remie; Guinard, Marion

    2009-11-01

    We present experimental evidence that a bubble moving in a fluid in which a well-chosen acoustic noise is superimposed can be significantly slowed down even for moderate acoustic pressure. Through mean velocity measurements, we show that a condition for this effect to occur is for the acoustic noise spectrum to match or overlap the bubble's fundamental resonant mode. We render the bubble's oscillations and translational movements using high speed video. We show that radial oscillations (Rayleigh-Plesset type) have no effect on the mean velocity, while above a critical pressure, a parametric type instability (Faraday waves) is triggered and gives rise to nonlinear surface oscillations. We evidence that these surface waves are subharmonic and responsible for the bubble's drag increase. When the acoustic intensity is increased, Faraday modes interact and the strongly nonlinear oscillations behave randomly, leading to a random behavior of the bubble's trajectory and consequently to a higher slow down. Our observations may suggest new strategies for bubbly flow control, or two-phase microfluidic devices. It might also be applicable to other elastic objects, such as globules, cells or vesicles, for medical applications such as elasticity-based sorting.

  4. Tandem queue with server slow-down

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2007-01-01

    We study how rare events happen in the standard two-node tandem Jackson queue and in a generalization, the so-called slow-down network, see [2]. In the latter model the service rate of the first server depends on the number of jobs in the second queue: the first server slows down if the amount of

  5. Neutron slowing-down time in matter

    Energy Technology Data Exchange (ETDEWEB)

    Chabod, Sebastien P., E-mail: sebastien.chabod@lpsc.in2p3.fr [LPSC, Universite Joseph Fourier Grenoble 1, CNRS/IN2P3, Institut Polytechnique de Grenoble, 38000 Grenoble (France)

    2012-03-21

    We formulate the neutron slowing-down time through elastic collisions in a homogeneous, non-absorbing, infinite medium. Our approach allows us to take into account, for the first time, the energy dependence of the scattering cross-section as well as the energy and temporal distribution of the source neutron population. Starting from this development, we investigate the specific case of the propagation in matter of a mono-energetic neutron pulse. We then quantify the perturbation of the neutron slowing-down time induced by resonances in the scattering cross-section. We show that a resonance can induce a permanent reduction of the slowing-down time, preceded by two discontinuities: a first one at the resonance peak position and an echo, appearing later. From this study, we suggest that a temperature increase of the propagating medium in the presence of large resonances could modestly accelerate the neutron moderation.
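
    The baseline the paper generalizes can be sketched in the continuous-slowing-down picture with an energy-independent scattering cross section, where the slowing-down time is t = ∫ dE / (ξ Σ_s v(E) E) and admits the closed form 2/(ξ Σ_s)·(1/v₁ − 1/v₀). The moderator values below are illustrative, water-like assumptions, not data from the paper.

```python
import math

M_N = 1.67492749e-27      # neutron mass (kg)
EV = 1.60217663e-19       # J per eV

def speed(E_eV):
    """Non-relativistic neutron speed in m/s."""
    return math.sqrt(2.0 * E_eV * EV / M_N)

def slowing_down_time(E0, E1, xi, Sigma_s, n=20000):
    """Mean slowing-down time from E0 to E1 (eV) for an energy-independent
    macroscopic scattering cross section Sigma_s (1/m):
    t = integral dE / (xi * Sigma_s * v(E) * E), integrated in lethargy
    since dE/E = du."""
    h = math.log(E0 / E1) / n
    t = 0.0
    for i in range(n):
        E = E1 * math.exp((i + 0.5) * h)   # midpoint in lethargy
        t += h / (xi * Sigma_s * speed(E))
    return t

# Constant Sigma_s admits the closed form 2/(xi*Sigma_s) * (1/v1 - 1/v0):
xi, Sigma = 0.927, 150.0                   # water-like values (assumed)
t_num = slowing_down_time(2.0e6, 1.0, xi, Sigma)
t_ref = 2.0 / (xi * Sigma) * (1.0 / speed(1.0) - 1.0 / speed(2.0e6))
print(t_num, t_ref)
```

The paper's contribution is precisely to go beyond this picture, letting Σ_s depend on energy (resonances included) and the source have an energy and time distribution.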

  6. Analysis of the neutron slowing down equation

    International Nuclear Information System (INIS)

    Sengupta, A.; Karnick, H.

    1978-01-01

    The infinite series solution of the elementary neutron slowing down equation is studied using the theory of entire functions of exponential type and nonharmonic Fourier series. It is shown from the Müntz-Szász and Paley-Wiener theorems that the set of exponentials {exp(iλ_n u)}_{n=-∞}^{∞}, where {λ_n}_{n=-∞}^{∞} are the roots of the transcendental equation in slowing down theory, is complete and forms a basis in a lethargy interval ε. This distinctive role of the maximum lethargy change per collision is due to the Fredholm character of the slowing down operator, which need not be quasinilpotent. The discontinuities in the derivatives of the collision density are examined by treating the slowing down equation in its differential-difference form. The solution (Hilbert) space is the union of a countable number of subspaces L²(-ε/2, ε/2), over each of which the exponential functions are complete

  7. Testing algorithms for critical slowing down

    Directory of Open Access Journals (Sweden)

    Cossu Guido

    2018-01-01

    We present preliminary tests of two modifications of the Hybrid Monte Carlo (HMC) algorithm. Both algorithms are designed to travel much farther in the Hamiltonian phase space in each trajectory and to reduce the autocorrelations among physical observables, thus tackling the critical slowing down towards the continuum limit. We present a comparison of the costs of the new algorithms with the standard HMC evolution for pure gauge fields, studying the autocorrelation times for various quantities including the topological charge.
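
    The autocorrelation times compared in such studies are integrated autocorrelation times of a Monte Carlo time series. A minimal estimator with a self-consistent truncation window (in the spirit of the standard Madras-Sokal prescription; the toy AR(1) chain and all numbers below are illustrative assumptions, not data from the proceedings):

```python
import random

def tau_int(series, c=6.0):
    """Integrated autocorrelation time 1/2 + sum_t rho(t), truncated with
    a self-consistent window: stop at the first lag t >= c * tau."""
    n = len(series)
    mean = sum(series) / n
    x = [s - mean for s in series]
    var = sum(v * v for v in x) / n
    tau = 0.5
    for t in range(1, n // 2):
        rho = sum(x[i] * x[i + t] for i in range(n - t)) / ((n - t) * var)
        tau += rho
        if t >= c * tau:          # window grows until it covers ~c tau lags
            break
    return tau

random.seed(1)
# AR(1) toy chain with known tau_int = 0.5 + a/(1-a) = 4.5 for a = 0.8
a, x, chain = 0.8, 0.0, []
for _ in range(100000):
    x = a * x + random.gauss(0.0, 1.0)
    chain.append(x)
print(round(tau_int(chain), 2))
```

For a critical-slowing-down study one would run this estimator on, e.g., the topological charge history at several lattice spacings and watch the estimate grow towards the continuum limit.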

  8. Slowing down modernity: A critique

    OpenAIRE

    Vostal , Filip

    2017-01-01

    The connection between modernization and social acceleration is now a prominent theme in critical social analysis. Taking a cue from these debates, I explore attempts that aim to 'slow down modernity' by resisting the dynamic tempo of various social processes and experiences. Slowdown is now largely taken as an unquestioned measure, expected to deliver an unhasty tempo conditioning a good and ethical life, mental well-being and accountable democracy. In princip...

  9. Lead Slowing Down Spectrometer Status Report

    International Nuclear Information System (INIS)

    Warren, Glen A.; Anderson, Kevin K.; Bonebrake, Eric; Casella, Andrew M.; Danon, Yaron; Devlin, M.; Gavron, Victor A.; Haight, R.C.; Imel, G.R.; Kulisek, Jonathan A.; O'Donnell, J.M.; Weltz, Adam

    2012-01-01

    This report documents the progress completed in the first half of FY2012 in the MPACT-funded Lead Slowing Down Spectrometer project. Significant progress has been made on the algorithm development. We have an improved understanding of the experimental responses in the LSDS for fuel-related material. The calibration of the ultra-depleted uranium foils was completed, but the results are inconsistent from measurement to measurement. Future work includes developing a conceptual model of an LSDS system to assay plutonium in used fuel, improving agreement between simulations and measurements, designing a thorium fission chamber, and evaluating additional detector techniques.

  10. MMRW-BOOKS, Legacy books on slowing down, thermalization, particle transport theory, random processes in reactors

    International Nuclear Information System (INIS)

    Williams, M.M.R.

    2007-01-01

    Description: Prof. M.M.R. Williams has now released three of his legacy books for free distribution: 1 - M.M.R. Williams: The Slowing Down and Thermalization of Neutrons, North-Holland Publishing Company, Amsterdam, 582 pages, 1966. Content: Part I - The Thermal Energy Region: 1. Introduction and Historical Review, 2. The Scattering Kernel, 3. Neutron Thermalization in an Infinite Homogeneous Medium, 4. Neutron Thermalization in Finite Media, 5. The Spatial Dependence of the Energy Spectrum, 6. Reactor Cell Calculations, 7. Synthetic Scattering Kernels. Part II - The Slowing Down Region: 8. Scattering Kernels in the Slowing Down Region, 9. Neutron Slowing Down in an Infinite Homogeneous Medium, 10. Neutron Slowing Down and Diffusion. 2 - M.M.R. Williams: Mathematical Methods in Particle Transport Theory, Butterworths, London, 430 pages, 1971. Content: 1. The General Problem of Particle Transport, 2. The Boltzmann Equation for Gas Atoms and Neutrons, 3. Boundary Conditions, 4. Scattering Kernels, 5. Some Basic Problems in Neutron Transport and Rarefied Gas Dynamics, 6. The Integral Form of the Transport Equation in Plane, Spherical and Cylindrical Geometries, 7. Exact Solutions of Model Problems, 8. Eigenvalue Problems in Transport Theory, 9. Collision Probability Methods, 10. Variational Methods, 11. Polynomial Approximations. 3 - M.M.R. Williams: Random Processes in Nuclear Reactors, Pergamon Press, Oxford, 243 pages, 1974. Content: 1. Historical Survey and General Discussion, 2. Introductory Mathematical Treatment, 3. Applications of the General Theory, 4. Practical Applications of the Probability Distribution, 5. The Langevin Technique, 6. Point Model Power Reactor Noise, 7. The Spatial Variation of Reactor Noise, 8. Random Phenomena in Heterogeneous Reactor Systems, 9. Associated Fluctuation Problems, Appendix: Noise Equivalent Sources. Note to the user: Prof. M.M.R. Williams owns the copyright of these books and he authorises the OECD/NEA Data Bank to distribute them.

  11. Theory of neutron slowing down in nuclear reactors

    CERN Document Server

    Ferziger, Joel H; Dunworth, J V

    2013-01-01

    The Theory of Neutron Slowing Down in Nuclear Reactors focuses on one facet of nuclear reactor design: the slowing down (or moderation) of neutrons from the high energies with which they are born in fission to the energies at which they are ultimately absorbed. In conjunction with the study of neutron moderation, calculations of reactor criticality are presented. A mathematical description of the slowing-down process is given, with particular emphasis on the problems encountered in the design of thermal reactors. This volume is comprised of four chapters and begins by considering the problems

  12. Neutron slowing-down time in finite water systems

    International Nuclear Information System (INIS)

    Hirschberg, S.

    1981-11-01

    The influence of the size of a moderator system on the neutron slowing-down time has been investigated. The experimental part of the study was performed on six cubes of water with side lengths from 8 to 30 cm. Neutrons generated in pulses of about 1 ns width were slowed down from 14 MeV to 1.457 eV. The detection method used was based on registration of gamma radiation from the main capture resonance of indium. The most probable slowing-down times were found to be 778 +- 23 ns and 898 +- 25 ns for the smallest and the largest cubes, respectively. The corresponding mean slowing-down times were 1205 +- 42 ns and 1311 +- 42 ns. In a separate measurement series the space dependence of the slowing-down time close to the source was studied. These experiments were supplemented by a theoretical calculation which gave an indication of the space dependence of the slowing-down time in finite systems. The experimental results were compared to the slowing-down times obtained from various theoretical approaches and from Monte Carlo calculations. All the methods show a decrease of the slowing-down time with decreasing size of the moderator. This effect was least pronounced in the experimental results, which can be explained by the fact that the measurements are spatially dependent. The agreement between the Monte Carlo results and those obtained using the diffusion approximation or the age-diffusion theory is surprisingly good, especially for large systems. The P1 approximation, on the other hand, leads to an overestimation of the effect of the finite size on the slowing-down time. (author)

  13. Experimental studies of heavy-ion slowing down in matter

    International Nuclear Information System (INIS)

    Geissel, H.; Weick, H.; Scheidenberger, C.; Bimbot, R.; Gardes, D.

    2002-08-01

    Measurements of heavy-ion slowing down in matter differ in many aspects from experiments with light particles like protons and α-particles. An overview of the special experimental requirements, methods, data analysis and interpretation is presented for heavy-ion stopping powers, energy- and angular-straggling and ranges in the energy domain from keV/u up to GeV/u. Characteristic experimental results are presented and compared with theory and semiempirical predictions. New applications are outlined, which represent a challenge to continuously improve the knowledge of heavy-ion slowing down. (orig.)

  14. Pulsed neutron method for diffusion, slowing down, and reactivity measurements

    International Nuclear Information System (INIS)

    Sjoestrand, N.G.

    1985-01-01

    An outline is given of the principles of the pulsed neutron method for the determination of thermal neutron diffusion parameters, for slowing-down time measurements, and for reactivity determinations. The historical development is sketched from the breakthrough in the middle of the nineteen-fifties, and the usefulness and limitations of the method are discussed. The importance for the present understanding of neutron slowing-down, thermalization and diffusion is pointed out. Examples are given of its recent use, e.g. for absorption cross section measurements and for the study of the properties of heterogeneous systems

  15. Solution of neutron slowing down equation including multiple inelastic scattering

    International Nuclear Information System (INIS)

    El-Wakil, S.A.; Saad, A.E.

    1977-01-01

    The present work is devoted to the presentation of an analytical method for the calculation of elastically and inelastically slowed-down neutrons in an infinite, non-absorbing, homogeneous medium. On the basis of the central limit theorem (CLT) and the integral transform technique, the slowing down equation including inelastic scattering is solved in terms of the Green function of elastic scattering. The Green function is decomposed according to the number of collisions. A formula for the flux φ(u) at any lethargy u after any number of collisions is derived. An equation for the asymptotic flux is also obtained

  16. Investigating the critical slowing down of QCD simulations

    International Nuclear Information System (INIS)

    Schaefer, Stefan

    2009-12-01

    Simulations of QCD are known to suffer from serious critical slowing down towards the continuum limit. This is particularly prominent in the topological charge. We investigate the severity of the problem in the range of lattice spacings used in contemporary simulations and propose a method to give more reliable error estimates. (orig.)

  17. Continuous neutron slowing down theory applied to resonances

    International Nuclear Information System (INIS)

    Segev, M.

    1977-01-01

    Neutronic formalisms that discretize the neutron slowing down equations in large numerical intervals currently account for the bulk effect of resonances in a given interval by the narrow resonance approximation (NRA). The NRA reduces the original problem to an efficient numerical formalism through two assumptions: resonance narrowness with respect to the scattering bands in the slowing down equations, and resonance narrowness with respect to the numerical intervals. Resonances at low energies are narrow neither with respect to the slowing down ranges nor with respect to the numerical intervals, which are usually of a fixed lethargy width. Thus, there are resonances to which the NRA is not applicable. To stay away from the NRA, the continuous slowing down (CSD) theory of Stacey was invoked. The theory is based on a linear expansion in lethargy of the collision density in integrals of the slowing down equations and has had notable success in various problems. Applying CSD theory to the assessment of bulk resonance effects raises the problem of obtaining efficient quadratures for integrals involved in the definition of the so-called 'moderating parameter'. The problem was solved by two approximations: (a) the integrals were simplified through a rationale such that the correct integrals were reproduced for very narrow or very wide resonances, and (b) the temperature-broadened resonant line shapes were replaced by non-broadened line shapes to enable analytical integration. The replacement was made in such a way that the integrated capture and scattering probabilities in each resonance were preserved. The resulting formalism is more accurate than the narrow-resonance formalisms and equally efficient
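
    The NRA starting point can be made concrete with a toy single-level capture resonance: the NR flux shape is φ_NR(E) ∝ (σ_p + σ_b)/(σ_t(E) + σ_b) · 1/E, where σ_b is the background (dilution) cross section per absorber atom, and the effective resonance integral follows by quadrature. All resonance parameters below are illustrative assumptions, not evaluated data, and resonance scattering is ignored for brevity.

```python
# Illustrative single-level Breit-Wigner capture resonance at T = 0
E_R, GAMMA, G_GAMMA, SIG_0, SIG_P = 10.0, 0.1, 0.09, 1.0e4, 10.0  # eV, barns

def sigma_gamma(E):
    """Breit-Wigner capture cross section (barns)."""
    return (SIG_0 * (G_GAMMA / GAMMA) * (GAMMA**2 / 4.0)
            / ((E - E_R)**2 + GAMMA**2 / 4.0))

def resonance_integral(sigma_b, E_lo=5.0, E_hi=15.0, n=200000):
    """Effective capture integral with the narrow-resonance flux
    phi_NR(E) = (SIG_P + sigma_b) / (sigma_t(E) + sigma_b) * (1/E),
    where sigma_t = SIG_P + sigma_gamma (resonance scattering ignored)."""
    h = (E_hi - E_lo) / n
    total = 0.0
    for i in range(n):
        E = E_lo + (i + 0.5) * h
        phi_shape = (SIG_P + sigma_b) / (SIG_P + sigma_gamma(E) + sigma_b)
        total += sigma_gamma(E) * phi_shape / E * h
    return total

i_dilute = resonance_integral(1.0e6)   # near infinite dilution
i_dense = resonance_integral(50.0)     # strong self-shielding
print(round(i_dilute, 2), round(i_dense, 2))
```

The self-shielded integral is much smaller than the infinite-dilution one, which is exactly the bulk resonance effect the NRA captures cheaply; the CSD treatment in the abstract is aimed at resonances where the 1/E flux shape between collisions is no longer a valid assumption.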

  18. An ultra-fine group slowing down benchmark

    International Nuclear Information System (INIS)

    Ganapol, B. D.; Maldonado, G. I.; Williams, M. L.

    2009-01-01

    We suggest a new solution of the neutron slowing down equation in terms of multi-energy panels. Our motivation is to establish a computational benchmark featuring an ultra-fine group calculation, where the number of groups could be on the order of 100,000. While the CENTRM code of the SCALE package has been shown to treat this many groups adequately, there is always a need for additional verification. The multi-panel solution principle is simply to consider the slowing down region as a sequence of panels, each panel holding a manageable number of groups, say 100. In this way, we reduce the enormity of dealing with the entire spectrum all at once by considering many smaller problems. We demonstrate the solution in the unresolved U3O8 resonance region. (authors)
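
    The panel idea rests on the downward-only coupling of elastic slowing down: a neutron can gain at most ε = ln(1/α) in lethargy per collision, so an ultra-fine group sweep only ever needs a trailing window of ε/Δu groups in memory. The sketch below (carbon-like mass, constant cross section, unit source at u = 0; all assumptions for illustration, not the authors' or CENTRM's actual algorithm) marches the collision density group by group and recovers the asymptotic value 1/ξ after the Placzek transient.

```python
import math

A = 12.0                              # moderator mass number (assumed)
ALPHA = ((A - 1.0) / (A + 1.0))**2    # min energy ratio after a collision
EPS = math.log(1.0 / ALPHA)           # max lethargy gain per collision
XI = 1.0 + ALPHA * math.log(ALPHA) / (1.0 - ALPHA)   # mean lethargy gain

DU = 1.0e-3                           # ultra-fine group lethargy width
N = int(3.0 / DU)                     # groups covering u in [0, 3]
W = int(EPS / DU)                     # trailing window: only these couple

F = [0.0] * N                         # collision density per unit lethargy
for i in range(N):
    u = (i + 0.5) * DU
    # first collisions of source neutrons (unit source at u = 0)
    val = math.exp(-u) / (1.0 - ALPHA) if u <= EPS else 0.0
    # in-scatter from earlier groups inside the window u' in [u - eps, u)
    for j in range(max(0, i - W), i):
        up = (j + 0.5) * DU
        val += F[j] * math.exp(-(u - up)) / (1.0 - ALPHA) * DU
    F[i] = val

print(round(F[-1], 3), round(1.0 / XI, 3))   # Placzek transient has died out
```

Splitting the i-loop into blocks of 100 groups gives exactly the panel structure of the benchmark: each panel only needs the stored collision densities of the previous ε/Δu groups as its boundary source.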

  19. Critical slowing down and error analysis in lattice QCD simulations

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, Stefan [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Sommer, Rainer; Virotta, Francesco [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC

    2010-09-15

    We study the critical slowing down towards the continuum limit of lattice QCD simulations with Hybrid Monte Carlo type algorithms. In particular for the squared topological charge we find it to be very severe with an effective dynamical critical exponent of about 5 in pure gauge theory. We also consider Wilson loops which we can demonstrate to decouple from the modes which slow down the topological charge. Quenched observables are studied and a comparison to simulations of full QCD is made. In order to deal with the slow modes in the simulation, we propose a method to incorporate the information from slow observables into the error analysis of physical observables and arrive at safer error estimates. (orig.)

  20. Critical slowing down and error analysis in lattice QCD simulations

    International Nuclear Information System (INIS)

    Schaefer, Stefan; Sommer, Rainer; Virotta, Francesco

    2010-09-01

    We study the critical slowing down towards the continuum limit of lattice QCD simulations with Hybrid Monte Carlo type algorithms. In particular for the squared topological charge we find it to be very severe with an effective dynamical critical exponent of about 5 in pure gauge theory. We also consider Wilson loops which we can demonstrate to decouple from the modes which slow down the topological charge. Quenched observables are studied and a comparison to simulations of full QCD is made. In order to deal with the slow modes in the simulation, we propose a method to incorporate the information from slow observables into the error analysis of physical observables and arrive at safer error estimates. (orig.)
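
    The proposed error-analysis method attaches an exponential tail to the windowed autocorrelation sum, so that slow modes not resolved within the summation window W still contribute: τ_int ≈ 1/2 + Σ_{t≤W} ρ(t) + τ_exp·ρ(W+1). A minimal sketch on a toy autocorrelation function with a single exponential mode (the function name and numbers are illustrative, not from the paper):

```python
import math

def tau_int_with_tail(rho, W, tau_exp):
    """Windowed integrated autocorrelation time plus an exponential-tail
    correction for slow modes:
    1/2 + sum_{t=1}^{W} rho[t] + tau_exp * rho[W+1]."""
    return 0.5 + sum(rho[1:W + 1]) + tau_exp * rho[W + 1]

# Toy chain with a single exponential mode rho(t) = a^t, whose exact
# integrated autocorrelation time is 1/2 + a/(1-a).
a = 0.9
rho = [a**t for t in range(64)]
tau_exp = -1.0 / math.log(a)            # time constant of the slow mode
exact = 0.5 + a / (1.0 - a)             # = 9.5
naive = 0.5 + sum(rho[1:11])            # truncated at W = 10: underestimates
tail = tau_int_with_tail(rho, 10, tau_exp)
print(round(naive, 2), round(tail, 2), exact)
```

The naive truncated estimate misses a sizeable part of the autocorrelation, while the tail-corrected value comes close to the exact one; in practice τ_exp would be taken from the slowest observable (e.g. the topological charge), which is the safer-error-estimate strategy the abstract describes.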

  1. Slowing down of test particles in a plasma (1961)

    International Nuclear Information System (INIS)

    Belayche, P.; Chavy, P.; Dupoux, M.; Salmon, J.

    1961-01-01

    Numerical solution of the Fokker-Planck equation applied to the slowing down of tritons in a deuterium plasma. After the equations and the boundary conditions have been written down, some attention is paid to the numerical techniques used to run the problem on a high-speed electronic computer. The numerical results thus obtained are then analyzed and, as far as possible, mathematically explained. (authors)

  2. Development of lead slowing down spectrometer for isotopic fissile assay

    International Nuclear Information System (INIS)

    Lee, Yong Deok; Park, Chang Je; Ahn, Sang Joon; Kim, Ho Dong

    2014-01-01

    A lead slowing down spectrometer (LSDS) is under development for analysis of isotopic fissile material contents in pyro-processed material or spent fuel. Many current commercial fissile assay technologies are limited in accurate and direct assay of fissile content. However, LSDS is very sensitive in distinguishing the fissile fission signals of each isotope. A neutron spectrum analysis was conducted in the spectrometer and the energy resolution was investigated from 0.1 eV to 100 keV. The spectrum was well shaped in the slowing-down energy range. The resolution was sufficient to resolve each fissile isotope from 0.2 eV to 1 keV. The presence of the detector in the lead will disturb the source neutron spectrum, causing a change in resolution and peak amplitude. An intense neutron source of ~10^12 n/s was specified to overcome the spent fuel background. The detection sensitivity of U238 and Th232 fission chambers was investigated. The first and second layer detectors increase detection efficiency. Thorium also has a threshold property for detecting the fast fission neutrons from fissile fission. However, the detection of Th232 is about 76% of that of U238. A linear detection model was set up over the slowing-down neutron energy to obtain each fissile material content. The isotopic fissile assay using LSDS is applicable to the optimum design of spent fuel storage to maximize burnup credit and to quality assurance of recycled nuclear material for safety and economics. LSDS technology will contribute to the transparency and credibility of pyro-processing of spent fuel, as internationally demanded.
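
    A "linear detection model" of this kind can be sketched as an over-determined linear system: the measured fission response R(E_j) in each slowing-down energy bin is modeled as Σ_i m_i S_i(E_j), where S_i are per-gram fissile signatures, and the masses m_i follow from the least-squares normal equations. The signatures and counts below are synthetic and purely illustrative, not LSDS data.

```python
# Synthetic per-gram fission signatures over four energy bins (assumed)
S = [
    [5.0, 3.0, 1.0, 0.5],   # "Pu-239"-like response per gram
    [2.0, 4.0, 2.5, 1.0],   # "U-235"-like response per gram
]
true_m = [3.0, 2.0]         # grams of each fissile isotope

# Measured response in each bin: R_j = sum_i m_i * S_i[j]
R = [sum(true_m[i] * S[i][j] for i in range(2)) for j in range(4)]

# Normal equations (S S^T) m = S R for the least-squares mass estimate
a11 = sum(S[0][j] * S[0][j] for j in range(4))
a12 = sum(S[0][j] * S[1][j] for j in range(4))
a22 = sum(S[1][j] * S[1][j] for j in range(4))
b1 = sum(S[0][j] * R[j] for j in range(4))
b2 = sum(S[1][j] * R[j] for j in range(4))
det = a11 * a22 - a12 * a12
m = [(a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det]
print([round(v, 6) for v in m])   # recovers the masses used to build R
```

With noisy counts the same normal equations give the maximum-likelihood masses under Gaussian statistics; the distinguishability of the isotopes rests on their signature shapes differing across the slowing-down energy bins, which is why the energy resolution discussed in the abstract matters.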

  3. Topology and slowing down of high energy ion orbits

    Energy Technology Data Exchange (ETDEWEB)

    Eriksson, L G [Commission of the European Communities, Abingdon (United Kingdom). JET Joint Undertaking; Porcelli, F [Politecnico di Torino, Turin (Italy); Berk, H L [Texas Univ., Austin, TX (United States). Inst. for Fusion Studies

    1994-07-01

    An analysis of nonstandard guiding centre orbits is presented, which is relevant to MeV ions in a tokamak. The orbit equation has been simplified from the start, allowing us to present an analytic classification of the possible orbits. The topological transitions of the orbits during collisional slowing down are described. In particular, the characteristic equations reveal the existence of a single fixed point in the relevant phase plane, and the presence of a bifurcation curve corresponding to the locus of the pinch orbits. A significant inward particle pinch has been discovered. (authors). 7 figs.

  4. An Exact Solution of The Neutron Slowing Down Equation

    Energy Technology Data Exchange (ETDEWEB)

    Stefanovic, D [Boris Kidric Vinca Institute of Nuclear Sciences, Vinca, Belgrade (Yugoslavia)

    1970-07-01

    The slowing down equation for an infinite homogeneous monoatomic medium is solved exactly. The cross sections depend on neutron energy. The solution is given in analytical form within each of the lethargy intervals. This analytical form is the sum of probabilities which are given by the Green functions. The calculated collision density is compared with the one obtained by Bednarz and also with an approximate Wigner formula for the case of a resonance not wider than one collision interval. For the special case of hydrogen, the present solution reduces to Bethe's solution. (author)

  5. Energy spectra from coupled electron-photon slowing down

    International Nuclear Information System (INIS)

    Beck, H.L.

    1976-08-01

    A coupled electron-photon slowing down calculation for determining electron and photon track length in uniform homogeneous media is described. The method also provides fluxes for uniformly distributed isotropic sources. Source energies ranging from 10 keV to over 10 GeV are allowed and all major interactions are treated. The calculational technique and related cross sections are described in detail and sample calculations are discussed. A listing of the Fortran IV computer code used for the calculations is also included. 4 tables, 7 figures, 16 references

  6. Physical condition for the slowing down of cosmic acceleration

    Directory of Open Access Journals (Sweden)

    Ming-Jian Zhang

    2018-04-01

    The possible slowing down of cosmic acceleration has been widely studied. However, judgments on this effect in different dark energy parameterizations have been very ambiguous, and the reason for these uncertainties was still unknown. In the present paper, we analyze the derivative of the deceleration parameter q′(z) using Gaussian processes. This model-independent reconstruction suggests that no slowing down of acceleration is present within 95% C.L. for the Union2.1 and JLA supernova data. However, q′(z) from the observational H(z) data is slightly smaller than zero at 95% C.L., which indicates that future H(z) data may have the potential to test this effect. From the evolution of q′(z), we present an interesting constraint on dark energy and the observational data. The physical constraint clearly explains why some dark energy models could not produce this effect in previous work. Comparison between the constraint and the observational data also shows that most current data are not in the allowed regions. This implies a reason why current data cannot convincingly measure this effect.
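
    The key technical step, reconstructing the derivative of a quantity from noisy samples with a Gaussian process, can be sketched in a few lines: with an RBF kernel, the derivative of the posterior mean at z is Σ_i ∂k(z, z_i)/∂z · (K⁻¹y)_i. The toy data below (a smooth sin curve standing in for q(z)) and all hyperparameters are illustrative assumptions, not the paper's setup.

```python
import math

def rbf(x1, x2, ell=1.0):
    """Squared-exponential (RBF) covariance."""
    return math.exp(-0.5 * (x1 - x2)**2 / ell**2)

def solve(Amat, b):
    """Gaussian elimination with partial pivoting (small dense system)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(Amat)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

# Noise-free training data from a smooth toy "q(z)"
zs = [0.5 * i for i in range(7)]            # z in [0, 3]
ys = [math.sin(z) for z in zs]
n = len(zs)
K = [[rbf(zs[i], zs[j]) + (1e-8 if i == j else 0.0) for j in range(n)]
     for i in range(n)]
alpha = solve(K, ys)                        # K^{-1} y

def gp_mean_derivative(z, ell=1.0):
    """d/dz of the GP posterior mean: sum_i dk(z, z_i)/dz * alpha_i."""
    return sum(-(z - zs[i]) / ell**2 * rbf(z, zs[i]) * alpha[i]
               for i in range(n))

# compare with the true derivative cos(1.5)
print(round(gp_mean_derivative(1.5), 3), round(math.cos(1.5), 3))
```

In the paper's application the inputs are supernova and H(z) data with errors folded into the kernel matrix, and the posterior variance of the derivative supplies the 95% confidence band on q′(z).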

  7. Physical condition for the slowing down of cosmic acceleration

    Science.gov (United States)

    Zhang, Ming-Jian; Xia, Jun-Qing

    2018-04-01

    The possible slowing down of cosmic acceleration has been widely studied. However, judgments on this effect in different dark energy parameterizations have been very ambiguous, and the reason for these uncertainties was still unknown. In the present paper, we analyze the derivative of the deceleration parameter q′(z) using Gaussian processes. This model-independent reconstruction suggests that no slowing down of acceleration is present within 95% C.L. for the Union2.1 and JLA supernova data. However, q′(z) from the observational H(z) data is slightly smaller than zero at 95% C.L., which indicates that future H(z) data may have the potential to test this effect. From the evolution of q′(z), we present an interesting constraint on dark energy and the observational data. The physical constraint clearly explains why some dark energy models could not produce this effect in previous work. Comparison between the constraint and the observational data also shows that most current data are not in the allowed regions. This implies a reason why current data cannot convincingly measure this effect.

  8. On the neutron slowing-down in moderators

    Energy Technology Data Exchange (ETDEWEB)

    Caldeira, Alexandre D., E-mail: alexdc@ieav.cta.br [Instituto de Estudos Avançados (IEAV), São José dos Campos, SP (Brazil). Divisão de Energia Nuclear

    2017-07-01

    Neutron slowing-down is a very important subject in several areas of nuclear energy application, such as thermal nuclear reactors, nuclear medicine, radiological protection, detector design and so on. Moderator materials are responsible for performing this task and, among the neutron-induced cross sections, the elastic scattering cross section is the main nuclear interaction in this case. At thermal neutron energies the moderator's molecular or crystalline structure becomes important and, depending on the moderator phase (gas, liquid or solid), its cross sections and, consequently, the angular and energy distributions of the scattered neutrons are affected. The procedures used for correctly generating moderator cross sections at thermal neutron energies from evaluated nuclear data files utilizing the NJOY system are addressed. (author)

  9. Slowing down of relativistic heavy ions and new applications

    International Nuclear Information System (INIS)

    Geissel, H.; Scheidenberger, C.

    1997-10-01

    New precision experiments using powerful accelerator facilities and high-resolution spectrometers have contributed to a better understanding of the atomic and nuclear interactions of relativistic heavy ions with matter. Experimental results on the stopping power and energy-loss straggling of bare heavy projectiles demonstrate large systematic deviations from theories based on first-order perturbation. The energy-loss straggling is enhanced by more than a factor of two for the heaviest projectiles compared to the relativistic Bohr formula. The interaction of cooled relativistic heavy ions with crystals opens up new fields for basic research and applications; i.e., for the first time, resonant coherent excitations of both atomic and nuclear levels can be measured at the first harmonic. The spatial monoisotopic separation of exotic nuclei with in-flight separators and tumor therapy with heavy ions are new applications based on a precise knowledge of slowing down. (orig.)

  10. Slowing down of alpha particles in ICF DT plasmas

    Science.gov (United States)

    He, Bin; Wang, Zhi-Gang; Wang, Jian-Guo

    2018-01-01

    With the effects of projectile recoil and plasma polarization considered, the slowing down of 3.54 MeV alpha particles is studied in inertial confinement fusion DT plasmas within the plasma density range from 10^24 to 10^26 cm^-3 and the temperature range from 100 eV to 200 keV. The study includes the rate of energy change and the range of the projectile, and the partition fraction of its energy deposition to the deuteron and triton. A comparison with other models is made and the reason for their differences is explored. It is found that the plasmas will not be heated by the alpha particle during its slowing down once the projectile energy becomes close to or less than the temperature of the electrons or of the deuterons and tritons in the plasmas. This leads to less energy deposition to the deuteron and triton than predicted when the recoil of the projectile is neglected, for temperatures close to or higher than 100 keV. Our model is found to provide relevant, reliable data over the large range of density and temperature mentioned above, even when the density is around 10^26 cm^-3 while the deuteron and triton temperature is below 500 eV. Meanwhile, two important models [Phys. Rev. 126, 1 (1962) and Phys. Rev. E 86, 016406 (2012)] are found not to work in this case. Some unreliable data are found in the latter model, including the range of alpha particles and the electron-ion energy partition fraction when the electrons are much hotter than the deuterons and tritons in the plasmas.

  11. LEAD SLOWING DOWN SPECTROSCOPY FOR DIRECT Pu MASS MEASUREMENTS

    International Nuclear Information System (INIS)

    Ressler, Jennifer J.; Smith, Leon E.; Anderson, Kevin K.

    2008-01-01

    The direct measurement of Pu in previously irradiated fuel assemblies is a recognized need in the international safeguards community. A suitable technology could support more timely and independent material control and accounting (MC&A) measurements at nuclear fuel storage areas, the head-end of reprocessing facilities, and at the product-end of recycled fuel fabrication. Lead slowing-down spectroscopy (LSDS) may be a viable solution for directly measuring not only the mass of 239Pu in fuel assemblies, but also the masses of other fissile isotopes such as 235U and 241Pu. To assess the potential viability of LSDS, an LSDS spectrometer was modeled in MCNP5 and 'virtual assays' of nominal PWR assemblies ranging from 0 to 60 GWd/MTU burnup were completed. Signal extraction methods, including the incorporation of nonlinear fitting to account for self-shielding effects in strong resonance regions, are described. Quantitative estimates of Pu uncertainty are given for simplistic and more realistic fuel isotopic inventories calculated using ORIGEN. Additional signal-perturbing effects to be addressed in future work, and potential signal extraction approaches that could improve Pu mass uncertainties, are also discussed.
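    An LSDS works because neutrons slowing down in lead obey a tight time-energy correlation, conventionally written E = K/(t + t0)^2. The constants below are representative values for lead assumed for illustration, not this instrument's calibration:

```python
# Standard LSDS time-energy correlation: E(t) = K / (t + t0)**2.
# K and t0 are assumed representative values for lead; t in microseconds,
# E in keV. A real spectrometer fits these constants to calibration data.
K_KEV_US2 = 165.0
T0_US = 0.3

def slowing_down_energy_kev(t_us):
    return K_KEV_US2 / (t_us + T0_US) ** 2

for t_us in (1.0, 10.0, 100.0):
    print(f"t = {t_us:6.1f} us  ->  E ~ {slowing_down_energy_kev(t_us):.3g} keV")
```

    Measuring a fission-rate time spectrum and mapping time to energy this way is what turns the lead pile into a (coarse-resolution) spectrometer.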

  12. Critical slowing down and error analysis in lattice QCD simulations

    International Nuclear Information System (INIS)

    Virotta, Francesco

    2012-01-01

    In this work we investigate the critical slowing down of lattice QCD simulations. We perform a preliminary study in the quenched approximation where we find that our estimate of the exponential auto-correlation time scales as τ_exp(a) ∝ a^-5, where a is the lattice spacing. In unquenched simulations with O(a) improved Wilson fermions we do not obtain a scaling law but find results compatible with the behavior that we find in the pure gauge theory. The discussion is supported by a large set of ensembles both in pure gauge and in the theory with two degenerate sea quarks. We have moreover investigated the effect of slow algorithmic modes in the error analysis of the expectation value of typical lattice QCD observables (hadronic matrix elements and masses). In the context of simulations affected by slow modes we propose and test a method to obtain reliable estimates of statistical errors. The method is supposed to help in the typical algorithmic setup of lattice QCD, namely when the total statistics collected is of O(10) τ_exp. This is the typical case when simulating close to the continuum limit where the computational costs for producing two independent data points can be extremely large. We finally discuss the scale setting in N_f = 2 simulations using the kaon decay constant f_K as physical input. The method is explained together with a thorough discussion of the error analysis employed. A description of the publicly available code used for the error analysis is included.
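    Reliable error bars in the presence of slow modes hinge on the autocorrelation structure of the Monte Carlo chain. A generic windowed estimator of the integrated autocorrelation time, sketched below, is the standard ingredient; it is not the tail-correction method proposed in the thesis:

```python
import numpy as np

# Windowed estimator of the integrated autocorrelation time tau_int of a
# Monte Carlo time series (generic sketch of the standard technique).

def tau_int(x, w):
    """Return 0.5 + sum_{t=1..w} rho(t), rho the normalized autocorrelation."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    c0 = np.dot(x, x) / n
    rho = [np.dot(x[:-t], x[t:]) / ((n - t) * c0) for t in range(1, w + 1)]
    return 0.5 + sum(rho)

# Synthetic AR(1) chain with known tau_int = 0.5 * (1 + a) / (1 - a) = 9.5
rng = np.random.default_rng(0)
a, n = 0.9, 200_000
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for i in range(1, n):
    x[i] = a * x[i - 1] + eps[i]

estimate = tau_int(x, w=200)
print(estimate)  # close to the exact value 9.5 for this chain
```

    The thesis's scenario is precisely the regime where the window w cannot be taken much larger than the data support (total statistics of only O(10) τ_exp), which is why a naive truncated sum underestimates the error and a tail correction is needed.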

  13. Detector Response to Neutrons Slowed Down in Media Containing Cadmium

    Energy Technology Data Exchange (ETDEWEB)

    Broda, E.

    1943-07-01

    This report was written by E. Broda, H. Hereward and L. Kowarski at the Cavendish Laboratory (Cambridge) in September 1943 and is about the detector response to neutrons slowed down in media containing cadmium. The following measurement description and the corresponding results can be found in this report: B, Mn, In, I, Dy and Ir detectors were activated, with and without a Cd shield, near the source in a vessel containing 7 litres of water or solutions of CdSO{sub 4} ranging between 0.1 and 2.8 mols per litre. Numerical data on the observed activities are discussed in two different ways and the following conclusions can be drawn: The capture cross-section of dysprosium decreases more quickly than 1/v, and this discrepancy becomes noticeable well within the limits of the C-group. This imposes obvious limitations on the use of Dy as a detector of thermal neutrons. The cadmium difference for manganese seems to provide a reliable 1/v detector for the whole C-group. Indium and iridium show definite signs of an increase of vσ in the upper regions of the C-group. Deviations shown by iodine are due to imperfections of the technique rather than to a definite departure from the 1/v law. (nowak)
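    The 1/v criterion used here has a simple quantitative form: for an ideal 1/v absorber the reaction rate per nucleus, vσ(v), is energy-independent, so any energy trend in vσ (as reported for Dy, In and Ir) flags a departure from the law. A minimal sketch, with assumed illustrative constants rather than measured detector data:

```python
import math

# For an ideal 1/v absorber, sigma(E) = sigma0 * sqrt(E0 / E), since the
# neutron speed v is proportional to sqrt(E). Hence v * sigma is the same
# at every energy, and a constant reaction rate per nucleus results.

SIGMA0 = 100.0   # barns at the reference energy (assumed)
E0 = 0.0253      # eV, thermal reference energy

def sigma_1v(e_ev):
    return SIGMA0 * math.sqrt(E0 / e_ev)

# v in arbitrary units proportional to sqrt(E); v * sigma should be flat.
rates = [math.sqrt(e) * sigma_1v(e) for e in (0.01, 0.0253, 0.1, 0.4)]
print(rates)  # all equal for a 1/v absorber
```

    A detector whose measured vσ rises or falls across the C-group, as Dy does here, therefore cannot be treated as a pure 1/v (thermal-flux) monitor.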

  14. Exact solutions of the neutron slowing down equation

    International Nuclear Information System (INIS)

    Dawn, T.Y.; Yang, C.N.

    1976-01-01

    The problem of finding the exact analytic closed-form solution for the neutron slowing down equation in an infinite homogeneous medium is studied in some detail. The existence and uniqueness properties of the solution of this equation for both the time-dependent and the time-independent cases are studied. A direct method is used to determine the solution of the stationary problem. The final result is given in terms of a sum of indefinite multiple integrals, by which solutions of some special cases and the Placzek-type oscillation are examined. The same method can be applied to the time-dependent problem with the aid of the Laplace transformation technique, but the inverse transform is, in general, laborious. However, the solutions of two special cases are obtained explicitly. Results are compared with previously reported work in a variety of cases. The time moments for positive integral n are evaluated, and the conditions for the existence of the negative moments are discussed.

  15. Critical slowing down and error analysis in lattice QCD simulations

    Energy Technology Data Exchange (ETDEWEB)

    Virotta, Francesco

    2012-02-21

    In this work we investigate the critical slowing down of lattice QCD simulations. We perform a preliminary study in the quenched approximation where we find that our estimate of the exponential auto-correlation time scales as τ_exp(a) ∝ a^-5, where a is the lattice spacing. In unquenched simulations with O(a) improved Wilson fermions we do not obtain a scaling law but find results compatible with the behavior that we find in the pure gauge theory. The discussion is supported by a large set of ensembles both in pure gauge and in the theory with two degenerate sea quarks. We have moreover investigated the effect of slow algorithmic modes in the error analysis of the expectation value of typical lattice QCD observables (hadronic matrix elements and masses). In the context of simulations affected by slow modes we propose and test a method to obtain reliable estimates of statistical errors. The method is supposed to help in the typical algorithmic setup of lattice QCD, namely when the total statistics collected is of O(10) τ_exp. This is the typical case when simulating close to the continuum limit where the computational costs for producing two independent data points can be extremely large. We finally discuss the scale setting in N_f = 2 simulations using the kaon decay constant f_K as physical input. The method is explained together with a thorough discussion of the error analysis employed. A description of the publicly available code used for the error analysis is included.

  16. How Accurately Can We Calculate Neutrons Slowing Down In Water ?

    International Nuclear Information System (INIS)

    Cullen, D E; Blomquist, R; Greene, M; Lent, E; MacFarlane, R; McKinley, S; Plechaty, E; Sublet, J C

    2006-01-01

    We have compared the results produced by a variety of currently available Monte Carlo neutron transport codes for the relatively simple problem of a fast source of neutrons slowing down and thermalizing in water. Initial comparisons showed rather large differences in the calculated flux, up to 80%. By working together we iterated to improve the results by: (1) ensuring that all codes used the same data, (2) improving the models used by the codes, and (3) correcting errors in the codes; no code is perfect. Even after a number of iterations we still found differences, demonstrating that our Monte Carlo and supporting codes are far from perfect; in particular, we found that the often-overlooked nuclear data processing codes can be the weakest link in our systems of codes. The results presented here represent today's state of the art, in the sense that all of the Monte Carlo codes are modern, widely available and widely used. They all use the most up-to-date nuclear data, and the results are very recent, weeks or at most a few months old; these are the results that current users of these codes should expect to obtain from them. As such, the accuracy and limitations of the codes presented here should serve as guidelines to code users in interpreting their results for similar problems. We avoid crystal-ball gazing, in the sense that we limit the scope of this report to what is available to code users today, and we avoid predicting future improvements that may or may not actually come to pass. One exception is that we present results for an improved thermal scattering model currently being tested using advanced versions of NJOY and MCNP that are not yet available to users, but are planned for release in the not too distant future. The other exception is to show comparisons between experimentally measured water cross sections and preliminary ENDF/B-VII thermal scattering law, S(α,β), data; although these data are strictly preliminary

  17. Systematic dependence on the slowing down environment, of nuclear lifetime measurements by DSAM

    International Nuclear Information System (INIS)

    Toulemonde, M.; Haas, F.

    1976-01-01

    The mean life of the 22Ne 3.34 MeV level, measured by DSAM (Doppler Shift Attenuation Method) at an average velocity of 0.009 c, shows large fluctuations for different slowing-down materials ranging from Li to Pb. These fluctuations are correlated with a linear dependence of the 'apparent' mean life τ on the electronic slowing-down time.

  18. Slowing down of 100 keV antiprotons in Al foils

    Directory of Open Access Journals (Sweden)

    K. Nordlund

    2018-03-01

    Using energy-degrading foils to slow down antiprotons is of interest for producing antihydrogen atoms. I consider here the slowing down of 100 keV antiprotons, which will be produced in the ELENA storage ring under construction at CERN, to energies below 10 keV. At these low energies, they are suitable for efficient antihydrogen production. I simulate the antiproton motion and slowing down in Al foils using a recently developed molecular dynamics approach. The results show that the optimal Al foil thickness for slowing the antiprotons to below 5 keV is 910 nm, and to below 10 keV is 840 nm. The lateral spreading of the transmitted antiprotons is also reported and the uncertainties are discussed. Keywords: Antiprotons, Stopping power, Slowing down, Molecular dynamics

  19. Contribution to analytical solution of neutron slowing down problem in homogeneous and heterogeneous media

    International Nuclear Information System (INIS)

    Stefanovic, D.B.

    1970-12-01

    The objective of this work is to describe a new analytical solution of the neutron slowing-down equation for infinite monoatomic media with arbitrary energy dependence of the cross section. The solution is obtained by introducing Green slowing-down functions instead of starting from the slowing-down equations directly. Previously, the methods used for calculating fission neutron spectra in the reactor cell were numerical. The proposed analytical method was used to calculate the space-energy distribution of fast neutrons and the number of neutron reactions in a thermal reactor cell. The role of the analytical method in treating neutron slowing down in reactor physics is to enable understanding of the slowing-down process and of neutron transport. The results obtained could be used as standards for testing the accuracy of approximate and practical methods.

  20. The energy deposition of slowing down particles in heterogeneous media

    International Nuclear Information System (INIS)

    Prinja, A.K.; Williams, M.M.R.

    1980-01-01

    Energy deposition by atomic particles in adjacent semi-infinite, amorphous media is described using the forward form of the Boltzmann transport equation. A transport approximation to the scattering kernel, developed elsewhere, incorporating realistic energy transfer is employed to assess the validity of the commonly used isotropic-scattering and straight-ahead approximations. Results are presented for integral energy deposition rates due to a plane, isotropic and monoenergetic source in one half-space for a range of mass ratios between 0.1 and 5.0. Integral profiles for infinite and semi-infinite media are considered and the influence of reflection for different mass ratios is evaluated. The dissimilar scattering properties of the two media induce a discontinuity at the interface in the energy deposition rate, the magnitude of which is sensitive to the source position relative to the interface. A comprehensive evaluation of the total energy deposited in the source-free medium is presented for a range of mass ratios and source positions. An interesting minimum occurs for off-interface source locations as a function of the source-medium mass ratio; its position varies with the source position but is insensitive to the other mass ratio. As a special case, energy reflection and escape coefficients for semi-infinite media are obtained, which demonstrate that the effect of a vacuum interface is insignificant for deep source locations except for large mass ratios, when reflection becomes dominant. (author)

  1. Analytical calculations of neutron slowing down and transport in the constant-cross-section problem

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1978-01-01

    Some aspects of the problem of neutron slowing down and transport in an infinite medium consisting of a single nuclide that scatters elastically and isotropically and has energy-independent cross sections were investigated. The method of singular eigenfunctions was applied to the Boltzmann equation governing the Laplace transform (with respect to the lethargy variable) of the neutron flux. A new sufficient condition for the convergence of the coefficients of the expansion of the scattering kernel in Legendre polynomials was rigorously derived for this energy-dependent problem. Formulas were obtained for the lethargy-dependent spatial moments of the scalar flux that are valid for medium to large lethargies. In deriving these formulas, use was made of the well-known connection between the spatial moments of the Laplace-transformed scalar flux and the moments of the flux in the ''eigenvalue space.'' The calculations were greatly aided by the construction of a closed general expression for these ''eigenvalue space'' moments. Extensive use was also made of the methods of combinatorial analysis and of computer evaluation, via FORMAC, of complicated sequences of manipulations. For the case of no absorption it was possible to obtain for materials of any atomic weight explicit corrections to the age-theory formulas for the spatial moments M_2n(u) of the scalar flux that are valid through terms of the order of u^-5. The evaluation of the coefficients of the powers of n, as explicit functions of the nuclear mass, is one of the end products of this investigation. In addition, an exact expression for the second spatial moment, M_2(u), valid for arbitrary (constant) absorption, was derived. It is now possible to calculate analytically and rigorously the ''age'' for the constant-cross-section problem for arbitrary (constant) absorption and nuclear mass. 5 figures, 1 table
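    For orientation, the age-theory baseline that these corrections refine can be stated compactly. This is standard Fermi age theory; the point-source normalization is an assumption of this sketch, not a statement about the paper's geometry:

```latex
% Fermi age equation for the slowing-down density q(r, tau), with the
% "age" tau defined from the diffusion coefficient D and slowing-down
% power \xi \Sigma_s as a function of lethargy u:
\nabla^2 q(\mathbf{r},\tau) = \frac{\partial q(\mathbf{r},\tau)}{\partial \tau},
\qquad
\tau(u) = \int_0^{u} \frac{D(u')}{\xi\,\Sigma_s(u')}\,\mathrm{d}u'.

% Leading spatial moment implied by age theory for a point isotropic source:
M_2(u) = \langle r^2 \rangle = 6\,\tau(u).
```

    The paper's results correct such age-theory moment formulas through terms of order u^-5 and extend them to the higher moments M_2n(u).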

  2. Analytical calculations of neutron slowing down and transport in the constant-cross-section problem

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1978-04-01

    Aspects of the problem of neutron slowing down and transport in an infinite medium consisting of a single nuclide that scatters elastically and isotropically and has energy-independent cross sections were investigated. The method of singular eigenfunctions was applied to the Boltzmann equation governing the Laplace transform (with respect to the lethargy variable) of the neutron flux. A new sufficient condition for the convergence of the coefficients of the expansion of the scattering kernel in Legendre polynomials was rigorously derived for this energy-dependent problem. Formulas were obtained for the lethargy-dependent spatial moments of the scalar flux that are valid for medium to large lethargies. Use was made of the well-known connection between the spatial moments of the Laplace-transformed scalar flux and the moments of the flux in the ''eigenvalue space.'' The calculations were aided by the construction of a closed general expression for these ''eigenvalue space'' moments. Extensive use was also made of the methods of combinatorial analysis and of computer evaluation of complicated sequences of manipulations. For the case of no absorption it was possible to obtain for materials of any atomic weight explicit corrections to the age-theory formulas for the spatial moments M_2n(u) of the scalar flux that are valid through terms of the order of u^-5. The evaluation of the coefficients of the powers of n, as explicit functions of the nuclear mass, represents one of the end products of this investigation. In addition, an exact expression for the second spatial moment, M_2(u), valid for arbitrary (constant) absorption, was derived. It is now possible to calculate analytically and rigorously the ''age'' for the constant-cross-section problem for arbitrary (constant) absorption and nuclear mass. 5 figures, 1 table

  3. Analytical calculations of neutron slowing down and transport in the constant-cross-section problem

    Energy Technology Data Exchange (ETDEWEB)

    Cacuci, D.G.

    1978-04-01

    Aspects of the problem of neutron slowing down and transport in an infinite medium consisting of a single nuclide that scatters elastically and isotropically and has energy-independent cross sections were investigated. The method of singular eigenfunctions was applied to the Boltzmann equation governing the Laplace transform (with respect to the lethargy variable) of the neutron flux. A new sufficient condition for the convergence of the coefficients of the expansion of the scattering kernel in Legendre polynomials was rigorously derived for this energy-dependent problem. Formulas were obtained for the lethargy-dependent spatial moments of the scalar flux that are valid for medium to large lethargies. Use was made of the well-known connection between the spatial moments of the Laplace-transformed scalar flux and the moments of the flux in the ''eigenvalue space.'' The calculations were aided by the construction of a closed general expression for these ''eigenvalue space'' moments. Extensive use was also made of the methods of combinatorial analysis and of computer evaluation of complicated sequences of manipulations. For the case of no absorption it was possible to obtain for materials of any atomic weight explicit corrections to the age-theory formulas for the spatial moments M_2n(u) of the scalar flux that are valid through terms of the order of u^-5. The evaluation of the coefficients of the powers of n, as explicit functions of the nuclear mass, represents one of the end products of this investigation. In addition, an exact expression for the second spatial moment, M_2(u), valid for arbitrary (constant) absorption, was derived. It is now possible to calculate analytically and rigorously the ''age'' for the constant-cross-section problem for arbitrary (constant) absorption and nuclear mass. 5 figures, 1 table.

  4. Slowing down of 100 keV antiprotons in Al foils

    Science.gov (United States)

    Nordlund, K.

    2018-03-01

    Using energy-degrading foils to slow down antiprotons is of interest for producing antihydrogen atoms. I consider here the slowing down of 100 keV antiprotons, which will be produced in the ELENA storage ring under construction at CERN, to energies below 10 keV. At these low energies, they are suitable for efficient antihydrogen production. I simulate the antiproton motion and slowing down in Al foils using a recently developed molecular dynamics approach. The results show that the optimal Al foil thickness for slowing the antiprotons to below 5 keV is 910 nm, and to below 10 keV is 840 nm. The lateral spreading of the transmitted antiprotons is also reported and the uncertainties are discussed.

  5. Light slow-down in semiconductor waveguides due to population pulsations

    DEFF Research Database (Denmark)

    Mørk, Jesper; Kjær, Rasmus; Poel, Mike van der

    2005-01-01

    This study theoretically analyzes the prospect of inducing light slow-down in a semiconductor waveguide based on coherent population oscillation. Experimental observations of the effect are also presented.

  6. PN solutions for the slowing-down and the cell calculation problems in plane geometry

    International Nuclear Information System (INIS)

    Caldeira, Alexandre David

    1999-01-01

    In this work, P_N solutions for the slowing-down and cell problems in slab geometry are developed. The main contributions of this development are: the new particular solution developed for the P_N method applied to the slowing-down problem in the multigroup model, which gives rise to a new class of polynomials, denominated Chandrasekhar generalized polynomials; the treatment of a specific situation, known as a degeneracy, arising from a particularity in the group constants; and the first application of the P_N method, for arbitrary N, to criticality calculations at the cell level reported in the literature. (author)
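    The P_N machinery rests on expanding the angular flux in Legendre polynomials and exploiting their orthogonality. The relation can be checked numerically; this is a generic sketch of the expansion basis, unrelated to the Chandrasekhar generalized polynomials introduced in the work:

```python
import numpy as np
from numpy.polynomial import Legendre
from numpy.polynomial.legendre import leggauss

# The PN method expands the angular flux in Legendre polynomials,
#   psi(x, mu) ~ sum_n (2n+1)/2 * phi_n(x) * P_n(mu),
# which works because of the orthogonality relation
#   integral_{-1}^{1} P_m(mu) P_n(mu) dmu = 2/(2n+1) * delta_{mn}.
# Verify it with 16-point Gauss-Legendre quadrature (exact for these degrees).

nodes, weights = leggauss(16)
N = 5
G = np.empty((N, N))
for m in range(N):
    for n in range(N):
        G[m, n] = np.sum(weights * Legendre.basis(m)(nodes) * Legendre.basis(n)(nodes))

print(np.round(G, 12))  # diagonal matrix with entries 2/(2n+1)
```

    Truncating the expansion at order N and taking angular moments of the transport equation is what yields the coupled P_N equations for the moments phi_n(x).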

  7. 7Li neutron-induced elastic scattering cross section measurement using a slowing-down spectrometer

    Directory of Open Access Journals (Sweden)

    Heusch M.

    2010-10-01

    A new integral measurement of the 7Li neutron-induced elastic scattering cross section was performed over a wide neutron energy range. The measurement was carried out on the LPSC-PEREN experimental facility, using a heterogeneous graphite-LiF slowing-down-time spectrometer coupled to an intense pulsed neutron generator (GENEPI-2). This method allows the measurement of the integral elastic scattering cross section in a slowing-down neutron spectrum. A Bayesian approach coupled to Monte Carlo calculations was applied to extract the natural C, 19F and 7Li elastic scattering cross sections.

  8. Simple and efficient importance sampling scheme for a tandem queue with server slow-down

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2008-01-01

    This paper considers importance sampling as a tool for rare-event simulation. The system at hand is a so-called tandem queue with slow-down, which essentially means that the server of the first queue (or: upstream queue) switches to a lower speed when the second queue (downstream queue) exceeds some threshold.
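    The rare-event setting can be illustrated with the simplest form of importance sampling, exponential tilting for a tail probability. This is a textbook sketch only; the paper's scheme is a state-dependent change of measure for the tandem queue, which this does not reproduce:

```python
import math
import random

# Estimate p = P(X > b) for X ~ Exp(1) by sampling from a tilted density
# Exp(theta), theta < 1, and reweighting each hit with the likelihood ratio
#   w(x) = f(x) / g(x) = exp(-x) / (theta * exp(-theta * x)).
# b, theta, n and the seed are illustrative choices.

def is_estimate(b=20.0, theta=0.05, n=100_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(theta)  # tilted proposal, mean 1/theta = b
        if x > b:
            total += math.exp(-(1.0 - theta) * x) / theta
    return total / n

estimate = is_estimate()
print(estimate)  # exact answer: exp(-20) ~ 2.06e-9
```

    Naive sampling from Exp(1) would essentially never see the event at b = 20 (p ~ 2e-9), whereas the tilted sampler hits it on a constant fraction of draws, giving a low-variance unbiased estimate.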

  9. Slowing-down of heavy ions in a fusible D-3He mixture

    International Nuclear Information System (INIS)

    Cocu, Francis; Uzureau, Jose; Lachkar, Jean.

    1982-01-01

    First experimental results on the slowing-down of heavy ions (16O, 63Cu, 109Ag) at energies of approximately 1 MeV/A in a fusible D-3He mixture indicate that the higher the projectile mass, the greater the fusion reaction rate. [fr]

  10. Slowing down and straggling of protons and heavy ions in matter

    International Nuclear Information System (INIS)

    Aernsbergen, L.M. van.

    1986-01-01

    The Doppler Shift Attenuation (DSA) method is widely used to measure lifetimes of nuclear states. However, many of the lifetimes resulting from DSA measurements display large variations, which are caused by an insufficient knowledge of the slowing-down processes of recoiling nuclei. The measurement of 'ranges' is an often-used method to study these slowing-down processes. In this kind of measurement, the distributions of implanted ions are determined, for example, by the method of Rutherford backscattering or from the yield curve of a resonant nuclear reaction. In this thesis, research on the energy-loss processes of protons and Si ions in aluminium is presented. For the measurements on the protons themselves, the so-called Resonance Shift method, which had only been used occasionally before, has been improved. For the measurements on Si ions, a new method has been developed, called the Transmission Doppler Shift Attenuation (TDSA) method. (Auth.)

  11. The Solution of a Velocity-Dependent Slowing-Down Problem Using Case's Eigenfunction Expansion

    Energy Technology Data Exchange (ETDEWEB)

    Claesson, A

    1964-11-15

    The slowing-down of neutrons in a hydrogenous moderator is calculated assuming a plane source of monoenergetic neutrons. The scattering is regarded as spherically symmetric, but it is shown that a generalization to anisotropy is possible. The cross-section is assumed to be constant. The virgin neutrons are separated out, and it follows that the distribution of the remaining neutrons can be obtained by using an expansion in the eigenfunctions given by Case for the velocity-independent problem.

  12. Measurement of the Time Dependence of Neutron Slowing-Down and Thermalization in Heavy Water

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, E

    1966-03-15

    The behaviour of neutrons during their slowing-down and thermalization in heavy water has been followed on the time scale by measurements of the time-dependent rate of reaction between the flux and the three spectrum indicators indium, cadmium and gadolinium. The space dependence of the reaction rate curves has also been studied. The time-dependent density at 1.46 eV is well reproduced by a function given by von Dardel, and a time for the maximum density of 7.1 ± 0.3 µs has been obtained for this energy in deuterium gas, in agreement with the theoretical value of 7.2 µs. The spatial variation of this time is in accord with the calculations by Claesson. The slowing-down time to 0.2 eV has been found to be 16.3 ± 2.4 µs. The approach to the equilibrium spectrum takes place with a time constant of 33 ± 4 µs, and equilibrium has been established after about 200 µs. Comparison of the measured curves for cadmium and gadolinium with multigroup calculations of the time-dependent flux and reaction rate shows the superiority of the scattering models for heavy water of Butler and of Brown and St. John over the mass-2 gas model. The experiment has been supplemented with Monte Carlo calculations of the slowing-down time.
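    The two reported numbers, a relaxation time constant of about 33 µs and equilibrium "established after about 200 µs", are mutually consistent under a single-exponential relaxation model (a simplifying assumption of this sketch, not the measured spectral dynamics):

```python
import math

# Under single-exponential relaxation toward the equilibrium spectrum,
# the remaining deviation from equilibrium after time t is exp(-t / tau).
# tau = 33 us is the time constant quoted in the abstract.

TAU_US = 33.0

def remaining_deviation(t_us):
    return math.exp(-t_us / TAU_US)

print(remaining_deviation(200.0))  # roughly 0.2% residual deviation
```

    After 200 µs the residual deviation is only a few tenths of a percent, i.e. well below the experimental sensitivity, which is why the spectrum can be called equilibrated at that point.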

  13. Numerical studies of fast ion slowing down rates in cool magnetized plasma using LSP

    Science.gov (United States)

    Evans, Eugene S.; Kolmes, Elijah; Cohen, Samuel A.; Rognlien, Tom; Cohen, Bruce; Meier, Eric; Welch, Dale R.

    2016-10-01

    In MFE devices, rapid transport of fusion products from the core into the scrape-off layer (SOL) could perform the dual roles of energy and ash removal. The first-orbit trajectories of most fusion products from small field-reversed configuration (FRC) devices will traverse the SOL, allowing those particles to deposit their energy in the SOL and be exhausted along the open field lines. Thus, the fast ion slowing-down time should affect the energy balance of an FRC reactor and its neutron emissions. However, the dynamics of fast ion energy loss processes under the conditions expected in the FRC SOL are not well characterized. We use the LSP particle-in-cell code to examine the effects of SOL density and background B-field on the slowing-down time of fast ions in a cool plasma. As we use explicit algorithms, these simulations must spatially resolve both ρe and λDe, as well as temporally resolve both Ωe and ωpe, increasing computation time. Scaling studies of the fast ion charge (Z) and background plasma density are in good agreement with unmagnetized slowing-down theory. Notably, Z-scaling represents a viable way to dramatically reduce the required CPU time for each simulation. This work was supported, in part, by DOE Contract Number DE-AC02-09CH11466.
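    The CPU-saving Z-scaling trick follows from unmagnetized slowing-down theory, where the Coulomb drag on a test ion grows as Z^2 and the slowing-down time therefore shrinks as 1/Z^2 (tau_ref below is an assumed illustrative value, not a simulation result):

```python
# Classical Coulomb drag on a test ion of charge Z*e scales as Z**2, so the
# slowing-down time scales as 1 / Z**2. Running a simulation with an
# artificially high Z shortens the physical time that must be simulated;
# the result is rescaled back to the physical Z afterwards.

TAU_REF_US = 100.0  # slowing-down time of a Z = 1 ion (assumed value)

def slowing_down_time_us(z, tau_ref_us=TAU_REF_US):
    return tau_ref_us / z ** 2

speedup = slowing_down_time_us(1) / slowing_down_time_us(4)
print(speedup)  # a Z = 4 test ion slows down 16x faster
```

    Since the explicit PIC time step is fixed by Ωe and ωpe rather than by the fast ion, cutting the simulated slowing-down time by Z^2 cuts the number of time steps by the same factor.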

  14. Measurement of the Time Dependence of Neutron Slowing-Down and Thermalization in Heavy Water

    International Nuclear Information System (INIS)

    Moeller, E.

    1966-03-01

    The behaviour of neutrons during their slowing-down and thermalization in heavy water has been followed on the time scale by measurements of the time-dependent rate of reaction between the flux and the three spectrum indicators indium, cadmium and gadolinium. The space dependence of the reaction rate curves has also been studied. The time-dependent density at 1.46 eV is well reproduced by a function given by von Dardel, and a time for the maximum density of 7.1 ± 0.3 μs has been obtained for this energy in deuterium gas, in agreement with the theoretical value of 7.2 μs. The spatial variation of this time is in accord with the calculations by Claesson. The slowing-down time to 0.2 eV has been found to be 16.3 ± 2.4 μs. The approach to the equilibrium spectrum takes place with a time constant of 33 ± 4 μs, and equilibrium has been established after about 200 μs. Comparison of the measured curves for cadmium and gadolinium with multigroup calculations of the time-dependent flux and reaction rate shows the superiority of the scattering models for heavy water of Butler and of Brown and St. John over the mass-2 gas model. The experiment has been supplemented with Monte Carlo calculations of the slowing-down time

  15. A New Method for Predicting the Penetration and Slowing-Down of Neutrons in Reactor Shields

    International Nuclear Information System (INIS)

    Hjaerne, L.; Leimdoerfer, M.

    1965-05-01

    A new approach is presented in the formulation of removal-diffusion theory. The 'removal cross-section' is redefined, and the slowing-down between the multigroup diffusion equations is treated with a complete energy transfer matrix rather than in an age-theory approximation. The method based on the new approach contains an adjustable parameter. Examples of neutron spectra and thermal flux penetrations are given for a number of differing shield configurations, and the results compare favorably with experiments and Moments Method calculations
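    Treating slowing-down with a complete energy-transfer matrix, as advocated here, amounts to a lower-triangular multigroup balance that can be solved group by group by forward substitution. The sketch below is an illustrative infinite-medium balance with made-up numbers, not the report's removal-diffusion equations:

```python
import numpy as np

# Infinite-medium multigroup slowing-down with a full downscatter matrix.
# Groups ordered fast -> slow; transfer[g, g'] moves neutrons from group g'
# into group g (g' < g). Balance per group:
#   removal[g] * phi[g] = source[g] + sum_{g' < g} transfer[g, g'] * phi[g']

removal = np.array([1.0, 0.8, 0.6, 0.5])   # removal cross sections (made up)
transfer = np.array([                       # downscatter matrix (made up)
    [0.0, 0.0, 0.0, 0.0],
    [0.4, 0.0, 0.0, 0.0],
    [0.3, 0.5, 0.0, 0.0],
    [0.2, 0.2, 0.4, 0.0],
])
source = np.array([1.0, 0.0, 0.0, 0.0])     # external source in the top group

phi = np.zeros(4)
for g in range(4):                          # forward substitution over groups
    inscatter = transfer[g, :g] @ phi[:g]
    phi[g] = (source[g] + inscatter) / removal[g]

print(phi)
```

    An age-theory treatment would instead couple each group only to its neighbor; the full matrix lets a fast group feed every lower group directly, which is the point of the report's formulation.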

  16. Design optimization of radiation shielding structure for lead slowing-down spectrometer system

    OpenAIRE

    Kim, Jeong Dong; Ahn, Sangjoon; Lee, Yong Deok; Park, Chang Je

    2015-01-01

    A lead slowing-down spectrometer (LSDS) system is a promising nondestructive assay technique that enables a quantitative measurement of the isotopic contents of major fissile isotopes in spent nuclear fuel and its pyroprocessing counterparts, such as 235U, 239Pu, 241Pu, and, potentially, minor actinides. The LSDS system currently under development at the Korea Atomic Energy Research Institute (Daejeon, Korea) is planned to utilize a high-flux (>10^12 n/cm²·s) neutron source comprised of a high...

  17. Critical slowing down in driven-dissipative Bose-Hubbard lattices

    Science.gov (United States)

    Vicentini, Filippo; Minganti, Fabrizio; Rota, Riccardo; Orso, Giuliano; Ciuti, Cristiano

    2018-01-01

    We explore theoretically the dynamical properties of a first-order dissipative phase transition in coherently driven Bose-Hubbard systems, describing, e.g., lattices of coupled nonlinear optical cavities. Via stochastic trajectory calculations based on the truncated Wigner approximation, we investigate the dynamical behavior as a function of system size for one-dimensional (1D) and 2D square lattices in the regime where mean-field theory predicts nonlinear bistability. We show that a critical slowing down emerges for an increasing number of sites in 2D square lattices, while it is absent in 1D arrays. We characterize the peculiar properties of the collective phases in the critical region.

  18. A New Method for Predicting the Penetration and Slowing-Down of Neutrons in Reactor Shields

    Energy Technology Data Exchange (ETDEWEB)

    Hjaerne, L; Leimdoerfer, M

    1965-05-15

    A new approach is presented in the formulation of removal-diffusion theory. The 'removal cross-section' is redefined and the slowing-down between the multigroup diffusion equations is treated with a complete energy transfer matrix rather than in an age theory approximation. The method, based on the new approach, contains an adjustable parameter. Examples of neutron spectra and thermal flux penetrations are given in a number of differing shield configurations, and the results compare favorably with experiments and Moments Method calculations.

  19. Measurements with the high flux lead slowing-down spectrometer at LANL

    International Nuclear Information System (INIS)

    Danon, Y.; Romano, C.; Thompson, J.; Watson, T.; Haight, R.C.; Wender, S.A.; Vieira, D.J.; Bond, E.; Wilhelmy, J.B.; O'Donnell, J.M.; Michaudon, A.; Bredeweg, T.A.; Schurman, T.; Rochman, D.; Granier, T.; Ethvignot, T.; Taieb, J.; Becker, J.A.

    2007-01-01

    A Lead Slowing-Down Spectrometer (LSDS) was recently installed at LANL [D. Rochman, R.C. Haight, J.M. O'Donnell, A. Michaudon, S.A. Wender, D.J. Vieira, E.M. Bond, T.A. Bredeweg, A. Kronenberg, J.B. Wilhelmy, T. Ethvignot, T. Granier, M. Petit, Y. Danon, Characteristics of a lead slowing-down spectrometer coupled to the LANSCE accelerator, Nucl. Instr. and Meth. A 550 (2005) 397]. The LSDS comprises a cube of pure lead 1.2 m on a side, with a spallation pulsed neutron source at its center. The LSDS is driven by 800 MeV protons with a time-averaged current of up to 1 μA, pulse widths of 0.05-0.25 μs and a repetition rate of 20-40 Hz. Spallation neutrons are created by directing the proton beam into an air-cooled tungsten target in the center of the lead cube. The neutrons slow down by scattering interactions with the lead, enabling measurements of neutron-induced reaction rates as a function of the slowing-down time, which correlates to neutron energy. The advantage of an LSDS as a neutron spectrometer is that the neutron flux is 3-4 orders of magnitude higher than in a standard time-of-flight experiment at the equivalent flight path, 5.6 m. The effective energy range is 0.1 eV to 100 keV with a typical energy resolution of 30% from 1 eV to 10 keV. The average neutron flux between 1 and 10 keV is about 1.7 x 10⁹ n/cm²/s/μA. This high flux makes the LSDS an important tool for neutron-induced cross section measurements of ultra-small samples (nanograms) or of samples with very low cross sections. The LSDS at LANL was initially built in order to measure the fission cross section of the short-lived metastable isotope of U-235; however, it can also be used to measure (n,α) and (n,p) reactions. Fission cross section measurements were made with samples of ²³⁵U, ²³⁶U, ²³⁸U and ²³⁹Pu. The smallest sample measured was 10 ng of ²³⁹Pu. Measurement of the (n,α) cross section with 760 ng of ⁶Li was also demonstrated. Possible future cross section
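
    The time-to-energy correlation mentioned in this record can be sketched numerically. In an LSDS the neutron energy follows approximately E ≈ K/(t + t₀)²; the constant K (here ~165 keV·μs²) and offset t₀ below are illustrative literature-style values for lead, not the calibration of the LANL instrument:

```python
def energy_from_time(t_us, K=1.65e5, t0=0.3):
    """Neutron energy (eV) at slowing-down time t_us (in microseconds).

    Standard LSDS relation E ~ K / (t + t0)^2.  K (eV*us^2) and t0 (us)
    are assumed, illustrative constants for lead, not a measured calibration.
    """
    return K / (t_us + t0) ** 2
```

    With these constants, a neutron detected about 40 μs after the pulse corresponds to roughly 100 eV, and the quoted 0.1 eV to 100 keV range maps onto slowing-down times from about a microsecond to about a millisecond.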

  20. Slowing-down of non-relativistic ions in a hot dense plasma

    International Nuclear Information System (INIS)

    Maynard, G.

    1982-01-01

    The parameter γ (the action of the free electrons of the plasma) was investigated: calculation of the mean value of γ for a great number of monokinetic incident ions and of the dispersion about this mean value, using the random phase approximation; and calculation of the dielectric function. The contribution of the plasma ions to the stopping power was studied and the description of the ion-plasma interaction improved. The slowing-down of an ion at large distance by the bound electrons of an atom was calculated. This study is applied to the ion-plasma interaction in ion-beam inertial confinement. (in French)

  1. Plasma heating and confinement in toroidal magnetic bottle by means of microwave slowing-down structure

    International Nuclear Information System (INIS)

    Datlov, J.; Klima, R.; Kopecky, V.; Musil, J.; Zacek, F.

    1977-01-01

    An invention is described concerning high-frequency plasma heating and confinement in toroidal magnetic vessels. Microwave energy is applied to the plasma via one or more slowing-down structures exciting low phase velocity waves whose energy may be efficiently absorbed by plasma electrons. The wave momentum transfer results in a toroidal electrical current whose magnetic field together with an external magnetic field ensure plasma confinement. The low-frequency modulation of microwave energy may also be used for heating the ion plasma component. (J.U.)

  2. Geant4-DNA simulation of electron slowing-down spectra in liquid water

    Energy Technology Data Exchange (ETDEWEB)

    Incerti, S., E-mail: sebastien.incerti@tdt.edu.vn [Division of Nuclear Physics, Ton Duc Thang University, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam); Faculty of Applied Sciences, Ton Duc Thang University, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam); Univ. Bordeaux, CENBG, UMR 5797, F-33170, Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Kyriakou, I. [Medical Physics Laboratory, University of Ioannina Medical School, 45110 Ioannina (Greece); Tran, H.N. [Division of Nuclear Physics, Ton Duc Thang University, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam); Faculty of Applied Sciences, Ton Duc Thang University, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam)

    2017-04-15

    This work presents the simulation of monoenergetic electron slowing-down spectra in liquid water by the Geant4-DNA extension of the Geant4 Monte Carlo toolkit (release 10.2p01). These spectra are simulated for several incident energies using the most recent Geant4-DNA physics models, and they are compared to literature data. The influence of Auger electron production is discussed. For the first time, a dedicated Geant4-DNA example allowing such simulations is described and is provided to Geant4 users, allowing further verification of Geant4-DNA track structure simulation capabilities.

  3. Comparison of analytical transport and stochastic solutions for neutron slowing down in an infinite medium

    International Nuclear Information System (INIS)

    Jahshan, S.N.; Wemple, C.A.; Ganapol, B.D.

    1993-01-01

    A comparison of the numerical solutions of the transport equation describing the steady neutron slowing down in an infinite medium with constant cross sections is made with stochastic solutions obtained from tracking successive neutron histories in the same medium. The transport equation solution is obtained using a numerical Laplace transform inversion algorithm. The basis for the algorithm is an evaluation of the Bromwich integral without analytical continuation. Neither the transport nor the stochastic solution is limited in the number of scattering species allowed. The medium may contain an absorption component as well. (orig.)
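
    The stochastic half of such a comparison is simple to sketch for the special case of slowing down on hydrogen, where the post-collision energy is uniform on (0, E) and the mean lethargy gain per collision is ξ = 1; the textbook expectation is that the mean number of collisions to slow from E₀ to E_th is ln(E₀/E_th) + 1. A toy history generator (an illustration under those assumptions, not the code compared in the paper):

```python
import math
import random

def collisions_to_thermal(e0_ev=2.0e6, eth_ev=1.0):
    """Count elastic collisions on hydrogen until the neutron falls below eth_ev."""
    e, n = e0_ev, 0
    while e >= eth_ev:
        e *= random.random()  # isotropic CM scattering on H: E' is uniform on (0, E)
        n += 1
    return n

random.seed(1)
mean = sum(collisions_to_thermal() for _ in range(20000)) / 20000.0
# mean is close to ln(2e6) + 1, i.e. about 15.5 collisions
```

    With more scattering species or absorption, as in the paper, each history also samples the target nuclide and a survival probability per collision; the comparison against the Laplace-inverted analytical solution proceeds the same way.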

  4. Critical slowing down associated with regime shifts in the US housing market

    Science.gov (United States)

    Tan, James Peng Lung; Cheong, Siew Ann

    2014-02-01

    Complex systems are described by a large number of variables with strong and nonlinear interactions. Such systems frequently undergo regime shifts. Combining insights from bifurcation theory in nonlinear dynamics and the theory of critical transitions in statistical physics, we know that critical slowing down and critical fluctuations occur close to such regime shifts. In this paper, we show how universal precursors expected from such critical transitions can be used to forecast regime shifts in the US housing market. In the housing permit, volume of homes sold and percentage of homes sold for gain data, we detected strong early warning signals associated with a sequence of coupled regime shifts, starting from a Subprime Mortgage Loans transition in 2003-2004 and ending with the Subprime Crisis in 2007-2008. Weaker signals of critical slowing down were also detected in the US housing market data during the 1997-1998 Asian Financial Crisis and the 2000-2001 Technology Bubble Crisis. Backed by various macroeconomic data, we propose a scenario whereby hot money flowing back into the US during the Asian Financial Crisis fueled the Technology Bubble. When the Technology Bubble collapsed in 2000-2001, the hot money then flowed into the US housing market, triggering the Subprime Mortgage Loans transition in 2003-2004 and an ensuing sequence of transitions. We show how this sequence of coupled transitions unfolded in space and in time over the whole of the US.

  5. Early warning of climate tipping points from critical slowing down: comparing methods to improve robustness

    Science.gov (United States)

    Lenton, T. M.; Livina, V. N.; Dakos, V.; Van Nes, E. H.; Scheffer, M.

    2012-01-01

    We address whether robust early warning signals can, in principle, be provided before a climate tipping point is reached, focusing on methods that seek to detect critical slowing down as a precursor of bifurcation. As a test bed, six previously analysed datasets are reconsidered, three palaeoclimate records approaching abrupt transitions at the end of the last ice age and three models of varying complexity forced through a collapse of the Atlantic thermohaline circulation. Approaches based on examining the lag-1 autocorrelation function or on detrended fluctuation analysis are applied together and compared. The effects of aggregating the data, detrending method, sliding window length and filtering bandwidth are examined. Robust indicators of critical slowing down are found prior to the abrupt warming event at the end of the Younger Dryas, but the indicators are less clear prior to the Bølling-Allerød warming, or glacial termination in Antarctica. Early warnings of thermohaline circulation collapse can be masked by inter-annual variability driven by atmospheric dynamics. However, rapidly decaying modes can be successfully filtered out by using a long bandwidth or by aggregating data. The two methods have complementary strengths and weaknesses and we recommend applying them together to improve the robustness of early warnings. PMID:22291229
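
    The lag-1 autocorrelation indicator examined above is easy to reproduce; a minimal sliding-window estimator (an illustration only, omitting the detrending, aggregation and filtering choices the paper compares):

```python
import numpy as np

def lag1_autocorrelation(x):
    """Lag-1 autocorrelation of a 1-D series; critical slowing down pushes it toward 1."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def sliding_ac1(x, window):
    """Indicator series over a sliding window: the basic early-warning signal."""
    return [lag1_autocorrelation(x[i:i + window])
            for i in range(len(x) - window + 1)]
```

    A sustained rise in the sliding indicator ahead of a transition is the early warning; detrended fluctuation analysis plays the complementary role described in the abstract.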

  6. Q_N approximation for slowing-down in fast reactors

    International Nuclear Information System (INIS)

    Rocca-Volmerange, Brigitte.

    1976-05-01

    An accurate and simple determination of the neutron energy spectra in fast reactors poses several problems. The slowing-down models (Fermi, Wigner, Goertzel-Greuling...), which are different forms of the approximation of order N=0, may prove inaccurate in spite of recent improvements. A new method of approximation is presented which turns out to be a method of higher order: the Q_N method. It is characterized by a rapid convergence with respect to the order N, by the use of some global parameters to represent the slowing-down, and by the expression of the Boltzmann integral equation in a differential formalism. Numerous tests verify that, for the order N=2 or 3, the method gives precision equivalent to that of the multigroup numerical integration for the spectra with greatly reduced calculational effort. Furthermore, since the Q_N expressions are a kind of synthesis method, they allow calculation of the spatial Green's function, or the use of collision probabilities to find the flux. Both possibilities have been introduced into existing reactor codes: EXCALIBUR, TRALOR, RE MINEUR... Some applications to multi-zone media (core, blanket, reflector of the Masurca pile and exponential slabs) are presented in the isotropic collision approximation. The case of linearly anisotropic collisions is theoretically resolved. (in French)

  7. Development for fissile assay in recycled fuel using lead slowing down spectrometer

    International Nuclear Information System (INIS)

    Lee, Yong Deok; Je Park, C.; Kim, Ho-Dong; Song, Kee Chan

    2013-01-01

    A future nuclear energy system is under development to turn spent fuel produced by PWRs into fuel for an SFR (Sodium Fast Reactor) through a pyrochemical process. Knowledge of the isotopic fissile content of the new fuel is very important for fuel safety. A lead slowing-down spectrometer (LSDS) is under development to analyze the fissile material content (²³⁹Pu, ²⁴¹Pu and ²³⁵U) of the fuel. The LSDS requires a neutron source; the neutrons are slowed down during their passage through a lead medium, finally enter the fuel, and induce fission reactions that are analyzed so that the isotopic content of the fuel can be determined. The issue is that the spent fuel emits intense gamma rays and neutrons by spontaneous fission. The threshold fission detector screens the prompt fast fission neutrons, and as a result the LSDS is not influenced by the high-level radiation background. The energy resolution of the LSDS is good in the range 0.1 eV to 1 keV. This is also the range in which the fission reaction is the most discriminating for the considered fissile isotopes. An electron accelerator has been chosen to produce neutrons with an adequate target through (e⁻,γ)(γ,n) reactions.

  8. Measurement of the Slowing-Down and Thermalization Time of Neutrons in Water

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, E [AB Atomenergi, Nykoeping (Sweden); Sjoestrand, N G [Chalmers Univ. of Technology, Goeteborg (Sweden)

    1963-11-15

    The experimental equipment for the study of the time behaviour of neutrons during slowing-down and thermalization in a moderator, using a pulsed van de Graaff accelerator as a neutron source, is described. Information on the change with time of the neutron spectrum is obtained from its reaction with spectrum indicators, the reaction rate being observed by the detection of capture gamma rays. The time resolution may be chosen in the range 0.01 to 5 μs. Measurements have been made for water with cadmium, gadolinium and samarium as indicators dissolved in the medium. A slowing-down time to 0.2 eV of 2.7 ± 0.4 μs and a total thermalization time of 25-30 μs were obtained. From 9 μs after the injection, the results are well described by the assumption of the flux as a Maxwell distribution cooling down to the moderator temperature with a thermalization time constant of 4.1 ± 0.4 μs.
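
    The cooling described in the last sentence can be written down directly: the effective neutron temperature relaxes exponentially toward the moderator temperature with the fitted time constant τ ≈ 4.1 μs. The starting temperature below is an assumed value for illustration, not a number from the measurement:

```python
import math

def neutron_temperature(t_us, T0=600.0, T_mod=293.0, tau=4.1):
    """Effective neutron temperature (K) a time t_us (us) after injection.

    Exponential relaxation toward the moderator temperature with the
    measured tau = 4.1 us; T0 = 600 K is an assumed initial value.
    """
    return T_mod + (T0 - T_mod) * math.exp(-t_us / tau)
```

    After 25-30 μs, i.e. six to seven time constants, the excess temperature has decayed to below 1% of its initial value, consistent with the total thermalization time quoted above.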

  9. Ballooning mode instability due to slowed-down alpha-particles and associated transport

    International Nuclear Information System (INIS)

    Itoh, Sanae; Itoh, Kimitaka; Tuda, Takashi; Tokuda, Shinji.

    1982-01-01

    The microscopic stability of a tokamak plasma containing slowed-down alpha-particles, and the anomalous fluxes enhanced by the fluctuations, were studied. A local Maxwellian distribution with density inhomogeneity was chosen as the equilibrium distribution of electrons, ions and alpha-particles. In the zero-beta limit, two branches of eigenmodes, which are electrostatic, were obtained. The electrostatic ballooning mode becomes unstable through the grad-B drift of particles in the toroidal plasma. It should be noted that there is no critical alpha-particle density and no critical beta-value for the onset of the instability in a toroidal plasma, even in the presence of magnetic shear. When the beta-value exceeds the critical beta-value of the MHD ballooning mode, the growth rate approaches that of the MHD mode, and the mode structure becomes very close to that of the MHD mode. The unstable mode in the toroidal plasma is the ballooning mode, which is unstable for all plasma parameters. The associated cross-field transport by the ballooning mode is considered. It was found that if the distribution function is assumed to be the birth distribution, the loss rate is very slow, slower than the slowing-down time. The effect of alpha-particles on the large-scale MHD activity of the plasma is discussed. (Kato, T.)

  10. Utilizing the slowing-down-time technique for benchmarking neutron thermalization in graphite

    International Nuclear Information System (INIS)

    Zhou, T.; Hawari, A. I.; Wehring, B. W.

    2007-01-01

    Graphite is the moderator/reflector in the Very High Temperature Reactor (VHTR) concept of Generation IV reactors. As a thermal reactor, the prediction of the thermal neutron spectrum in the VHTR is directly dependent on the accuracy of the thermal neutron scattering libraries of graphite. In recent years, work has been on-going to benchmark and validate neutron thermalization in 'reactor grade' graphite. Monte Carlo simulations using the MCNP5 code were used to design a pulsed neutron slowing-down-time experiment and to investigate neutron slowing down and thermalization in graphite at temperatures relevant to VHTR operation. The unique aspect of this experiment is its ability to observe the behavior of neutrons throughout an energy range extending from the source energy to energies below 0.1 eV. In its current form, the experiment is designed and implemented at the Oak Ridge Electron Linear Accelerator (ORELA). Consequently, ORELA neutron pulses are injected into a 70 cm x 70 cm x 70 cm graphite pile. A furnace system that surrounds the pile and is capable of heating the graphite to a centerline temperature of 1200 K has been designed and built. A system based on U-235 fission chambers and Li-6 scintillation detectors surrounds the pile. This system is coupled to multichannel scaling instrumentation and is designed for the detection of leakage neutrons as a function of the slowing-down-time (i.e., time after the pulse). To ensure the accuracy of the experiment, careful assessment was performed of the impact of background noise (due to room return neutrons) and pulse-to-pulse overlap on the measurement. Therefore, the entire setup is surrounded by borated polyethylene shields and the experiment is performed using a source pulse frequency of nearly 130 Hz. As the basis for the benchmark, the calculated time dependent reaction rates in the detectors (using the MCNP code and its associated ENDF-B/VI thermal neutron scattering libraries) are compared to measured

  11. The effect of straggling on the slowing down of neutrons in radiation protection

    International Nuclear Information System (INIS)

    Mostacci, D.; Molinari, V.; Teodori, F.; Pesic, M.

    1999-01-01

    All those techniques developed to describe neutron transport that rely on the flux isotropy conditions prevailing within the reactor core can be of no help in the study of neutron beams. Two main problems must be solved in investigating beams: determining the relevant cross-sections and solving the transport equation. Often in addressing neutron radiation protection problems, the available cross-section data are extremely detailed whereas the transport equations used are rather unrefined, making wide use of the continuous slowing down approximation to calculate stopping powers (e.g., Bethe's expressions). In this paper a simple approach to calculating stopping power and range is presented, that takes into account the effect of neutron energy straggling. Comparison with MCNP results is also presented. (author)
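
    In the continuous slowing down approximation the range follows from the stopping power alone, R(E₀) = ∫ dE / S(E) from E_min to E₀. A minimal numerical version of that integral, with a purely hypothetical power-law stopping power standing in for Bethe-type tabulated data:

```python
def csda_range(e0, e_min, stopping_power, steps=10000):
    """Trapezoidal evaluation of the CSDA range R = integral of dE / S(E)."""
    h = (e0 - e_min) / steps
    total = 0.5 * (1.0 / stopping_power(e_min) + 1.0 / stopping_power(e0))
    for i in range(1, steps):
        total += 1.0 / stopping_power(e_min + i * h)
    return total * h

# Hypothetical stopping power for illustration only: S(E) = sqrt(E),
# for which R = 2 * (sqrt(e0) - sqrt(e_min)) analytically.
```

    Straggling, the subject of this paper, is precisely what such a deterministic range estimate ignores: it assigns every particle of a given energy the same range.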

  12. Measurement of fission cross section with pure Am-243 sample using lead slowing-down spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Katsuhei; Yamamoto, Shuji; Kai, T.; Fujita, Yoshiaki; Yamamoto, Hideki; Kimura, Itsuro [Kyoto Univ. (Japan); Shinohara, Nobuo

    1997-03-01

    By making use of back-to-back type double fission chambers and a lead slowing-down spectrometer coupled to an electron linear accelerator, the fission cross section for the ²⁴³Am(n,f) reaction has been measured relative to that for the ²³⁵U(n,f) reaction in the energy range from 0.1 eV to 10 keV. The measured result was compared with the evaluated nuclear data appearing in ENDF/B-VI and JENDL-3.2, which were broadened by the energy resolution function of the spectrometer. General agreement was seen between the evaluated data and the measurement, except that the ENDF/B-VI data were lower in the range from 15 to 60 eV and the JENDL-3.2 data seemed to be lower above 100 eV. (author)

  13. Exponential discontinuous numerical scheme for electron transport in the continuous slowing down approximation

    International Nuclear Information System (INIS)

    Prinja, A.K.

    1997-01-01

    A nonlinear discretization scheme in space and energy, based on the recently developed exponential discontinuous method, is applied to continuous slowing down dominated electron transport (i.e., in the absence of scattering). Numerical results for dose and charge deposition are obtained and compared against results from the ONELD and ONEBFP codes, and against exact results from an adjoint Monte Carlo code. It is found that although the exponential discontinuous scheme yields strictly positive and monotonic solutions, the dose profile is considerably straggled when compared to results from the linear codes. On the other hand, the linear schemes produce negative results which, furthermore, do not damp effectively in some cases. A general conclusion is that while yielding strictly positive solutions, the exponential discontinuous method does not show the same coarse-cell accuracy for charged particle transport as was apparent for neutral particle transport problems

  14. Episodic memory deficits slow down the dynamics of cognitive procedural learning in normal ageing

    Science.gov (United States)

    Beaunieux, Hélène; Hubert, Valérie; Pitel, Anne Lise; Desgranges, Béatrice; Eustache, Francis

    2009-01-01

    Cognitive procedural learning is characterized by three phases, each involving distinct processes. Considering the implication of the episodic memory in the first cognitive stage, the impairment of this memory system might be responsible for a slowing down of the cognitive procedural learning dynamics in the course of aging. Performances of massed cognitive procedural learning were evaluated in older and younger participants using the Tower of Toronto task. Nonverbal intelligence and psychomotor abilities were used to analyze procedural dynamics, while episodic memory and working memory were assessed to measure their respective contributions to learning strategies. This experiment showed that older participants did not spontaneously invoke episodic memory and presented a slowdown in the cognitive procedural learning associated with a late involvement of working memory. These findings suggest that the slowdown in the cognitive procedural learning may be linked with the implementation of different learning strategies less involving episodic memory in older subjects. PMID:18654928

  15. Critical slowing down of spin fluctuations in BiFeO3

    International Nuclear Information System (INIS)

    Scott, J F; Singh, M K; Katiyar, R S

    2008-01-01

    In earlier work we reported the discovery of phase transitions in BiFeO₃ evidenced by divergences in the magnon light-scattering cross-sections at 140 and 201 K (Singh et al 2008 J. Phys.: Condens. Matter 20 252203) and fitted these intensity data to critical exponents α = 0.06 and α' = 0.10 (Scott et al 2008 J. Phys.: Condens. Matter 20 322203), under the assumption that the transitions are strongly magnetoelastic (Redfern et al 2008, in press) and couple to strain divergences through the Pippard relationship (Pippard 1956 Phil. Mag. 1 473). In the present paper we extend those criticality studies to examine the magnon linewidths, which exhibit critical slowing down (and hence linewidth narrowing) of spin fluctuations. The linewidth data near the two transitions are qualitatively different and we cannot reliably extract a critical exponent ν, although the mean field value ν = 1/2 gives a good fit near the lower transition.

  16. ACTIV, Sandwich Detector Activity from In-Pile Slowing-Down Spectra Experiment

    International Nuclear Information System (INIS)

    Bozzi, L. and others

    1978-01-01

    1 - Nature of physical problem solved: Calculates the activities of a sandwich detector, to be used for in-pile measurements in slowing-down spectra below a few keV. The effect of scattering with energy degradation in the filter and in the detectors has been included to a first approximation. 2 - Method of solution: An iterative procedure is used: the calculation starts with a flux guess in which one assumes that each measured activity difference depends on the principal resonance only. The secondary resonance contribution is computed through the iterative process. For self-shielded cross-section calculations the model of Pearlstein and Weinstock (ref. 3) is used. The neutron spectrum can optionally be constant or 1/E inside each finite energy group relative to the resonance considered. 3 - Restrictions on the complexity of the problem: Maximum number of energy groups: 60

  17. ROLAIDS-CPM, 1-D Slowing-Down by Collision Problems Method

    International Nuclear Information System (INIS)

    De Kruijf, W.J.M.

    2002-01-01

    1 - Description of program or function: ROLAIDS-CPM is based on the AMPX-module ROLAIDS (PSR-315). CPM stands for Collision Probability Method. ROLAIDS is a one-dimensional slowing-down code which uses the interface currents method. ROLAIDS-CPM does not need the assumption of cosine currents at the interface of the zones. Extensions: collision probability method for slab and cylinder geometry; different temperatures for a nuclide can be used; flat lethargy source can be modelled. 2 - Method of solution: Collision probabilities in cylindrical geometry are calculated according to the Carlvik Method. This means that a Gauss integration is used. 3 - Restrictions on the complexity of the problem: The maximum number of zones is 30 for the collision probability method

  18. Experimental and theoretical study of heavy ion slowing down in solid targets

    International Nuclear Information System (INIS)

    Mehana, A.

    1993-06-01

    Heavy ion energy losses in C, Al, Cu, Ag, Ta and Au solid targets have been measured at high energy (0.2 to 5 MeV/u), using the backward secondary ion technique, and at low energy (0.1 to 0.25 MeV/u) for the C, N and O ions, using the particle backscatter method. A brief review of the various theories of charged-particle slowing down in matter, and especially the Lindhard dielectric theory, is first presented. Then the various models for the evaluation of the effective charge and of the higher-order corrections are discussed and compared. Experimental techniques and data processing methods are described, and the experimental results are compared to calculations derived from the dielectric theory. In particular, the effective charges and the higher-order (Barkas-Bloch) corrections are examined and compared to the models for the determination of the z³ and z⁴ terms for heavy ions.

  19. Evaluation of Shielding Wall Optimization in Lead Slowing Down Spectrometer System

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Ju Young; Kim, Jeong Dong; Lee, Yong Deok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    A Lead Slowing Down Spectrometer (LSDS) system is a nondestructive technology for analyzing the isotopic fissile content of spent fuel and pyroprocessed material, directly and in real time. High-intensity neutron and gamma radiation is generated from the nuclear material (pyroprocessed material, spent nuclear fuel), from the electron beam-target reaction, and from fission of fissile material. Therefore, a shielding analysis of the LSDS system should be carried out. In this study, borax, B₄C, Li₂CO₃ and resin were chosen for the shielding analysis. The radiation dose limit (<0.1 μSv/hr) was adopted conservatively at the outer wall surface. The covering was able to reduce the concrete wall thickness by 5 cm to 15 cm. The optimized shielding wall evaluation will be used as important data for the future design of a real LSDS facility and for shielding door assessment.

  20. Analysis of spent fuel assay with a lead slowing down spectrometer

    International Nuclear Information System (INIS)

    Gavron, A.; Smith, L. Eric; Ressler, Jennifer J.

    2009-01-01

    Assay of fissile materials in spent fuel that are produced or depleted during the operation of a reactor is of paramount importance to nuclear materials accounting, verification of the reactor operation history, and criticality considerations for storage. In order to prevent future proliferation following the spread of nuclear energy, we must develop accurate methods to assay large quantities of nuclear fuels. We analyze the potential of using a Lead Slowing Down Spectrometer for assaying spent fuel. We conclude that it is possible to design a system that will provide around 1% statistical precision in the determination of the ²³⁹Pu, ²⁴¹Pu and ²³⁵U concentrations in a PWR spent-fuel assembly, for intermediate-to-high burnup levels, using commercial neutron sources and a system of ²³⁸U threshold fission detectors. Pending further analysis of systematic errors, it is possible that missing pins can be detected, as can asymmetry in the fuel bundle. (author)
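
    The quoted ~1% statistical precision is, at bottom, Poisson counting statistics: the relative uncertainty on N detected fission events scales as 1/√N, so roughly 10⁴ attributable counts per isotope are needed. A one-line check (illustrative only; it ignores the background subtraction and systematic errors the authors defer):

```python
def relative_precision(counts):
    """Poisson relative uncertainty 1/sqrt(N) of a counted signal."""
    return counts ** -0.5

# about 1e4 counts for 1% precision, 1e6 for 0.1%
```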

  1. Slowing down tail enhanced, neoclassical and classical alpha particle fluxes in tokamak reactors

    International Nuclear Information System (INIS)

    Catto, P.J.; Tessarotto, M.

    1988-01-01

    The classical and neoclassical particle and energy fluxes associated with a slowing-down tail alpha particle distribution function are evaluated for arbitrary aspect ratio ε⁻¹, cross section, and poloidal magnetic field. The retention of both electron and ion drag and pitch angle scattering by the background ions results in a large diffusive neoclassical heat flux in the plasma core. This flux remains substantial at larger radii only if the characteristic speed associated with pitch angle scattering, v_b, is close enough to the alpha birth speed v₀ that ε(v₀/v_b)³ remains less than some order-unity critical value which is not determined by the methods herein. The enhanced neoclassical losses would only have a serious impact on ignition if the critical value of ε(v₀/v_b)³ is found to be somewhat larger than unity.

  2. Fission cross section measurement of Am-242m using lead slowing-down spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Kai, Tetsuya; Kobayashi, Katsuhei; Yamamoto, Shuji; Fujita, Yoshiaki [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.; Kimura, Itsuro; Ohkawachi, Yasushi; Wakabayashi, Toshio

    1998-03-01

    By making use of a double fission chamber and a lead slowing-down spectrometer coupled to an electron linear accelerator, the fission cross section for the ²⁴²ᵐAm(n,f) reaction has been measured relative to that for the ²³⁵U(n,f) reaction in the energy range from 0.1 eV to 10 keV. The measured result was compared with the evaluated nuclear data appearing in ENDF/B-VI and JENDL-3.2, which were broadened by the energy resolution function of the spectrometer. Although the JENDL-3.2 data seem to be a little smaller than the present measurement, good agreement can be seen in the general shape and the absolute values. The ENDF/B-VI data are more than 50% larger than the present values above 3 eV. (author)

  3. Study of the neutron slowing-down in graphite; Etude du ralentissement des neutrons dans le graphite

    Energy Technology Data Exchange (ETDEWEB)

    Martelly, J [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires; Duggal, V P [Commission a l' Energie atomique indienne (India)

    1955-07-01

    Study of the slowing-down age of neutrons in graphite. To eliminate the effect of resonance neutrons on the detector, neutrons captured by cadmium are studied with a classic method consisting of taking the difference between the activities measured with and without screens. This method is described and the screening properties of cadmium and gadolinium are compared. The experimental parameters and detector details are described. The radioactive source is Ra-α-Be. The experimental results are given and the experimental distribution is compared with the theoretical formula. In a second part, the spatial distribution of resonance neutrons in indium is discussed. Finally, the neutron slowing-down between the indium resonance and thermal equilibrium is discussed, as well as the search for the effective value of the slowing-down age. The definition of the slowing-down age is given before its effective value is calculated. The slowing-down law is compared with experiment and with the Gurney theory. (M.P.)

  4. On the group approximation errors in description of neutron slowing-down at large distances from a source. Diffusion approach

    International Nuclear Information System (INIS)

    Kulakovskij, M.Ya.; Savitskij, V.I.

    1981-01-01

    The errors in multigroup calculations of the spatial and energy distribution of the neutron flux in a fast-reactor shield, caused by the use of the group and age approximations, are considered. It is shown that at small distances from a source the age theory describes the distribution of the slowing-down density rather well. As the distance increases, the age approximation underestimates the neutron fluxes, and the error grows quickly. At small distances from the source (up to 15 mean free paths in graphite) the multigroup diffusion approximation describes the distribution of the slowing-down density quite satisfactorily, and the results are almost independent of the number of groups. As the distance increases, the multigroup diffusion calculations considerably overestimate the slowing-down density. It is concluded that the errors inherent in the group approximation are opposite in sign to those introduced by the age approximation and to some extent compensate each other
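
The age-theory slowing-down density that both approximations are compared against is the Gaussian age kernel around a point source. A minimal sketch, with the Fermi age value chosen for illustration only:

```python
import numpy as np

# Age-theory slowing-down density around a point source in an infinite
# moderator (the Gaussian age kernel):
#   q(r, tau) = S * exp(-r**2 / (4*tau)) / (4*pi*tau)**1.5,
# where tau is the Fermi age (cm^2). S and tau below are illustrative.

def slowing_down_density(r, tau, S=1.0):
    return S * np.exp(-r**2 / (4.0 * tau)) / (4.0 * np.pi * tau) ** 1.5

tau = 300.0                 # roughly the age to indium resonance in graphite
r = np.linspace(0.0, 300.0, 6001)
f = 4.0 * np.pi * r**2 * slowing_down_density(r, tau)
# Trapezoidal integration over all space recovers the source strength S:
total = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))
print(f"source recovered by integration: {total:.4f}")
```

The kernel conserves neutrons exactly, which is the property the multigroup calculation is being checked against at small distances.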

  5. Lead Slowing-Down Spectrometry for Spent Fuel Assay: FY11 Status Report

    International Nuclear Information System (INIS)

    Warren, Glen A.; Casella, Andrew M.; Haight, R.C.; Anderson, Kevin K.; Danon, Yaron; Hatchett, D.; Becker, Bjorn; Devlin, M.; Imel, G.R.; Beller, D.; Gavron, A.; Kulisek, Jonathan A.; Bowyer, Sonya M.; Gesh, Christopher J.; O'Donnell, J.M.

    2011-01-01

    Executive Summary Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R and D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory assay methods. This document is a progress report for FY2011 collaboration activities. Progress made by the collaboration in FY2011 continues to indicate the promise of LSDS techniques applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model demonstrated the potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used fuel parameter space. Similar results were obtained using a perturbation approach developed by LANL. Benchmark measurements have been successfully conducted at LANL and at RPI using their respective LSDS instruments. The ISU and UNLV collaborative effort is focused on the fabrication and testing of prototype fission chambers lined with ultra-depleted 238U and 232Th, and uranium deposition on a stainless-steel disc using spiked U3O8 from a room-temperature ionic liquid was successful, with improved thickness obtained. In FY2012, the collaboration plans a broad array of activities. PNNL will focus on optimizing its empirical model and minimizing its reliance on calibration data, as well as continuing efforts on developing an analytical model. 
Additional measurements are

  6. Spectral history correction of microscopic cross sections for the PBR using the slowing down balance

    International Nuclear Information System (INIS)

    Hudson, N.; Rahnema, F.; Ougouag, A. M.; Gougar, H. D.

    2006-01-01

    A method has been formulated to account for depletion effects on microscopic cross sections within a Pebble Bed Reactor (PBR) spectral zone without resorting to calls to the spectrum (cross section generation) code or relying upon table interpolation between data at different values of burnup. In this method, infinite medium microscopic cross sections, fine group fission spectra, and modulation factors are pre-computed at selected isotopic states. This fine group information is used with the local spectral zone nuclide densities to generate new cross sections for each spectral zone. The local spectrum used to generate these microscopic cross sections is estimated through the solution of the cell-homogenized, infinite-medium slowing-down balance equation during the flux calculation. This technique is known as Spectral History Correction (SHC), and it is formulated specifically to account for burnup within a spectral zone. It was found that the SHC technique accurately calculates local broad group microscopic cross sections with local burnup information. Good agreement is obtained with cross sections generated directly by the cross section generator. Encouraging results include improvement in the converged fuel cycle eigenvalue, the power peaking factor, and the flux. It was also found that the method compared favorably with the benchmark problem in terms of computational speed. (authors)
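
The infinite-medium slowing-down balance invoked above is lower triangular when only down-scattering is allowed, so a single sweep from the highest-energy group solves it. A minimal multigroup sketch; all cross-section data below are invented, not PBR values:

```python
import numpy as np

# Infinite-medium multigroup slowing-down balance (down-scatter only):
#   Sigma_t[g] * phi[g] = chi[g] + sum_{g' < g} Sigma_s[g' -> g] * phi[g'].
# Down-scattering makes the system lower triangular, so one sweep from
# the highest-energy group (g = 0) solves it directly.

def slowing_down_sweep(sigma_t, sigma_s, chi):
    G = len(sigma_t)
    phi = np.zeros(G)
    for g in range(G):
        inscatter = sum(sigma_s[gp][g] * phi[gp] for gp in range(g))
        phi[g] = (chi[g] + inscatter) / sigma_t[g]
    return phi

G = 4
sigma_t = np.array([0.30, 0.32, 0.35, 0.40])          # total cross sections
sigma_s = np.zeros((G, G))                            # g' -> g transfer matrix
sigma_s[0, 1], sigma_s[1, 2], sigma_s[2, 3] = 0.25, 0.26, 0.27
chi = np.array([1.0, 0.0, 0.0, 0.0])                  # fission source spectrum
phi = slowing_down_sweep(sigma_t, sigma_s, chi)
print(phi)
```

The sweep costs O(G^2) operations, which is why such a spectrum estimate is cheap enough to run inside every flux calculation.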

  7. Oligotrophication and Metabolic Slowing-Down of a NW Mediterranean Coastal Ecosystem

    KAUST Repository

    Agusti, Susana

    2017-12-22

    Increased oligotrophication is expected for oligotrophic areas as a consequence of ocean warming, which reduces the diffusive vertical nutrient supply through strengthened stratification. Evidence of ocean oligotrophication has thus far been reported for the open ocean. Here we report oligotrophication and the associated changes in plankton community metabolism with warming in a pristine, oligotrophic Mediterranean coastal area (Cap Salines, Mallorca Island, Spain) over a 10-year time series. As a temperate area, it showed seasonal patterns associated with changes across the broad temperature range (12.0–28.4°C), with a primary phytoplankton bloom in late winter and a secondary one in the fall. Community respiration (R) rates peaked during summers and were high relative to gross primary production (GPP), with a prevalence of heterotrophic metabolism (two-thirds of net community production (NCP) estimates). Chlorophyll a concentration decreased significantly with increasing water temperature at the coastal site, at a rate of 0.014 ± 0.003 μg Chl a L−1 °C−1 (P < 0.0001). The study revealed a significant decrease with time in both chlorophyll a and nutrient concentrations, indicating oligotrophication during the last decade. Community productivity consistently decreased with time, as both GPP and R showed a significant decline. Warming of the Mediterranean Sea is expected to increase plankton metabolic rates, but the results indicate that the associated oligotrophication must lead to a slowing down of community metabolism.
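
The reported rate of chlorophyll loss with warming is an ordinary least-squares slope. A sketch with synthetic data generated from the paper's slope plus noise (no real observations are used here), showing how such a rate is recovered:

```python
import numpy as np

# Synthetic Chl a vs. temperature data built with the reported slope
# (-0.014 ug Chl a per L per deg C) plus Gaussian noise; the slope is
# then recovered by linear least squares.

rng = np.random.default_rng(1)
temp = rng.uniform(12.0, 28.4, 500)                      # deg C, seasonal range
chla = 0.6 - 0.014 * temp + rng.normal(0.0, 0.02, 500)   # ug Chl a / L

slope, intercept = np.polyfit(temp, chla, 1)
print(f"slope = {slope:.4f} ug Chl a L-1 per deg C")
```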

  8. Neutron slowing down and transport in a medium of constant cross section. I. Spatial moments

    International Nuclear Information System (INIS)

    Cacuci, D.G.; Goldstein, H.

    1977-01-01

    Some aspects of the problem of neutron slowing down and transport have been investigated for an infinite medium consisting of a single nuclide that scatters elastically and isotropically, without absorption and with energy-independent cross sections. The method of singular eigenfunctions has been applied to the Boltzmann equation governing the Laplace transform (with respect to the lethargy variable) of the neutron flux. Formulas have been obtained for the lethargy-dependent spatial moments of the scalar flux, applicable in the limit of large lethargy. In deriving these formulas, use has been made of the well-known connection between the spatial moments of the Laplace-transformed scalar flux and the moments of the flux in the ''eigenvalue space''. The calculations were greatly aided by the construction of a closed general expression for these ''eigenvalue space'' moments. Extensive use has also been made of the methods of combinatorial analysis and of computer evaluation, via FORMAC, of complicated sequences of manipulations. It has been possible to obtain, for materials of any atomic weight, explicit corrections to the age-theory formulas for the spatial moments M{sub 2n}(u) of the scalar flux, valid through terms of order u{sup -5}. Higher-order correction terms could be obtained at the expense of additional computer time. The evaluation of the coefficients of the powers of u, as explicit functions of the nuclear mass, represents the end product of this investigation
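
In the age limit the scalar flux is Gaussian and the spatial moments have a closed form, M{sub 2n} = (2n+1)!/(2{sup n} n!) (2τ){sup n}; the corrections derived in this record modify these leading terms. A sketch checking the closed form against direct numerical integration (τ is illustrative):

```python
import math
import numpy as np

# Leading-order (age-theory) spatial moments of the scalar flux around a
# point source: the flux is proportional to exp(-r^2/(4*tau)), so
#   M_2n(tau) = <r^(2n)> = (2n+1)! / (2^n * n!) * (2*tau)^n.

def moment_closed_form(n, tau):
    return math.factorial(2 * n + 1) / (2**n * math.factorial(n)) * (2 * tau) ** n

def moment_numeric(n, tau, rmax_sigmas=12.0, npts=200001):
    r = np.linspace(0.0, rmax_sigmas * math.sqrt(2 * tau), npts)
    g = np.exp(-r**2 / (4 * tau)) / (4 * np.pi * tau) ** 1.5
    f = r ** (2 * n) * 4 * np.pi * r**2 * g          # weight by shell volume
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))

tau = 100.0
for n in (1, 2, 3):
    print(n, moment_closed_form(n, tau), moment_numeric(n, tau))
```

For n = 1 this reproduces the familiar mean-square slowing-down distance <r{sup 2}> = 6τ.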

  9. Third generation of lead slowing-down spectrometers: First results and prospects

    International Nuclear Information System (INIS)

    Alexeev, A.A.; Belousov, Yu.V.; Bergman, A.A.; Volkov, A.N.; Goncharenko, O.N.; Grachev, M.N.; Kazarnovsky, M.V.; Matushkov, V.L.; Mostovoy, V.I.; Novikov, A.V.; Novoselov, S.A.; Ryabov, Yu.V.; Stavissky, Yu.Ya.; Gledenov, Yu.M.; Parzhitski, S.S.; Popov, Yu.P.

    1999-01-01

    At the same neutron-source intensity S, lead slowing-down spectrometers (LSDS) possess a luminosity 10{sup 3}-10{sup 4} times that of time-of-flight spectrometers with the same energy resolution ΔE/Ē (for the former, ΔE/Ē ≅ 35-45% at mean energies Ē below 50 keV). In combination with high-current proton accelerators, third-generation LSDSs can operate at S values exceeding those acceptable for second-generation LSDSs coupled to electron accelerators by a factor of 10{sup 2} to 10{sup 3}. At the Institute for Nuclear Research (Russian Academy of Sciences, Moscow), the first third-generation LSDS facility, called PITON (about 15 tons of lead), has been operating successfully, and a large LSDS (more than 100 tons of lead) is now under construction. The results of the first experiments at the PITON facility are presented, and the experimental programs for both facilities are outlined
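
The spectrometry itself rests on the slowing-down time-energy relation in lead, E ≈ K/(t + t0)². The constants below (K ≈ 165 keV·μs², t0 ≈ 0.3 μs) are typical literature values for lead, used here as assumptions, not parameters of the PITON facility:

```python
# Mean neutron energy vs. slowing-down time in lead (sketch):
#   E(t) ~= K / (t + t0)^2,  t in microseconds.
# K ~ 165 keV*us^2 and t0 ~ 0.3 us are assumed typical values for lead.

K_KEV_US2 = 165.0
T0_US = 0.3

def mean_energy_kev(t_us):
    return K_KEV_US2 / (t_us + T0_US) ** 2

for t in (1.0, 10.0, 100.0):
    print(f"t = {t:6.1f} us  ->  E ~ {mean_energy_kev(t):10.4f} keV")
```

Because energy falls as 1/t², late times map to low energies, and the broad ΔE/Ē quoted above is the price paid for the huge gain in luminosity.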

  10. Assaying Used Nuclear Fuel Assemblies Using Lead Slowing-Down Spectroscopy and Singular Value Decomposition

    International Nuclear Information System (INIS)

    Kulisek, Jonathan A.; Anderson, Kevin K.; Casella, Andrew M.; Gesh, Christopher J.; Warren, Glen A.

    2013-01-01

    This study investigates the use of a Lead Slowing-Down Spectrometer (LSDS) for the direct and independent measurement of fissile isotopes in light-water nuclear reactor fuel assemblies. The current study applies MCNPX, a Monte Carlo radiation transport code, to simulate the assay of used nuclear fuel assemblies in the LSDS. An empirical model has been developed based on the calibration of the LSDS to responses generated from the simulated assay of six well-characterized fuel assemblies. The effects of self-shielding are taken into account by using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the self-shielding functions from the assay of assemblies in the calibration set. The performance of the empirical algorithm was tested on version 1 of the Next-Generation Safeguards Initiative (NGSI) used fuel library consisting of 64 assemblies, as well as a set of 27 diversion assemblies, both of which were developed by Los Alamos National Laboratory. The potential for direct and independent assay of the sum of the masses of Pu-239 and Pu-241 to within 2%, on average, has been demonstrated
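
The calibration idea can be sketched as follows: stack the calibration responses into a matrix, take a truncated SVD to obtain basis vectors, project responses onto that basis, and fit a linear map from the projections to the known fissile masses. All data below are synthetic stand-ins, not NGSI responses:

```python
import numpy as np

# SVD-based empirical calibration (sketch). Responses from calibration
# assemblies form the columns of R_cal; a truncated SVD gives basis
# vectors; projections onto the basis are fit to the known masses.

rng = np.random.default_rng(0)
n_time, n_cal = 64, 6
basis_true = rng.normal(size=(n_time, 2))          # hidden response shapes
mass_cal = rng.uniform(1.0, 5.0, n_cal)            # known fissile masses (kg)
coeff = np.column_stack([mass_cal, 0.3 * mass_cal**0.8])
R_cal = basis_true @ coeff.T                       # calibration responses

U, s, Vt = np.linalg.svd(R_cal, full_matrices=False)
k = 2                                              # truncation rank
features = (U[:, :k].T @ R_cal).T                  # projections, one row each
w, *_ = np.linalg.lstsq(features, mass_cal, rcond=None)

# Assay an unseen "assembly" generated the same way:
m_test = 3.3
r_test = basis_true @ np.array([m_test, 0.3 * m_test**0.8])
m_pred = (U[:, :k].T @ r_test) @ w
print(f"predicted mass = {m_pred:.3f} kg")
```

The truncation rank k plays the role of the number of empirical basis vectors retained to absorb self-shielding variation across the calibration set.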

  11. Timing of the Crab pulsar III. The slowing down and the nature of the random process

    International Nuclear Information System (INIS)

    Groth, E.J.

    1975-01-01

    The Crab pulsar arrival times are analyzed. The data are found to be consistent with a smooth slowing down with a braking index of 2.515 ± 0.005. Superposed on the smooth slowdown is a random process which has the same second moments as a random walk in the frequency. The strength of the random process is R⟨ε{sup 2}⟩ = 0.53 (+0.24, -0.12) × 10{sup -22} Hz{sup 2} s{sup -1}, where R is the mean rate of steps and ⟨ε{sup 2}⟩ is the second moment of the step amplitude distribution. Neither the braking index nor the strength of the random process shows evidence of statistically significant time variations, although small fluctuations in the braking index and rather large fluctuations in the noise strength cannot be ruled out. There is a possibility that the random process contains a small component with the same second moments as a random walk in the phase. If so, a time scale of 3.5 days is indicated
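
The braking index quoted above is defined as n = ν ν̈ / ν̇², which for a pure power-law spin-down ν̇ = -K ν{sup n} returns n exactly. A sketch with illustrative numbers (not Crab parameters):

```python
# Braking index n = nu * nuddot / nudot**2. For power-law spin-down,
#   nudot = -K * nu**n  =>  nuddot = -K * n * nu**(n-1) * nudot,
# so nu * nuddot / nudot**2 = n exactly. K and nu are illustrative.

def braking_index(nu, nudot, nuddot):
    return nu * nuddot / nudot**2

n_true, K, nu = 2.515, 1.0e-15, 30.0
nudot = -K * nu**n_true
nuddot = -K * n_true * nu ** (n_true - 1.0) * nudot
print(f"braking index = {braking_index(nu, nudot, nuddot):.3f}")
```

A magnetic-dipole brake predicts n = 3, so the measured 2.515 quantifies the departure from pure dipole spin-down.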

  12. Oligotrophication and Metabolic Slowing-Down of a NW Mediterranean Coastal Ecosystem

    Directory of Open Access Journals (Sweden)

    Susana Agusti

    2017-12-01

    Full Text Available Increased oligotrophication is expected for oligotrophic areas as a consequence of ocean warming, which reduces the diffusive vertical nutrient supply through strengthened stratification. Evidence of ocean oligotrophication has thus far been reported for the open ocean. Here we report oligotrophication and the associated changes in plankton community metabolism with warming in a pristine, oligotrophic Mediterranean coastal area (Cap Salines, Mallorca Island, Spain) over a 10-year time series. As a temperate area, it showed seasonal patterns associated with changes across the broad temperature range (12.0–28.4°C), with a primary phytoplankton bloom in late winter and a secondary one in the fall. Community respiration (R) rates peaked during summers and were high relative to gross primary production (GPP), with a prevalence of heterotrophic metabolism (two-thirds of net community production (NCP) estimates). Chlorophyll a concentration decreased significantly with increasing water temperature at the coastal site, at a rate of 0.014 ± 0.003 μg Chl a L−1 °C−1 (P < 0.0001). The study revealed a significant decrease with time in both chlorophyll a and nutrient concentrations, indicating oligotrophication during the last decade. Community productivity consistently decreased with time, as both GPP and R showed a significant decline. Warming of the Mediterranean Sea is expected to increase plankton metabolic rates, but the results indicate that the associated oligotrophication must lead to a slowing down of community metabolism.

  13. ADVANCEMENTS IN TIME-SPECTRA ANALYSIS METHODS FOR LEAD SLOWING-DOWN SPECTROSCOPY

    International Nuclear Information System (INIS)

    Smith, Leon E.; Anderson, Kevin K.; Gesh, Christopher J.; Shaver, Mark W.

    2010-01-01

    Direct measurement of Pu in spent nuclear fuel remains a key challenge for safeguarding nuclear fuel cycles of today and tomorrow. Lead slowing-down spectroscopy (LSDS) is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic mass with an uncertainty lower than the approximately 10 percent typical of today's confirmatory assay methods. Pacific Northwest National Laboratory's (PNNL) previous work to assess the viability of LSDS for the assay of pressurized water reactor (PWR) assemblies indicated that the method could provide direct assay of Pu-239 and U-235 (and possibly Pu-240 and Pu-241) with uncertainties less than a few percent, assuming suitably efficient instrumentation, an intense pulsed neutron source, and improvements in the time-spectra analysis methods used to extract isotopic information from a complex LSDS signal. This previous simulation-based evaluation used relatively simple PWR fuel assembly definitions (e.g. constant burnup across the assembly) and a constant initial enrichment and cooling time. The time-spectra analysis method was founded on a preliminary analytical model of self-shielding intended to correct for assay-signal nonlinearities introduced by attenuation of the interrogating neutron flux within the assembly.

  14. The considering of the slowing down effect in the formalism of probability tables. Application to the effective cross section calculation

    International Nuclear Information System (INIS)

    Bouhelal, O.K.A.

    1990-01-01

    The exact determination of effective multigroup cross sections requires the numerical solution of the slowing-down equation on a very fine energy mesh. Given the complexity of these calculations, various approximation methods have been developed, but without a satisfactory treatment of the slowing-down effect. The usual methods are essentially based on interpolations in precalculated tables. Models that use probability tables reduce both the amount of data and the computational effort. The methods proposed first by Soviet and then by American authors, and finally the French method based on the ''moments of a probability distribution'', are incontestably valid within the framework of the statistical hypothesis. This hypothesis stipulates that the collision densities do not depend on the cross section, so that there is no ambiguity in the effective cross section calculation. The objective of our work is to show that non-statistical phenomena, such as the slowing-down effect considered here, can also be described by probability tables able to represent both the neutronic quantities and the collision densities. The formalism used under the statistical hypothesis is based on the Gauss quadrature of the cross-section moments. Under the non-statistical hypothesis we introduce crossed probability tables, using quadratures of double integrals of the cross sections. Moreover, a mathematical formalism was developed to establish a relationship between the crossed probability tables and the collision densities. The method was applied to uranium-238 in the range of resolved resonances, where the slowing-down effect is significant. The validity of the method and the analysis of the results obtained are studied through a reference calculation based on the solution of a discretized slowing-down equation on a very fine mesh, in which each microgroup can be correctly defined via the statistical probability tables. 42 figs., 32 tabs., 49 refs. (author)
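
Under the statistical hypothesis, a probability table is exactly a Gauss quadrature of the cross-section distribution: an N-point table (p_i, σ_i) preserves the first 2N moments. A sketch of the two-point case via the orthogonal-polynomial construction; the fine-grid "cross section" values below are invented:

```python
import numpy as np

# Two-point probability table as a Gauss quadrature: the table values
# are the roots of the monic quadratic orthogonal to the cross-section
# distribution, and the weights match the first two moments. Such a
# table then automatically preserves moments m0..m3.

def two_point_table(sigma, weights):
    w = weights / weights.sum()
    m = [np.sum(w * sigma**k) for k in range(4)]          # moments m0..m3
    # Orthogonality of x^2 + b*x + c to 1 and x gives a 2x2 linear system:
    c, b = np.linalg.solve([[m[0], m[1]], [m[1], m[2]]], [-m[2], -m[3]])
    s = np.sort(np.roots([1.0, b, c]))                    # table values
    p1 = (m[1] - s[1]) / (s[0] - s[1])                    # match m0 and m1
    return np.array([p1, 1.0 - p1]), s

sigma = np.array([10.0, 80.0, 300.0, 55.0, 12.0, 9.0])    # barns, invented
weights = np.ones_like(sigma)
p, s = two_point_table(sigma, weights)
print("probabilities:", p, "values:", s)
```

The two-point quadrature is exact for polynomials up to degree 3, which is why matching only m0 and m1 with the orthogonal-polynomial nodes also reproduces m2 and m3.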

  15. Nuclear lifetimes and the slowing down of heavy ions in solids

    International Nuclear Information System (INIS)

    Scherpenzeel, D.E.C.

    1981-01-01

    Nuclear lifetime measurements by means of the Doppler Shift Attenuation (DSA) method at low recoil velocities (β ≲ 0.01) are notoriously difficult, owing to the observed strong dependence of the extracted lifetimes on the slowing-down material at low initial velocities. This is mainly caused by the lack of reliable stopping-power data at these velocities and the absence of an adequate theory to compensate for that. The problem of determining the correct mean life of the lowest J{sup π} = 4{sup +} state of {sup 22}Ne is solved by measurements with the coincident high-velocity DSA method. Excited nuclei of high initial velocity [β(0) ≈ 0.05] are generated by bombarding light targets, such as {sup 1}H, {sup 2}H, {sup 3}H and {sup 4}He, with beams of heavy ions. The combination of high initial velocity and the coincidence restriction offers many advantages over the conventional techniques. The coincident high-velocity DSA method is also used to determine mean lives of low-lying excited states of the silicon isotopes {sup 28,29,30}Si. The observed Doppler patterns are analyzed with experimental stopping powers, and the resulting mean lives range from about 25 fs to 4 ps. The mean lives of the first excited state of {sup 18}O and of some low-lying levels of {sup 35}S are determined from Doppler patterns analyzed with experimental stopping powers. The present stopping results for O, Si and S ions in Mg are also analyzed in terms of the effective-charge concept. It is concluded that, at the present level of accuracy of about 5%, the results obtained are consistent with this concept. (Auth.)
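
The DSA observable is the attenuation factor F(τ) = (1/τ) ∫ v(t)/v(0) e^(-t/τ) dt. Under the simplifying assumption of purely exponential (electronic) slowing-down, v(t) = v0 e^(-t/a), this integral has the closed form F = a/(a + τ). A sketch verifying that form numerically; the slowing-down time constant is illustrative:

```python
import numpy as np

# Doppler-shift attenuation factor for a level of mean life tau:
#   F(tau) = (1/tau) * integral_0^inf  v(t)/v(0) * exp(-t/tau) dt.
# Assuming exponential slowing-down, v(t) = v0 * exp(-t/a), the closed
# form is F = a / (a + tau). Checked here by trapezoidal integration.

def F_numeric(tau, a, npts=400001):
    tmax = 50.0 * max(tau, a)
    t = np.linspace(0.0, tmax, npts)
    f = np.exp(-t / a) * np.exp(-t / tau) / tau
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))

a = 500.0      # slowing-down time constant (fs), illustrative
for tau in (25.0, 400.0, 4000.0):
    print(f"tau = {tau:7.1f} fs  F = {F_numeric(tau, a):.4f}"
          f"  (exact {a / (a + tau):.4f})")
```

F near 1 means the level decays before appreciable slowing (full shift), while F near 0 means decay at rest; the sensitivity of F to the stopping model is exactly the difficulty described above.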

  16. Design optimization of radiation shielding structure for lead slowing-down spectrometer system

    International Nuclear Information System (INIS)

    Kim, Jeong Dong; Ahn, Sang Joon; Lee, Yong Deok; Park, Chang Je

    2015-01-01

    A lead slowing-down spectrometer (LSDS) system is a promising nondestructive assay technique that enables a quantitative measurement of the isotopic contents of major fissile isotopes in spent nuclear fuel and its pyroprocessing counterparts, such as 235U, 239Pu, 241Pu, and, potentially, minor actinides. The LSDS system currently under development at the Korea Atomic Energy Research Institute (Daejeon, Korea) is planned to utilize a high-flux (>10{sup 12} n/cm{sup 2}·s) neutron source comprised of a high-energy (30 MeV)/high-current (∼2 A) electron beam and a heavy metal target, which results in a very intense and complex radiation field for the facility, thus demanding structural shielding to guarantee safety. Optimization of the structural shielding design was conducted using MCNPX for neutron dose rate evaluation of several representative hypothetical designs. In order to satisfy the construction cost and neutron attenuation capability of the facility, while simultaneously achieving the aimed dose rate limit (<0.06 μSv/h), a few shielding materials [high-density polyethylene (HDPE)–Borax, B{sub 4}C, and Li{sub 2}CO{sub 3}] were considered for the main neutron absorber layer, which is encapsulated within the double-sided concrete wall. The MCNP simulation indicated that HDPE-Borax is the most efficient among the aforementioned candidate materials, and that the combined thickness of the shielding layers should exceed 100 cm to satisfy the dose limit on the outside surface of the shielding wall when the thickness of the HDPE-Borax intermediate layer is limited to below 5 cm. However, the shielding wall must include the instrumentation and installation holes for the LSDS system. The radiation leakage through the holes was substantially mitigated by adopting a zigzag shape with concrete covers on both sides. The suggested optimized design of the shielding structure satisfies the dose rate limit and can be used for the construction of a facility in the near future.
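
A first-order sanity check of a layered shield can be done with exponential attenuation, D = D0 · exp(-Σ μ_i x_i). The removal coefficients and source dose rate below are invented placeholders, not data for the KAERI facility:

```python
import math

# First-order layered-shield estimate (sketch):
#   D = D0 * exp(-sum_i mu_i * x_i),
# with effective removal coefficients mu (1/cm). All numbers invented.

LAYERS = [("concrete",   0.090, 40.0),   # (name, mu 1/cm, thickness cm)
          ("HDPE-Borax", 0.180,  5.0),
          ("concrete",   0.090, 60.0)]   # total thickness 105 cm

def transmitted_dose(d0_usv_h, layers):
    attenuation = math.exp(-sum(mu * x for _, mu, x in layers))
    return d0_usv_h * attenuation

d0 = 1.0e3   # hypothetical dose rate at the inner wall surface, uSv/h
d = transmitted_dose(d0, LAYERS)
print(f"outer-surface dose ~ {d:.3e} uSv/h (limit 0.06)")
```

Real shielding design requires transport calculations such as the MCNPX runs described above; a removal-coefficient product like this only indicates whether a configuration is in the right range.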

  17. Confinement of ripple-trapped slowing-down ions by a radial electric field

    International Nuclear Information System (INIS)

    Herrmann, W.

    1998-03-01

    Weakly collisional ions trapped in the toroidal field ripples at the outer plasma edge can be prevented from escaping the plasma by ∇B drift through a counteracting radial electric field. This leads to an increase in the density of ripple-trapped ions, which can be monitored by the analysis of charge-exchange neutrals. The minimum radial electric field E{sub r} necessary to confine ions with energy E and charge q (q = -1: charge of the electron) is E{sub r} = -E/(q·R), where R is the major radius at the measuring point. Slowing-down ions from neutral injection are usually in the right energy range to be sufficiently collisionless at the plasma edge, and they show confinement by radial electric fields in the range of tens of kV/m. The density of banana ions is almost unaffected by the radial electric field. Neither in L/H- nor in H/L-transitions does the density of ripple-trapped ions, and hence the neutral-particle fluxes, show jumps on time scales shorter than 1 ms. According to [1,2], the response time of the density and the fluxes to a sudden jump in the radial electric field is less than 200 μs if the half-width of the electric field is about 2 cm or larger. This would exclude rapid jumps in the radial electric field at the transition. Whether the half-width of the electric field is that large during the transition cannot be decided from the measurement of the fluxes alone. (orig.)
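
Evaluating the confinement criterion E{sub r} = -E/(q·R) for a slowing-down ion shows why fields of tens of kV/m are quoted. The energy and major radius below are illustrative, not machine parameters:

```python
# Minimum radial electric field confining a ripple-trapped ion, from the
# criterion quoted above: E_r = -E / (q * R), with the ion energy E in eV
# (so E/e is in volts), charge q in units of the electron charge as in
# the text (a singly charged ion has q = +1), and R in metres.

def min_confining_field(energy_ev, q, major_radius_m):
    """Radial field E_r in V/m; a negative value points inward."""
    return -energy_ev / (q * major_radius_m)

# A 30-keV slowing-down ion, singly charged, at R = 2.1 m (illustrative):
er = min_confining_field(30.0e3, 1, 2.1)
print(f"E_r = {er / 1e3:.1f} kV/m")
```

The magnitude scales linearly with ion energy, so the slowing-down population probes a whole range of confining fields as it decelerates.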

  18. Finite-difference solution of the space-angle-lethargy-dependent slowing-down transport equation

    Energy Technology Data Exchange (ETDEWEB)

    Matausek, M V [Boris Kidric Vinca Institute of Nuclear Sciences, Vinca, Belgrade (Yugoslavia)]

    1972-07-01

    A procedure has been developed for solving the slowing-down transport equation for a cylindrically symmetric reactor system. The anisotropy of the resonance neutron flux is treated by the spherical-harmonics formalism, which reduces the space-angle-lethargy-dependent transport equation to a matrix integro-differential equation in space and lethargy. Replacing further the lethargy transfer integral by a finite-difference form, a set of matrix ordinary differential equations is obtained, with lethargy- and space-dependent coefficients. If the lethargy pivotal points are chosen densely enough that the difference correction term can be ignored, this set assumes a lower block triangular form and can be solved directly by forward block substitution. Since at each step of the finite-difference procedure a boundary-value problem has to be solved for a non-homogeneous system of ordinary differential equations with space-dependent coefficients, application of any standard numerical procedure, for example the finite-difference method or the method of adjoint equations, is too cumbersome and would make the whole procedure practically inapplicable. A simple and efficient approximation is proposed here, allowing an analytical solution for the space dependence of the spherical-harmonics flux moments, and hence the derivation of recurrence relations between the flux moments at successive lethargy pivotal points. According to the procedure indicated above, a computer code has been developed for the CDC-3600 computer, which uses the KEDAK nuclear data file. The space and lethargy distribution of the resonance neutrons can be computed in as much detail as the neutron cross sections are known for the reactor materials considered. The computing time is relatively short, so that the code can be used efficiently, either autonomously or as part of some complex modular scheme. 
Typical results will be presented and discussed in order to prove and illustrate the applicability of the
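
The forward block substitution described in this record can be sketched generically: for a lower block triangular system, each block unknown follows from the previously computed ones. The random blocks below merely stand in for the spherical-harmonics coefficient matrices:

```python
import numpy as np

# Forward block substitution for a lower block triangular system:
#   A[k][k] x[k] = b[k] - sum_{j < k} A[k][j] x[j].
# Blocks are small random matrices (diagonal blocks made invertible);
# they are stand-ins, not data from the code described above.

def block_forward_substitution(A, b):
    n = len(b)
    x = [None] * n
    for k in range(n):
        rhs = b[k] - sum(A[k][j] @ x[j] for j in range(k))
        x[k] = np.linalg.solve(A[k][k], rhs)
    return x

rng = np.random.default_rng(2)
n, m = 4, 3                                   # 4 lethargy steps, 3x3 blocks
A = [[rng.normal(size=(m, m)) if j < k else
      rng.normal(size=(m, m)) + 4 * np.eye(m) if j == k else
      np.zeros((m, m)) for j in range(n)] for k in range(n)]
b = [rng.normal(size=m) for _ in range(n)]
x = block_forward_substitution(A, b)
print(np.concatenate(x))
```

Each step solves only one block system, which is why the triangular structure makes the sweep direct rather than iterative.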

  19. Isotopic fissile assay of spent fuel in a lead slowing-down spectrometer system

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Deok; Jeon, Ju Young [Dept. of Fuel Cycle Technology, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]; Park, Chang Je [Dept. of Nuclear Engineering, Sejong University, Seoul (Korea, Republic of)]

    2017-04-15

    A lead slowing-down spectrometer (LSDS) system is under development to analyze the isotopic fissile content of spent fuel and recycled material. A mechanism for efficient and effective source neutron generation was also determined. The source neutrons interact with the lead medium to produce a continuous neutron energy spectrum, which induces the dominant fission response of each fissile isotope below the unresolved resonance region. From the relationship between the induced fissile fission and the detection of fast fission neutrons, a mathematical assay model for an isotopic fissile material was set up; the model can be extended to all fissile materials. A correction factor for self-shielding was defined in the fuel assay area, and the corrected fission signature shows well-defined fission properties as the fissile content increases. The assay procedure was also established. The choice of assay energy range is very important, as it must cover the prominent fission structure of each fissile isotope. Fission detection was examined as the Pu239 weight percent (wt%) was varied, with the U235 and Pu241 content fixed at 1 wt%. The assay result was obtained with 2∼3% uncertainty for Pu239, depending on the amount of Pu239 in the fuel. The results show that LSDS is a very powerful technique for assaying the isotopic fissile content of spent fuel and recycled materials for the reuse of fissile materials. Additionally, an LSDS is applicable to the optimum design of spent fuel storage facilities and their management. The isotopic fissile content assay will increase the transparency and credibility of spent fuel storage.

  20. Design optimization of radiation shielding structure for lead slowing-down spectrometer system

    Directory of Open Access Journals (Sweden)

    Jeong Dong Kim

    2015-04-01

    Full Text Available A lead slowing-down spectrometer (LSDS) system is a promising nondestructive assay technique that enables a quantitative measurement of the isotopic contents of major fissile isotopes in spent nuclear fuel and its pyroprocessing counterparts, such as 235U, 239Pu, 241Pu, and, potentially, minor actinides. The LSDS system currently under development at the Korea Atomic Energy Research Institute (Daejeon, Korea) is planned to utilize a high-flux (>10{sup 12} n/cm{sup 2}·s) neutron source comprised of a high-energy (30 MeV)/high-current (∼2 A) electron beam and a heavy metal target, which results in a very intense and complex radiation field for the facility, thus demanding structural shielding to guarantee safety. Optimization of the structural shielding design was conducted using MCNPX for neutron dose rate evaluation of several representative hypothetical designs. In order to satisfy the construction cost and neutron attenuation capability of the facility, while simultaneously achieving the aimed dose rate limit (<0.06 μSv/h), a few shielding materials [high-density polyethylene (HDPE)–Borax, B{sub 4}C, and Li{sub 2}CO{sub 3}] were considered for the main neutron absorber layer, which is encapsulated within the double-sided concrete wall. The MCNP simulation indicated that HDPE-Borax is the most efficient among the aforementioned candidate materials, and that the combined thickness of the shielding layers should exceed 100 cm to satisfy the dose limit on the outside surface of the shielding wall when the thickness of the HDPE-Borax intermediate layer is limited to below 5 cm. However, the shielding wall must include the instrumentation and installation holes for the LSDS system. The radiation leakage through the holes was substantially mitigated by adopting a zigzag shape with concrete covers on both sides. The suggested optimized design of the shielding structure satisfies the dose rate limit and can be used for the construction of a facility in the near

  1. Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY12 Status Report

    Energy Technology Data Exchange (ETDEWEB)

    Kulisek, Jonathan A.; Anderson, Kevin K.; Casella, Andrew M.; Siciliano, Edward R.; Warren, Glen A.

    2012-09-28

    Executive Summary Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today’s confirmatory methods. This document is a progress report for FY2012 PNNL analysis and algorithm development. Progress made by PNNL in FY2012 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel assemblies. PNNL further refined the semi-empirical model developed in FY2011 based on singular value decomposition (SVD) to numerically account for the effects of self-shielding. The average uncertainty in the Pu mass across the NGSI-64 fuel assemblies was shown to be less than 3% using only six calibration assemblies with a 2% uncertainty in the isotopic masses. When calibrated against the six NGSI-64 fuel assemblies, the algorithm was able to determine the total Pu mass within <2% uncertainty for the 27 diversion cases also developed under NGSI. Two purely empirical algorithms were developed that do not require the use of Pu isotopic fission chambers. The semi-empirical and purely empirical algorithms were successfully tested using MCNPX simulations as well as applied to experimental data measured by RPI using their LSDS. The algorithms were able to determine the 235U masses of the RPI measurements with an average uncertainty of 2.3%. 
Analyses were conducted that provided valuable insight with regard to design requirements (e
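
The SVD-based calibration described in this record can be illustrated with a toy linear model. This is a minimal sketch on invented data, not PNNL's actual algorithm (which works on measured time spectra and corrects for self-shielding):

```python
import numpy as np

# Toy illustration of an SVD-based assay calibration in the spirit of the
# semi-empirical approach above. All numbers here are invented.
rng = np.random.default_rng(0)

n_cal, n_bins = 6, 40                                 # calibration assemblies, time bins
weights = rng.uniform(0.5, 1.5, n_bins)               # hidden linear response model
pu_mass = rng.uniform(2.0, 4.0, n_cal)                # known Pu masses of the calibration set
spectra = np.outer(pu_mass, weights)                  # simulated detector responses
spectra += 0.01 * rng.standard_normal(spectra.shape)  # measurement noise

# Truncated-SVD pseudo-inverse: keep only well-determined singular modes so
# noise modes are not amplified, then map a spectrum back to a mass estimate.
U, s, Vt = np.linalg.svd(spectra, full_matrices=False)
k = int(np.sum(s > 1e-2 * s[0]))
coeffs = Vt[:k].T @ ((U[:, :k].T @ pu_mass) / s[:k])

# Assay an "unknown" assembly with the calibrated coefficients
unknown_mass = 3.1
unknown_spectrum = unknown_mass * weights + 0.01 * rng.standard_normal(n_bins)
estimate = float(unknown_spectrum @ coeffs)
```

With the rank truncation the estimate lands within a few percent of the true mass; without it, the noise-dominated singular modes can spoil the calibration.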

  2. Design optimization of radiation shielding structure for lead slowing-down spectrometer system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jeong Dong; Ahn, Sang Joon; Lee, Yong Deok [Nonproliferation System Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Park, Chang Je [Dept. of Nuclear Engineering, Sejong University, Seoul (Korea, Republic of)

    2015-04-15

    A lead slowing-down spectrometer (LSDS) system is a promising nondestructive assay technique that enables a quantitative measurement of the isotopic contents of major fissile isotopes in spent nuclear fuel and its pyroprocessing counterparts, such as 235U, 239Pu, 241Pu, and, potentially, minor actinides. The LSDS system currently under development at the Korea Atomic Energy Research Institute (Daejeon, Korea) is planned to utilize a high-flux (>10{sup 12} n/cm{sup 2}·s) neutron source comprised of a high-energy (30 MeV)/high-current (∼2 A) electron beam and a heavy metal target, which results in a very intense and complex radiation field for the facility, thus demanding structural shielding to guarantee the safety. Optimization of the structural shielding design was conducted using MCNPX for neutron dose rate evaluation of several representative hypothetical designs. In order to satisfy the construction cost and neutron attenuation capability of the facility, while simultaneously achieving the aimed dose rate limit (<0.06 μSv/h), a few shielding materials [high-density polyethylene (HDPE)–Borax, B{sub 4}C, and Li{sub 2}CO{sub 3}] were considered for the main neutron absorber layer, which is encapsulated within the double-sided concrete wall. The MCNP simulation indicated that HDPE-Borax is the most efficient among the aforementioned candidate materials, and the combined thickness of the shielding layers should exceed 100 cm to satisfy the dose limit on the outside surface of the shielding wall of the facility when limiting the thickness of the HDPE-Borax intermediate layer to below 5 cm. However, the shielding wall must include the instrumentation and installation holes for the LSDS system. The radiation leakage through the holes was substantially mitigated by adopting a zigzag-shape with concrete covers on both sides. The suggested optimized design of the shielding structure satisfies the dose rate limit and can be used for the construction of a facility in

  3. First test experiment to produce the slowed-down RI beam with the momentum-compression mode at RIBF

    Energy Technology Data Exchange (ETDEWEB)

    Sumikama, T., E-mail: sumikama@ribf.riken.jp [RIKEN Nishina Center, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Department of Physics, Tohoku University, Aoba, Sendai 980-8578 (Japan); Ahn, D.S.; Fukuda, N.; Inabe, N.; Kubo, T.; Shimizu, Y.; Suzuki, H.; Takeda, H. [RIKEN Nishina Center, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Aoi, N. [Research Center for Nuclear Physics, Osaka University, Ibaraki, Osaka 567-0047 (Japan); Beaumel, D. [Institut de Physique Nucléaire d’Orsay (IPNO), CNRS/IN2P3, 91405 Orsay (France); Hasegawa, K. [Department of Physics, Tohoku University, Aoba, Sendai 980-8578 (Japan); Ideguchi, E. [Research Center for Nuclear Physics, Osaka University, Ibaraki, Osaka 567-0047 (Japan); Imai, N. [Center for Nuclear Study, University of Tokyo, RIKEN Campus, 2-1 Hirosawa, Wako, Saitama 351-0298 (Japan); Kobayashi, T. [Department of Physics, Tohoku University, Aoba, Sendai 980-8578 (Japan); Matsushita, M.; Michimasa, S. [Center for Nuclear Study, University of Tokyo, RIKEN Campus, 2-1 Hirosawa, Wako, Saitama 351-0298 (Japan); Otsu, H. [RIKEN Nishina Center, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Shimoura, S. [Center for Nuclear Study, University of Tokyo, RIKEN Campus, 2-1 Hirosawa, Wako, Saitama 351-0298 (Japan); Teranishi, T. [Department of Physics, Kyushu University, 6-10-1 Hakozaki, Fukuoka 812-8581 (Japan)

    2016-06-01

    The {sup 82}Ge beam has been produced by the in-flight fission reaction of the {sup 238}U primary beam with 345 MeV/u at the RIKEN RI beam factory, and slowed down to about 15 MeV/u using the energy degraders. The momentum-compression mode was applied to the second stage of the BigRIPS separator to reduce the momentum spread. The energy was successfully reduced down to 13 ± 2.5 MeV/u as expected. The focus was not optimized at the end of the second stage; therefore, the beam size was larger than expected. The transmission of the second stage was half of the simulated value, mainly due to this defocusing. The two-stage separation worked very well for the slowed-down beam with the momentum-compression mode.

  4. THE SLOWING DOWN OF THE CORROSION OF ELEMENTS OF THE EQUIPMENT OF HEAVY METALS AT ELEVATED TEMPERATURES

    OpenAIRE

    Носачова, Юлія Вікторівна; Ярошенко, М. М.; Корзун, А. О.; КОРОВЧЕНКО, К. С.

    2017-01-01

    This article examines heavy-metal ions, their ability to slow down the corrosion process, and the impact of ambient temperature on their effectiveness. Solving the problem of corrosion will reduce the impact of large industrial enterprises on the environment and minimize the economic costs. To do this, plants should create systems without a discharge of waste water, that is, closed recycling systems, which result in a significant reduction in intake of fresh water from natural sourc...

  5. Lack of Critical Slowing Down Suggests that Financial Meltdowns Are Not Critical Transitions, yet Rising Variability Could Signal Systemic Risk

    Science.gov (United States)

    Hoarau, Quentin

    2016-01-01

    Complex systems inspired analysis suggests a hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies on climatic and ecological dynamical systems have shown that approach to tipping points is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. Therefore, it has been suggested that critical slowing down may be used as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US (Dow Jones Index, S&P 500 and NASDAQ) and two European markets (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by time series variance and spectral function at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions that occur in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are likely to be stochastic transitions which can occur even when the system is far away from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real world scenarios. Our study offers rising variability as a precursor of financial meltdowns, albeit with the limitation that it may signal false alarms. PMID:26761792
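
The two indicators contrasted in this abstract, lag-1 autocorrelation (critical slowing down) and variance (rising variability), are both computed over sliding windows. A sketch on synthetic returns (not real market data; the noise amplitude is made to grow with time to mimic rising variability):

```python
import numpy as np

# Rolling early-warning indicators on a synthetic return series whose noise
# amplitude grows with time, mimicking rising variability before a crash.
rng = np.random.default_rng(1)

n = 2000
sigma = np.linspace(0.5, 2.0, n)          # slowly increasing perturbation strength
returns = sigma * rng.standard_normal(n)  # uncorrelated (no critical slowing down)

def rolling_indicators(x, window):
    """Rolling variance and lag-1 autocorrelation over sliding windows."""
    var, ac1 = [], []
    for i in range(len(x) - window):
        w = x[i:i + window]
        var.append(np.var(w))
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

var, ac1 = rolling_indicators(returns, window=250)
# Variance trends upward while autocorrelation stays near zero -- the pattern
# the study reports for markets before crashes.
```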

  6. Detailed resonance absorption calculations with the Monte Carlo code MCNP and collision probability version of the slowing down code ROLAIDS

    International Nuclear Information System (INIS)

    Kruijf, W.J.M. de; Janssen, A.J.

    1994-01-01

    Very accurate Monte Carlo calculations with the code MCNP have been performed to serve as reference for benchmark calculations on resonance absorption by U 238 in a typical PWR pin-cell geometry. Calculations show that the energy-pointwise slowing down code ROLAIDS calculates the resonance absorption accurately. Calculations with the multigroup discrete ordinates code XSDRN show that accurate results can only be achieved with a very fine energy mesh. (authors). 9 refs., 5 figs., 2 tabs

  7. Contribution to analytical solution of neutron slowing down problem in homogeneous and heterogeneous media; Prilog analitickom resavanju problema usporavanja neutrona u homogenim i heterogenim sredinama

    Energy Technology Data Exchange (ETDEWEB)

    Stefanovic, D B [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Yugoslavia)

    1970-07-01

    The objective of this work is to describe a new analytical solution of the neutron slowing down equation for infinite monoatomic media with arbitrary energy dependence of the cross section. The solution is obtained by introducing Green slowing down functions instead of starting from the slowing down equations directly. The previously used methods for calculation of fission neutron spectra in the reactor cell were numerical. The proposed analytical method was used for calculating the space-energy distribution of fast neutrons and the number of neutron reactions in a thermal reactor cell. The role of the analytical method in solving the neutron slowing down problem in reactor physics is to enable understanding of the slowing down process and neutron transport. The obtained results could be used as standards for testing the accuracy of approximate and practical methods.

  8. Particle-in-cell studies of fast-ion slowing-down rates in cool tenuous magnetized plasma

    Science.gov (United States)

    Evans, Eugene S.; Cohen, Samuel A.; Welch, Dale R.

    2018-04-01

    We report on 3D-3V particle-in-cell simulations of fast-ion energy-loss rates in a cold, weakly-magnetized, weakly-coupled plasma where the electron gyroradius, ρe, is comparable to or less than the Debye length, λDe, and the fast-ion velocity exceeds the electron thermal velocity, a regime in which the electron response may be impeded. These simulations use explicit algorithms, spatially resolve ρe and λDe, and temporally resolve the electron cyclotron and plasma frequencies. For mono-energetic dilute fast ions with isotropic velocity distributions, these scaling studies of the slowing-down time, τs, versus fast-ion charge are in agreement with unmagnetized slowing-down theory; with an applied magnetic field, no consistent anisotropy between τs in the cross-field and field-parallel directions could be resolved. Scaling the fast-ion charge is confirmed as a viable way to reduce the required computational time for each simulation. The implications of these slowing down processes are described for one magnetic-confinement fusion concept, the small, advanced-fuel, field-reversed configuration device.
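
The charge scaling these simulations confirm can be checked with a toy drag model: in unmagnetized slowing-down theory the Coulomb drag on a fast ion scales as the square of its charge, so τs scales as 1/Z² with other parameters fixed. A sketch in arbitrary units (not the paper's PIC model):

```python
# Toy check of the 1/Z^2 scaling of the slowing-down time: integrate a linear
# drag dv/dt = -nu0 * Z**2 * v until the ion has lost most of its speed.
# Units and constants are arbitrary; only the scaling matters here.
def slowing_time(Z, v0=1.0, v_stop=0.1, nu0=1.0, dt=1e-4):
    v, t = v0, 0.0
    while v > v_stop:
        v += -nu0 * Z**2 * v * dt  # Euler step of the drag equation
        t += dt
    return t

t1, t2 = slowing_time(1), slowing_time(2)
print(round(t1 / t2, 1))  # ~4.0: doubling the charge quarters tau_s
```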

  9. Measurement of the Neutron Slowing-Down Time Distribution at 1.46 eV and its Space Dependence in Water

    International Nuclear Information System (INIS)

    Moeller, E.

    1965-12-01

    The use of the time dependent reaction rate method for the measurement of neutron slowing-down time distributions in hydrogen has been analyzed and applied to the case of slowing down in water. Neutrons with energies of about 1 MeV were slowed down, and the time-dependent neutron density at 1.46 eV and its space dependence was measured with a time resolution of 0.042 μs. The results confirm the well known theory for time-dependent slowing down in hydrogen. The space dependence of the distributions is well described by the P 1 calculations by Claesson.
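
For scattering on hydrogen the collision statistics behind such measurements are simple enough to simulate directly: each elastic collision leaves the neutron with an energy uniform on (0, E], so lethargy gains are exponentially distributed with mean ξ = 1. A minimal Monte Carlo sketch (collision counting only, no timing or absorption):

```python
import numpy as np

# Count elastic collisions needed to slow a neutron from 1 MeV to 1.46 eV in
# hydrogen. For 1H the post-collision energy is uniform on (0, E], so each
# collision adds an Exp(1)-distributed lethargy; the mean number of collisions
# is ln(E0/Ef) + 1, about 14.4 for these energies.
rng = np.random.default_rng(2)

def collisions_to_slow(e0=1.0e6, e_final=1.46, trials=10000):
    total = 0
    for _ in range(trials):
        e = e0
        while e > e_final:
            e *= rng.uniform()  # isotropic CM scattering on hydrogen
            total += 1
    return total / trials

mean_n = collisions_to_slow()
```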

  10. Measurement of the Neutron Slowing-Down Time Distribution at 1.46 eV and its Space Dependence in Water

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, E

    1965-12-15

    The use of the time dependent reaction rate method for the measurement of neutron slowing-down time distributions in hydrogen has been analyzed and applied to the case of slowing down in water. Neutrons with energies of about 1 MeV were slowed down, and the time-dependent neutron density at 1.46 eV and its space dependence was measured with a time resolution of 0.042 {mu}s. The results confirm the well known theory for time-dependent slowing down in hydrogen. The space dependence of the distributions is well described by the P{sub 1}-calculations by Claesson.

  11. Reaction-rate coefficients, high-energy ions slowing-down, and power balance in a tokamak fusion reactor plasma

    International Nuclear Information System (INIS)

    Tone, Tatsuzo

    1978-07-01

    Described are the reactivity coefficient of the D-T fusion reaction, the slowing-down processes of deuterons injected at high energy and of the 3.52 MeV alpha particles generated in the D-T reaction, and the power balance in a tokamak reactor plasma. Most of the results were obtained in the first preliminary design of the JAERI Experimental Fusion Reactor (JXFR), driven with stationary neutral beam injection. A manual for the numerical computation program ''BALTOK'' developed for these calculations is given in the appendix. (auth.)

  12. The Widom-Rowlinson mixture on a sphere: elimination of exponential slowing down at first-order phase transitions

    International Nuclear Information System (INIS)

    Fischer, T; Vink, R L C

    2010-01-01

    Computer simulations of first-order phase transitions using 'standard' toroidal boundary conditions are generally hampered by exponential slowing down. This is partly due to interface formation, and partly due to shape transitions. The latter occur when droplets become large such that they self-interact through the periodic boundaries. On a spherical simulation topology, however, shape transitions are absent. We expect that by using an appropriate bias function, exponential slowing down can be largely eliminated. In this work, these ideas are applied to the two-dimensional Widom-Rowlinson mixture confined to the surface of a sphere. Indeed, on the sphere, we find that the number of Monte Carlo steps needed to sample a first-order phase transition does not increase exponentially with system size, but rather as a power law τ ∝ V{sup α}, with α ∼ 2.5, and V the system area. This is remarkably close to a random walk for which α{sub RW} = 2. The benefit of this improved scaling behavior for biased sampling methods, such as the Wang-Landau algorithm, is investigated in detail.
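
An exponent like the α reported here is typically extracted by a least-squares fit in log-log space. A sketch with synthetic (V, τ) data generated at α = 2.5 (invented values, not the paper's measurements):

```python
import numpy as np

# Fit tau ~ V^alpha on synthetic data with alpha = 2.5 built in and a small
# log-normal scatter -- the same log-log regression one would run on measured
# sampling times versus system area.
rng = np.random.default_rng(3)
V = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])    # system areas
tau = 3.0 * V**2.5 * np.exp(0.02 * rng.standard_normal(V.size))

alpha, log_c = np.polyfit(np.log(V), np.log(tau), 1)  # slope = exponent
```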

  13. Slowing down of test particles in a plasma (1961); Ralentissement de particules test dans un plasma (1961)

    Energy Technology Data Exchange (ETDEWEB)

    Belayche, P; Chavy, P; Dupoux, M; Salmon, J [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1961-07-01

    Numerical solution of the Fokker-Planck equation applied to the slowing down of tritons in a deuterium plasma. After the equations and the boundary conditions have been written, some attention is paid to the numerical tricks used to run the problem on a high speed electronic computer. The numerical results thus obtained are then analyzed and as far as possible, mathematically explained. (authors) [French] Resolution numerique de l'equation de Fokker-Planck appliquee au ralentissement de tritons dans un plasma de deuterium. Apres avoir rappele les equations, les conditions aux limites, l'accent est mis sur les artifices numeriques utilises pour traiter le probleme sur une calculatrice a grande vitesse. Les resultats numeriques obtenus sont ensuite analyses et si possible expliques mathematiquement. En particulier ils peuvent se rattacher a ceux obtenus par application directe de la formule de Spitzer. (auteurs)

  14. Spectrum Evolution of Accelerating or Slowing down Soliton at its Propagation in a Medium with Gold Nanorods

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Lysak, Tatiana M.

    2018-04-01

    We investigate both numerically and analytically the spectrum evolution of a novel type soliton - nonlinear chirped accelerating or decelerating soliton - at a femtosecond pulse propagation in a medium containing noble nanoparticles. In our consideration, we take into account one- or two-photon absorption of laser radiation by nanorods, and time-dependent nanorod aspect ratio changing due to their melting or reshaping because of laser energy absorption. The chirped solitons are formed due to the trapping of laser radiation by the nanorods reshaping fronts, if a positive or negative phase-amplitude grating is induced by laser radiation. Accelerating or slowing down chirped soliton formation is accompanied by the soliton spectrum blue or red shift. To prove our numerical results, we derived the approximate analytical law for the spectrum maximum intensity evolution along the propagation coordinate, based on earlier developed approximate analytical solutions for accelerating and decelerating solitons.

  15. Slowing down and stretching DNA with an electrically tunable nanopore in a p–n semiconductor membrane

    International Nuclear Information System (INIS)

    Melnikov, Dmitriy V; Gracheva, Maria E; Leburton, Jean-Pierre

    2012-01-01

    We have studied single-stranded DNA translocation through a semiconductor membrane consisting of doped p and n layers of Si forming a p–n-junction. Using Brownian dynamics simulations of the biomolecule in the self-consistent membrane–electrolyte potential obtained from the Poisson–Nernst–Planck model, we show that the DNA translocation is slowed down while the polymer length is extended more than when its motion is constricted only by the physical confinement of the nanopore. The biomolecule elongation is particularly dramatic on the n-side of the membrane where the lateral membrane electric field restricts (focuses) the biomolecule motion more than on the p-side. The latter effect makes our membrane a solid-state analog of the α-hemolysin biochannel. The results indicate that the tunable local electric field inside the membrane can effectively control the dynamics of DNA in the channel to either momentarily trap the biomolecule, slow it down, or allow it to translocate at will. (paper)

  16. Status on Establishing the Feasibility of Lead Slowing Down Spectroscopy for Direct Measurement of Plutonium in Used Fuel

    Energy Technology Data Exchange (ETDEWEB)

    Kulisek, Jonathan A.; Anderson, Kevin K.; Casella, Andrew M.; Gesh, Christopher J.; Warren, Glen A.; Gavron, Victor A.; Devlin, M.; Haight, R. C.; O' Donnell, J. M.; Danon, Yaron; Weltz, Adam; Bonebrake, Eric; Imel, G. R.; Harris, Jason; Beller, Dennis; Hatchett, D.; Droessler, J.

    2012-08-30

    Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration to study the feasibility of Lead Slowing Down Spectroscopy. This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today’s confirmatory assay methods. This paper will present efforts on the development of time-spectral analysis algorithms, fast neutron detector advances, and validation and testing measurements.

  17. Interference effects in angular and spectral distributions of X-ray Transition Radiation from Relativistic Heavy Ions crossing a radiator: Influence of absorption and slowing-down

    Energy Technology Data Exchange (ETDEWEB)

    Fiks, E.I.; Pivovarov, Yu.L.

    2015-07-15

    Theoretical analysis and representative calculations of angular and spectral distributions of X-ray Transition Radiation (XTR) by Relativistic Heavy Ions (RHI) crossing a radiator are presented taking into account both XTR absorption and RHI slowing-down. The calculations are performed for RHI energies of GSI, FAIR, CERN SPS and LHC and demonstrate the influence of XTR photon absorption as well as RHI slowing-down in a radiator on the appearance/disappearance of interference effects in both angular and spectral distributions of XTR.

  18. Measurement and Analysis Plan for Investigation of Spent-Fuel Assay Using Lead Slowing-Down Spectroscopy

    International Nuclear Information System (INIS)

    Smith, Leon E.; Haas, Derek A.; Gavron, Victor A.; Imel, G.R.; Ressler, Jennifer J.; Bowyer, Sonya M.; Danon, Y.; Beller, D.

    2009-01-01

    Under funding from the Department of Energy Office of Nuclear Energy's Materials, Protection, Accounting, and Control for Transmutation (MPACT) program (formerly the Advanced Fuel Cycle Initiative Safeguards Campaign), Pacific Northwest National Laboratory (PNNL) and Los Alamos National Laboratory (LANL) are collaborating to study the viability of lead slowing-down spectroscopy (LSDS) for spent-fuel assay. Based on the results of previous simulation studies conducted by PNNL and LANL to estimate potential LSDS performance, a more comprehensive study of LSDS viability has been defined. That study includes benchmarking measurements, development and testing of key enabling instrumentation, and continued study of time-spectra analysis methods. This report satisfies the requirements for a PNNL/LANL deliverable that describes the objectives, plans and contributing organizations for a comprehensive three-year study of LSDS for spent-fuel assay. This deliverable was generated largely during the LSDS workshop held on August 25-26, 2009 at Rensselaer Polytechnic Institute (RPI). The workshop itself was a prominent milestone in the FY09 MPACT project and is also described within this report.

  19. Epigenomic maintenance through dietary intervention can facilitate DNA repair process to slow down the progress of premature aging.

    Science.gov (United States)

    Ghosh, Shampa; Sinha, Jitendra Kumar; Raghunath, Manchala

    2016-09-01

    DNA damage caused by various sources remains one of the most researched topics in the area of aging and neurodegeneration. Increased DNA damage causes premature aging. Aging is plastic and is characterised by the decline in the ability of a cell/organism to maintain genomic stability. Lifespan can be modulated by various interventions like calorie restriction, a balanced diet of macro and micronutrients or supplementation with nutrients/nutrient formulations such as Amalaki rasayana, docosahexaenoic acid, resveratrol, curcumin, etc. Increased levels of DNA damage in the form of double stranded and single stranded breaks are associated with decreased longevity in animal models like WNIN/Ob obese rats. Erroneous DNA repair can result in accumulation of DNA damage products, which in turn result in premature aging disorders such as Hutchinson-Gilford progeria syndrome. Epigenomic studies of the aging process have opened a completely new arena for research and development of drugs and therapeutic agents. We propose here that agents or interventions that can maintain epigenomic stability and facilitate the DNA repair process can slow down the progress of premature aging, if not completely prevent it. © 2016 IUBMB Life, 68(9):717-721, 2016. © 2016 International Union of Biochemistry and Molecular Biology.

  20. Leaf litter traits of invasive species slow down decomposition compared to Spanish natives: a broad phylogenetic comparison.

    Science.gov (United States)

    Godoy, Oscar; Castro-Díez, Pilar; Van Logtestijn, Richard S P; Cornelissen, Johannes H C; Valladares, Fernando

    2010-03-01

    Leaf traits related to the performance of invasive alien species can influence nutrient cycling through litter decomposition. However, there is no consensus yet about whether there are consistent differences in functional leaf traits between invasive and native species that also manifest themselves through their "after life" effects on litter decomposition. When addressing this question it is important to avoid confounding effects of other plant traits related to early phylogenetic divergences and to understand the mechanism underlying the observed results to predict which invasive species will exert larger effects on nutrient cycling. We compared initial leaf litter traits, and their effect on decomposability as tested in standardized incubations, in 19 invasive-native pairs of co-familial species from Spain. They included 12 woody and seven herbaceous alien species representative of the Spanish invasive flora. The predictive power of leaf litter decomposition rates followed the order: growth form > family > status (invasive vs. native) > leaf type. Within species pairs litter decomposition tended to be slower and more dependent on N and P in invaders than in natives. This difference was likely driven by the higher lignin content of invader leaves. Although our study has the limitation of not representing the natural conditions from each invaded community, it suggests a potential slowing down of the nutrient cycle at ecosystem scale upon invasion.

  1. Charging of insulators by multiply-charged-ion impact probed by slowing down of fast binary-encounter electrons

    Science.gov (United States)

    de Filippo, E.; Lanzanó, G.; Amorini, F.; Cardella, G.; Geraci, E.; Grassi, L.; La Guidara, E.; Lombardo, I.; Politi, G.; Rizzo, F.; Russotto, P.; Volant, C.; Hagmann, S.; Rothard, H.

    2010-12-01

    The interaction of ion beams with insulators leads to charging-up phenomena, which at present are under investigation in connection with guiding phenomena in nanocapillaries with possible application in nanofocused beams. We studied the charging dynamics of insulating foil targets [Mylar, polypropylene (PP)] irradiated with swift ion beams (C, O, Ag, and Xe at 40, 23, 40, and 30 MeV/u, respectively) via the measurement of the slowing down of fast binary-encounter electrons. Also, sandwich targets (Mylar covered with a thin Au layer on both surfaces) and Mylar with Au on only one surface were used. Fast-electron spectra were measured by the time-of-flight method at the superconducting cyclotron of Laboratori Nazionali del Sud (LNS) Catania. The charge buildup leads to target-material-dependent potentials of the order of 6.0 kV for Mylar and 2.8 kV for PP. The sandwich targets, surprisingly, show the same behavior as the insulating targets, whereas a single Au layer on the electron and ion exit side strongly suppresses the charging phenomenon. The accumulated number of projectiles needed for charging up is inversely proportional to electronic energy loss. Thus, the charging up is directly related to emission of secondary electrons.

  2. Charging of insulators by multiply-charged-ion impact probed by slowing down of fast binary-encounter electrons

    International Nuclear Information System (INIS)

    De Filippo, E.; Lanzano, G.; Cardella, G.; Amorini, F.; Geraci, E.; Grassi, L.; Politi, G.; La Guidara, E.; Lombardo, I.; Rizzo, F.; Russotto, P.; Volant, C.; Hagmann, S.; Rothard, H.

    2010-01-01

    The interaction of ion beams with insulators leads to charging-up phenomena, which at present are under investigation in connection with guiding phenomena in nanocapillaries with possible application in nanofocused beams. We studied the charging dynamics of insulating foil targets [Mylar, polypropylene (PP)] irradiated with swift ion beams (C, O, Ag, and Xe at 40, 23, 40, and 30 MeV/u, respectively) via the measurement of the slowing down of fast binary-encounter electrons. Also, sandwich targets (Mylar covered with a thin Au layer on both surfaces) and Mylar with Au on only one surface were used. Fast-electron spectra were measured by the time-of-flight method at the superconducting cyclotron of Laboratori Nazionali del Sud (LNS) Catania. The charge buildup leads to target-material-dependent potentials of the order of 6.0 kV for Mylar and 2.8 kV for PP. The sandwich targets, surprisingly, show the same behavior as the insulating targets, whereas a single Au layer on the electron and ion exit side strongly suppresses the charging phenomenon. The accumulated number of projectiles needed for charging up is inversely proportional to electronic energy loss. Thus, the charging up is directly related to emission of secondary electrons.

  3. MicroRNA-124 slows down the progression of Huntington′s disease by promoting neurogenesis in the striatum

    Directory of Open Access Journals (Sweden)

    Tian Liu

    2015-01-01

    Full Text Available MicroRNA-124 contributes to neurogenesis through regulating its targets, but its expression both in the brain of Huntington′s disease mouse models and patients is decreased. However, the effects of microRNA-124 on the progression of Huntington′s disease have not been reported. Results from this study showed that microRNA-124 increased the latency to fall for each R6/2 Huntington′s disease transgenic mouse in the rotarod test. 5-Bromo-2′-deoxyuridine (BrdU) staining of the striatum shows an increase in neurogenesis. In addition, brain-derived neurotrophic factor and peroxisome proliferator-activated receptor gamma coactivator 1-alpha protein levels in the striatum were increased and SRY-related HMG box transcription factor 9 protein level was decreased. These findings suggest that microRNA-124 slows down the progression of Huntington′s disease possibly through its important role in neuronal differentiation and survival.

  4. When high-capacity readers slow down and low-capacity readers speed up: Working memory and locality effects

    Directory of Open Access Journals (Sweden)

    Bruno eNicenboim

    2016-03-01

    Full Text Available We examined the effects of argument-head distance in SVO and SOV languages (Spanish and German, while taking into account readers’ working memory capacity and controlling for expectation (Levy, 2008) and other factors. We predicted only locality effects, that is, a slow-down produced by increased dependency distance (Gibson, 2000; Lewis & Vasishth, 2005). Furthermore, we expected stronger locality effects for readers with low working memory capacity. Contrary to our predictions, low-capacity readers showed faster reading with increased distance, while high-capacity readers showed locality effects. We suggest that while the locality effects are compatible with memory-based explanations, the speedup of low-capacity readers can be explained by an increased probability of retrieval failure. We present a computational model based on ACT-R built under the previous assumptions, which is able to give a qualitative account for the present data and can be tested in future research. Our results suggest that in some cases, interpreting longer RTs as indexing increased processing difficulty and shorter RTs as facilitation may be too simplistic: The same increase in processing difficulty may lead to slowdowns in high-capacity readers and speedups in low-capacity ones. Ignoring individual level capacity differences when investigating locality effects may lead to misleading conclusions.

  5. FOXO/DAF-16 Activation Slows Down Turnover of the Majority of Proteins in C. elegans.

    Science.gov (United States)

    Dhondt, Ineke; Petyuk, Vladislav A; Cai, Huaihan; Vandemeulebroucke, Lieselot; Vierstraete, Andy; Smith, Richard D; Depuydt, Geert; Braeckman, Bart P

    2016-09-13

    Most aging hypotheses assume the accumulation of damage, resulting in gradual physiological decline and, ultimately, death. Avoiding protein damage accumulation by enhanced turnover should slow down the aging process and extend the lifespan. However, lowering translational efficiency extends rather than shortens the lifespan in C. elegans. We studied turnover of individual proteins in the long-lived daf-2 mutant by combining SILeNCe (stable isotope labeling by nitrogen in Caenorhabditis elegans) and mass spectrometry. Intriguingly, the majority of proteins displayed prolonged half-lives in daf-2, whereas others remained unchanged, signifying that longevity is not supported by high protein turnover. This slowdown was most prominent for translation-related and mitochondrial proteins. In contrast, the high turnover of lysosomal hydrolases and very low turnover of cytoskeletal proteins remained largely unchanged. The slowdown of protein dynamics and decreased abundance of the translational machinery may point to the importance of anabolic attenuation in lifespan extension, as suggested by the hyperfunction theory. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
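
Assuming first-order turnover, the half-lives compared in such labeling studies follow directly from the labeled fraction after a given chase time. A back-of-envelope sketch (numbers invented for illustration, not from the paper):

```python
import math

# If a fraction f of a protein pool is newly synthesized (isotope-labeled)
# after time t, first-order turnover gives rate k = -ln(1 - f)/t and
# half-life t_half = ln(2)/k. A smaller labeled fraction at the same time
# point means slower turnover and a longer half-life.
def half_life(f_labeled, t):
    k = -math.log(1.0 - f_labeled) / t
    return math.log(2.0) / k

fast = half_life(0.40, 5.0)  # 40% labeled after 5 days
slow = half_life(0.20, 5.0)  # 20% labeled after 5 days (daf-2-like slowdown)
```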

  6. FOXO/DAF-16 Activation Slows Down Turnover of the Majority of Proteins in C. elegans

    Directory of Open Access Journals (Sweden)

    Ineke Dhondt

    2016-09-01

    Full Text Available Most aging hypotheses assume the accumulation of damage, resulting in gradual physiological decline and, ultimately, death. Avoiding protein damage accumulation by enhanced turnover should slow down the aging process and extend the lifespan. However, lowering translational efficiency extends rather than shortens the lifespan in C. elegans. We studied turnover of individual proteins in the long-lived daf-2 mutant by combining SILeNCe (stable isotope labeling by nitrogen in Caenorhabditis elegans) and mass spectrometry. Intriguingly, the majority of proteins displayed prolonged half-lives in daf-2, whereas others remained unchanged, signifying that longevity is not supported by high protein turnover. This slowdown was most prominent for translation-related and mitochondrial proteins. In contrast, the high turnover of lysosomal hydrolases and very low turnover of cytoskeletal proteins remained largely unchanged. The slowdown of protein dynamics and decreased abundance of the translational machinery may point to the importance of anabolic attenuation in lifespan extension, as suggested by the hyperfunction theory.

  7. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    Science.gov (United States)

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  8. Therapeutic dosages of aspirin counteract the IL-6 induced pro-tumorigenic effects by slowing down the ribosome biogenesis rate

    Science.gov (United States)

    Brighenti, Elisa; Giannone, Ferdinando Antonino; Fornari, Francesca; Onofrillo, Carmine; Govoni, Marzia; Montanaro, Lorenzo; Treré, Davide; Derenzini, Massimo

    2016-01-01

    Chronic inflammation is a risk factor for the onset of cancer and the regular use of aspirin reduces the risk of cancer development. Here we showed that therapeutic dosages of aspirin counteract the pro-tumorigenic effects of the inflammatory cytokine interleukin(IL)-6 in cancer and non-cancer cell lines, and in mouse liver in vivo. We found that therapeutic dosages of aspirin prevented IL-6 from inducing the down-regulation of p53 expression and the acquisition of the epithelial mesenchymal transition (EMT) phenotypic changes in the cell lines. This was the result of a reduction in c-Myc mRNA transcription which was responsible for a down-regulation of the ribosomal protein S6 expression which, in turn, slowed down the rRNA maturation process, thus reducing the ribosome biogenesis rate. The perturbation of ribosome biogenesis hindered the Mdm2-mediated proteasomal degradation of p53, through the ribosomal protein-Mdm2-p53 pathway. P53 stabilization hindered the IL-6 induction of the EMT changes. The same effects were observed in livers from mice stimulated with IL-6 and treated with aspirin. It is worth noting that aspirin down-regulated ribosome biogenesis, stabilized p53 and up-regulated E-cadherin expression also in unstimulated control cells. In conclusion, these data showed that therapeutic dosages of aspirin increase the p53-mediated tumor-suppressor activity of the cells and are thus able to reduce the risk of cancer onset, whether or not it is linked to chronic inflammatory processes. PMID:27557515

  9. Therapeutic dosages of aspirin counteract the IL-6 induced pro-tumorigenic effects by slowing down the ribosome biogenesis rate.

    Science.gov (United States)

    Brighenti, Elisa; Giannone, Ferdinando Antonino; Fornari, Francesca; Onofrillo, Carmine; Govoni, Marzia; Montanaro, Lorenzo; Treré, Davide; Derenzini, Massimo

    2016-09-27

    Chronic inflammation is a risk factor for the onset of cancer and the regular use of aspirin reduces the risk of cancer development. Here we showed that therapeutic dosages of aspirin counteract the pro-tumorigenic effects of the inflammatory cytokine interleukin(IL)-6 in cancer and non-cancer cell lines, and in mouse liver in vivo. We found that therapeutic dosages of aspirin prevented IL-6 from inducing the down-regulation of p53 expression and the acquisition of the epithelial mesenchymal transition (EMT) phenotypic changes in the cell lines. This was the result of a reduction in c-Myc mRNA transcription which was responsible for a down-regulation of the ribosomal protein S6 expression which, in turn, slowed down the rRNA maturation process, thus reducing the ribosome biogenesis rate. The perturbation of ribosome biogenesis hindered the Mdm2-mediated proteasomal degradation of p53, through the ribosomal protein-Mdm2-p53 pathway. P53 stabilization hindered the IL-6 induction of the EMT changes. The same effects were observed in livers from mice stimulated with IL-6 and treated with aspirin. It is worth noting that aspirin down-regulated ribosome biogenesis, stabilized p53 and up-regulated E-cadherin expression also in unstimulated control cells. In conclusion, these data showed that therapeutic dosages of aspirin increase the p53-mediated tumor-suppressor activity of the cells and are thus able to reduce the risk of cancer onset, whether or not it is linked to chronic inflammatory processes.

  10. Neutron slowing down and transport in monoisotopic media with constant cross sections or with a square-well minimum

    International Nuclear Information System (INIS)

    Peng, W.H.

    1977-01-01

    A specialized moments-method computer code was constructed for the calculation of the even spatial moments of the scalar flux, phi/sub 2n/, through 2n = 80. Neutron slowing-down and transport in a medium with constant cross sections was examined and the effect of a superimposed square-well cross section minimum on the penetrating flux was studied. In the constant cross section case, for nuclei that are not too light, the scalar flux is essentially independent of the nuclide mass. The numerical results obtained were used to test the validity of existing analytic approximations to the flux at both small and large lethargies relative to the source energy. As a result it was possible to define the regions in the lethargy--distance plane where these analytic solutions apply with reasonable accuracy. A parametric study was made of the effect of a square-well cross section minimum on neutron fluxes at energies below the minimum. It was shown that the flux at energies well below the minimum is essentially independent of the position of the minimum in lethargy. The results can be described by a convolution-of-sources model involving only the lethargy separation between detector and source, the width and the relative depth of the minimum. On the basis of the computations and the corresponding model, it is possible to predict, e.g., the conditions under which transport in the region of minimum completely determines the penetrating flux. At the other extreme, the model describes when the transport in the minimum can be treated in the same manner as in any comparable lethargy interval. With the aid of these criteria it is possible to understand the apparent paradoxical effects of certain minima in neutron penetration through such media as iron and sodium
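    The slowing-down problem described above can be illustrated with a small Monte Carlo sketch (not the paper's moments-method code): histories in an infinite medium with a constant, energy-independent cross section undergo isotropic flights with exponential free paths and isotropic-in-CM elastic collisions, and the low-order even spatial moments of the slowing-down density at a target lethargy are estimated. All parameter values (A, sigma, u_target) are illustrative assumptions.

    ```python
    import math
    import random

    def simulate(n_hist=5000, A=16.0, sigma=1.0, u_target=5.0, seed=1):
        """Track histories to lethargy u_target; return the moments <r^2>, <r^4>."""
        rng = random.Random(seed)
        alpha = ((A - 1.0) / (A + 1.0)) ** 2  # min energy ratio per elastic collision
        m2 = m4 = 0.0
        for _ in range(n_hist):
            x = y = z = u = 0.0
            while u < u_target:
                # isotropic flight direction, exponential free path of mean 1/sigma
                mu = 2.0 * rng.random() - 1.0
                phi = 2.0 * math.pi * rng.random()
                s = -math.log(rng.random()) / sigma
                sin_t = math.sqrt(1.0 - mu * mu)
                x += s * sin_t * math.cos(phi)
                y += s * sin_t * math.sin(phi)
                z += s * mu
                # isotropic (CM) elastic scattering: energy ratio uniform on [alpha, 1]
                u -= math.log(alpha + (1.0 - alpha) * rng.random())
            r2 = x * x + y * y + z * z
            m2 += r2
            m4 += r2 * r2
        return m2 / n_hist, m4 / n_hist

    m2, m4 = simulate()
    print(f"<r^2> ~ {m2:.1f}, <r^4> ~ {m4:.1f}")
    ```

    With roughly u_target/xi collisions per history and a mean-square free path of 2/sigma², the estimate of <r²> comes out near the random-walk value, which is a useful sanity check on such a sketch.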

  11. "You can save time if…"-A qualitative study on internal factors slowing down clinical trials in Sub-Saharan Africa.

    Directory of Open Access Journals (Sweden)

    Nerina Vischer

    Full Text Available The costs, complexity, legal requirements and number of amendments associated with clinical trials are rising constantly, which negatively affects the efficient conduct of trials. In Sub-Saharan Africa, this situation is exacerbated by capacity and funding limitations, which further increase the workload of clinical trialists. At the same time, trials are critically important for improving public health in these settings. The aim of this study was to identify the internal factors that slow down clinical trials in Sub-Saharan Africa. Here, factors are limited to those that exclusively relate to clinical trial teams and sponsors. These factors may be influenced independently of external conditions and may significantly increase trial efficiency if addressed by the respective teams. We conducted sixty key informant interviews with clinical trial staff working in different positions in two clinical research centres in Kenya, Ghana, Burkina Faso and Senegal. The study covered English- and French-speaking, and Eastern and Western parts of Sub-Saharan Africa. We performed thematic analysis of the interview transcripts. We found various internal factors associated with slowing down clinical trials; these were summarised into two broad themes, "planning" and "site organisation". These themes were consistently mentioned across positions and countries. "Planning" factors related to budget feasibility, clear project ideas, realistic deadlines, understanding of trial processes, adaptation to the local context and involvement of site staff in planning. "Site organisation" factors covered staff turnover, employment conditions, career paths, workload, delegation and management. We found that internal factors slowing down clinical trials are of high importance to trial staff. Our data suggest that adequate and coherent planning, careful assessment of the setting, clear task allocation and management capacity strengthening may help to overcome the identified

  12. Slowing-down calculation for charged particles, application to the calculation of the (alpha, neutron) reaction yield in UO2 - PuO2 fuel

    International Nuclear Information System (INIS)

    Dulieu, P.

    1967-11-01

    There is no complete theory, nor are there sufficient experimental data, to predict exactly, in a systematic way, the slowing-down power of any medium for any ion at any energy. However, in each case the energy range can be divided into three regions: the low-energy region, where dE/dx is an increasing function of energy; the intermediate-energy region, where dE/dx has a maximum; and the high-energy region, where dE/dx is a decreasing function of energy. In practice, the code Irma 3 allows dE/dx to be obtained with good precision for protons, deuterons, tritons and alphas in any medium. For particles heavier than alphas it is better to use specific methods. In the case of calculating the yield of the (alpha, neutron) reaction in a UO2-PuO2 fuel cell, the divergences of experimental origin between the existing data lead to adopting a margin of a factor of 1.7 on the yields [fr
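    The high-energy (descending) region of dE/dx described above can be illustrated with the non-relativistic Bethe formula for protons. This is a hedged sketch, not the Irma 3 code; the medium constants (Z, A, density and mean excitation energy I, here values for aluminium) are assumptions chosen only for illustration.

    ```python
    import math

    ME_C2 = 0.511e6     # electron rest energy, eV
    K_BETHE = 0.307075  # 4*pi*N_A*r_e^2*m_e*c^2, in MeV cm^2 / mol

    def bethe_dedx(E_MeV, Z=13, A=26.98, rho=2.70, I_eV=166.0):
        """Proton stopping power -dE/dx in MeV/cm (non-relativistic Bethe formula)."""
        Mp_MeV = 938.272                       # proton rest energy, MeV
        beta2 = 2.0 * E_MeV / Mp_MeV           # non-relativistic: beta^2 ~ 2E/(Mp c^2)
        return K_BETHE * rho * (Z / A) / beta2 * math.log(2.0 * ME_C2 * beta2 / I_eV)

    # in this region dE/dx is a decreasing function of energy, as stated above
    print(bethe_dedx(5.0), bethe_dedx(50.0))
    ```

    The 1/beta² prefactor dominates the slowly growing logarithm, which is why the curve falls with increasing energy in this region; the formula is invalid near and below the stopping-power maximum.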

  13. GORGON - a computer code for the calculation of energy deposition and the slowing down of ions in cold materials and hot dense plasmas

    International Nuclear Information System (INIS)

    Long, K.A.; Moritz, N.; Tahir, N.A.

    1983-11-01

    The computer code GORGON, which calculates the energy deposition and slowing down of ions in cold materials and hot plasmas is described, and analyzed in this report. This code is in a state of continuous development but an intermediate stage has been reached where it is considered useful to document the 'state of the art' at the present time. The GORGON code is an improved version of a code developed by Zinamon et al. as part of a more complex program system for studying the hydrodynamic motion of plane metal targets irradiated by intense beams of protons. The improvements made in the code were necessary to improve its usefulness for problems related to the design and burn of heavy ion beam driven inertial confinement fusion targets. (orig./GG) [de

  14. Significant change in the construction of a door to a room with slowed down neutron field by means of commonly used inexpensive protective materials

    International Nuclear Information System (INIS)

    Konefal, Adam; Laciak, Marcin; Dawidowska, Anna; Osewski, Wojciech

    2014-01-01

    The detailed analysis of nuclear reactions occurring in materials of the door is presented for the typical construction of an entrance door to a room with a slowed down neutron field. The changes in the construction of the door were determined to reduce effectively the level of neutron and gamma radiation in the vicinity of the door in a room adjoining the neutron field room. Optimisation of the door construction was performed with the use of Monte Carlo calculations (GEANT4). The construction proposed in this paper is based on commonly used, inexpensive protective materials such as borax (13.4 cm), lead (4 cm) and stainless steel (0.1 and 0.5 cm on the side of the neutron field room and of the adjoining room, respectively). The improved construction of the door, worked out in the presented studies, can be an effective protection against neutrons with energies up to 1 MeV (authors)

  15. Effect of the size of experimental channels of the lead slowing-down spectrometer SVZ-100 (Institute for Nuclear Research, Moscow) on the moderation constant

    Energy Technology Data Exchange (ETDEWEB)

    Latysheva, L. N.; Bergman, A. A.; Sobolevsky, N. M., E-mail: sobolevs@inr.ru [Russian Academy of Sciences, Institute for Nuclear Research (Russian Federation); Ilic, R. D. [Vinca Institute of Nuclear Sciences (Serbia)

    2013-04-15

    Lead slowing-down (LSD) spectrometers have a low energy resolution (about 30%), but their luminosity is 10{sup 3} to 10{sup 4} times higher than that of time-of-flight (TOF) spectrometers. A high luminosity of LSD spectrometers makes it possible to use them to measure neutron cross section for samples of mass about several micrograms. These features specify a niche for the application of LSD spectrometers in measuring neutron cross sections for elements hardly available in macroscopic amounts-in particular, for actinides. A mathematical simulation of the parameters of SVZ-100 LSD spectrometer of the Institute for Nuclear Research (INR, Moscow) is performed in the present study on the basis of the MCNPX code. It is found that the moderation constant, which is the main parameter of LSD spectrometers, is highly sensitive to the size and shape of detecting volumes in calculations and, hence, to the real size of experimental channels of the LSD spectrometer.
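    The moderation constant mentioned above enters the basic LSD calibration relation E(t) ≈ K/(t + t0)², which links slowing-down time to the mean neutron energy. A minimal sketch, assuming illustrative values of K and t0 of the order reported for lead assemblies (not the SVZ-100 calibration itself):

    ```python
    # Slowing-down time vs. mean energy in a lead slowing-down spectrometer,
    # E(t) ~ K / (t + t0)^2. K (keV * us^2) and t0 (us) are assumed, illustrative
    # values; a real instrument determines both from calibration measurements.

    def energy_keV(t_us, K=165.0, t0=0.3):
        """Mean neutron energy (keV) after slowing-down time t_us (microseconds)."""
        return K / (t_us + t0) ** 2

    def time_us(E_keV, K=165.0, t0=0.3):
        """Inverse relation: slowing-down time for a given mean energy."""
        return (K / E_keV) ** 0.5 - t0

    for t in (1.0, 3.0, 10.0, 30.0):
        print(f"t = {t:5.1f} us -> E = {energy_keV(t):8.3f} keV")
    ```

    The quadratic fall-off is why a modest timing window maps onto decades of neutron energy, at the cost of the ~30% energy resolution quoted in the abstract.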

  16. Effect of the size of experimental channels of the lead slowing-down spectrometer SVZ-100 (Institute for Nuclear Research, Moscow) on the moderation constant

    International Nuclear Information System (INIS)

    Latysheva, L. N.; Bergman, A. A.; Sobolevsky, N. M.; Ilić, R. D.

    2013-01-01

    Lead slowing-down (LSD) spectrometers have a low energy resolution (about 30%), but their luminosity is 10 3 to 10 4 times higher than that of time-of-flight (TOF) spectrometers. A high luminosity of LSD spectrometers makes it possible to use them to measure neutron cross section for samples of mass about several micrograms. These features specify a niche for the application of LSD spectrometers in measuring neutron cross sections for elements hardly available in macroscopic amounts—in particular, for actinides. A mathematical simulation of the parameters of SVZ-100 LSD spectrometer of the Institute for Nuclear Research (INR, Moscow) is performed in the present study on the basis of the MCNPX code. It is found that the moderation constant, which is the main parameter of LSD spectrometers, is highly sensitive to the size and shape of detecting volumes in calculations and, hence, to the real size of experimental channels of the LSD spectrometer.

  17. Information slows down hierarchy growth.

    Science.gov (United States)

    Czaplicka, Agnieszka; Suchecki, Krzysztof; Miñano, Borja; Trias, Miquel; Hołyst, Janusz A

    2014-06-01

    We consider models of growing multilevel systems wherein the growth process is driven by rules of tournament selection. A system can be conceived as an evolving tree with a new node being attached to a contestant node at the best hierarchy level (a level nearest to the tree root). The proposed evolution reflects limited information on system properties available to new nodes. It can also be expressed in terms of population dynamics. Two models are considered: a constant tournament (CT) model wherein the number of tournament participants is constant throughout system evolution, and a proportional tournament (PT) model where this number increases proportionally to the growing size of the system itself. The results of analytical calculations based on a rate equation fit well to numerical simulations for both models. In the CT model all hierarchy levels emerge, but the birth time of a consecutive hierarchy level increases exponentially or faster for each new level. The number of nodes at the first hierarchy level grows logarithmically in time, while the size of the last, "worst" hierarchy level oscillates quasi-log-periodically. In the PT model, the occupations of the first two hierarchy levels increase linearly, but worse hierarchy levels either do not emerge at all or appear only by chance in the early stage of system evolution to further stop growing at all. The results allow us to conclude that information available to each new node in tournament dynamics restrains the emergence of new hierarchy levels and that it is the absolute amount of information, not relative, which governs such behavior.
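    The constant-tournament (CT) model described above can be sketched in a few lines: each new node runs a tournament among k randomly sampled existing nodes and attaches to the contestant at the best hierarchy level (nearest the root). The parameters (n_nodes, k) are illustrative, not those of the paper.

    ```python
    import random
    from collections import Counter

    def grow_ct_tree(n_nodes=2000, k=3, seed=7):
        """Grow a CT-model tree; return the hierarchy level of every node."""
        rng = random.Random(seed)
        levels = [0]                                   # root sits at level 0
        for _ in range(n_nodes - 1):
            contestants = [rng.randrange(len(levels)) for _ in range(k)]
            winner = min(contestants, key=lambda i: levels[i])  # best level wins
            levels.append(levels[winner] + 1)          # child one level deeper
        return levels

    occupancy = Counter(grow_ct_tree())
    print(sorted(occupancy.items())[:5])
    ```

    Printing the occupancy per level shows the behaviour the abstract describes qualitatively: the shallow levels fill slowly while most mass accumulates at intermediate levels. The PT model would differ only in making k grow with the current tree size.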

  18. Slowing down with resonance absorption

    International Nuclear Information System (INIS)

    Moura Neto, C. de; Nair, R.P.K.

    1979-08-01

    The presence of heavy nuclei in significant concentrations in nuclear reactors gives rise to absorption resonances. For moderation in the presence of absorbers, an exact solution of the integral equations is possible by numerical methods. Approximate solutions for well-separated resonances, expressed as functions of the practical width (the NR and NRIM approximations), are discussed in this paper. The method is generalized, and a solution by an intermediate approximation is presented in the definition of the resonance integral. (Author) [pt
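    The NR (narrow resonance) approximation mentioned above can be illustrated numerically: for a single-level Breit-Wigner resonance, the effective resonance integral I_eff = ∫ σ_γ(E) σ_b / (σ_t(E) + σ_b) dE/E is evaluated by simple quadrature, showing self-shielding as the background cross section σ_b decreases. The resonance parameters are illustrative (loosely inspired by the 6.67 eV resonance of 238U), not values from the paper, and Doppler broadening is ignored.

    ```python
    import math

    E0, Gg, Gn = 6.67, 0.023, 0.0015   # eV: resonance energy, capture/neutron widths
    G = Gg + Gn                        # total width
    SIGMA0 = 2.16e4                    # peak total cross section, barns (assumed)

    def sigma_gamma(E):
        """Capture cross section (barns), single-level Breit-Wigner, no Doppler."""
        x = 2.0 * (E - E0) / G
        return SIGMA0 * (Gg / G) * math.sqrt(E0 / E) / (1.0 + x * x)

    def sigma_tot(E):
        x = 2.0 * (E - E0) / G
        return SIGMA0 * math.sqrt(E0 / E) / (1.0 + x * x)

    def resonance_integral(sigma_b, n=100000, lo=1.0, hi=50.0):
        """NR effective resonance integral by midpoint quadrature in lethargy."""
        du = math.log(hi / lo) / n
        total = 0.0
        for i in range(n):
            E = lo * math.exp((i + 0.5) * du)
            total += sigma_gamma(E) * sigma_b / (sigma_tot(E) + sigma_b) * du
        return total

    I_shielded, I_dilute = resonance_integral(10.0), resonance_integral(1e6)
    print(f"shielded: {I_shielded:.2f} b, dilute: {I_dilute:.2f} b")
    ```

    In the dilute limit the result approaches the analytic infinite-dilution value pi * SIGMA0 * (Gg/G) * (G/2) / E0, roughly 117 barns for these assumed parameters, while a small σ_b suppresses the integral strongly.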

  19. Information slows down hierarchy growth

    Science.gov (United States)

    Czaplicka, Agnieszka; Suchecki, Krzysztof; Miñano, Borja; Trias, Miquel; Hołyst, Janusz A.

    2014-06-01

    We consider models of growing multilevel systems wherein the growth process is driven by rules of tournament selection. A system can be conceived as an evolving tree with a new node being attached to a contestant node at the best hierarchy level (a level nearest to the tree root). The proposed evolution reflects limited information on system properties available to new nodes. It can also be expressed in terms of population dynamics. Two models are considered: a constant tournament (CT) model wherein the number of tournament participants is constant throughout system evolution, and a proportional tournament (PT) model where this number increases proportionally to the growing size of the system itself. The results of analytical calculations based on a rate equation fit well to numerical simulations for both models. In the CT model all hierarchy levels emerge, but the birth time of a consecutive hierarchy level increases exponentially or faster for each new level. The number of nodes at the first hierarchy level grows logarithmically in time, while the size of the last, "worst" hierarchy level oscillates quasi-log-periodically. In the PT model, the occupations of the first two hierarchy levels increase linearly, but worse hierarchy levels either do not emerge at all or appear only by chance in the early stage of system evolution to further stop growing at all. The results allow us to conclude that information available to each new node in tournament dynamics restrains the emergence of new hierarchy levels and that it is the absolute amount of information, not relative, which governs such behavior.

  20. Expression of CD73 slows down migration of skin dendritic cells, affecting the sensitization phase of contact hypersensitivity reactions in mice.

    Science.gov (United States)

    Neuberger, A; Ring, S; Silva-Vilches, C; Schrader, J; Enk, A; Mahnke, K

    2017-09-01

    Application of haptens to the skin induces release of immune stimulatory ATP into the extracellular space. This "danger" signal can be converted to immunosuppressive adenosine (ADO) by the action of the ectonucleotidases CD39 and CD73, expressed by skin and immune cells. Thus, the expression and regulation of CD73 by skin derived cells may have crucial influence on the outcome of contact hypersensitivity (CHS) reactions. To investigate the role of CD73 expression during 2,4,6-trinitrochlorobenzene (TNCB) induced CHS reactions. Wild type (wt) and CD73 deficient mice were subjected to TNCB induced CHS. In the different mouse strains the resulting ear swelling reaction was recorded along with a detailed phenotypic analysis of the skin migrating subsets of dendritic cells (DC). In CD73 deficient animals the motility of DC was higher than in wt animals, and in particular after sensitization we found increased migration of Langerin + DC from skin to draining lymph nodes (LN). In the TNCB model this led to a stronger sensitization as indicated by increased frequency of interferon-γ producing T cells in the LN and an increased ear thickness after challenge. CD73 derived ADO production slows down migration of Langerin + DC from skin to LN. This may be a crucial mechanism to avoid excessive immune reactions against haptens. Copyright © 2017 Japanese Society for Investigative Dermatology. Published by Elsevier B.V. All rights reserved.

  1. Experimental assessment of the performance of a proposed lead slowing-down spectrometer at WNR/PSR [Weapons Neutron Research/Proton Storage Ring

    International Nuclear Information System (INIS)

    Moore, M.S.; Koehler, P.E.; Michaudon, A.; Schelberg, A.; Danon, Y.; Block, R.C.; Slovacek, R.E.; Hoff, R.W.; Lougheed, R.W.

    1990-01-01

    In November 1989, we carried out a measurement of the fission cross section of 247 Cm, 250 Cf, and 254 Es on the Rensselaer Intense Neutron Source (RINS) at Rensselaer Polytechnic Institute (RPI). In July 1990, we carried out a second measurement, using the same fission chamber and electronics, in beam geometry at the Los Alamos Neutron Scattering Center (LANSCE) facility. Using the relative count rates observed in the two experiments, and the flux-enhancement factors determined by the RPI group for a lead slowing-down spectrometer compared to beam geometry, we can assess the performance of a spectrometer similar to RINS, driven by the Proton Storage Ring (PSR) at the Los Alamos National Laboratory. With such a spectrometer, we find that it is feasible to make measurements with samples of 1 ng for fission, 1 μg for capture, and of isotopes with half-lives of tens of minutes. It is important to note that, while a significant amount of information can be obtained from the low resolution RINS measurement, a definitive determination of average properties, including the level density, requires that the resonance structure be resolved. 12 refs., 5 figs., 3 tabs

  2. Experimental assessment of the performance of a proposed lead slowing-down spectrometer at WNR/PSR (Weapons Neutron Research/Proton Storage Ring)

    Energy Technology Data Exchange (ETDEWEB)

    Moore, M.S.; Koehler, P.E.; Michaudon, A.; Schelberg, A. (Los Alamos National Lab., NM (USA)); Danon, Y.; Block, R.C.; Slovacek, R.E. (Rensselaer Polytechnic Inst., Troy, NY (USA)); Hoff, R.W.; Lougheed, R.W. (Lawrence Livermore National Lab., CA (USA))

    1990-01-01

    In November 1989, we carried out a measurement of the fission cross section of {sup 247}Cm, {sup 250}Cf, and {sup 254}Es on the Rensselaer Intense Neutron Source (RINS) at Rensselaer Polytechnic Institute (RPI). In July 1990, we carried out a second measurement, using the same fission chamber and electronics, in beam geometry at the Los Alamos Neutron Scattering Center (LANSCE) facility. Using the relative count rates observed in the two experiments, and the flux-enhancement factors determined by the RPI group for a lead slowing-down spectrometer compared to beam geometry, we can assess the performance of a spectrometer similar to RINS, driven by the Proton Storage Ring (PSR) at the Los Alamos National Laboratory. With such a spectrometer, we find that it is feasible to make measurements with samples of 1 ng for fission, 1 {mu}g for capture, and of isotopes with half-lives of tens of minutes. It is important to note that, while a significant amount of information can be obtained from the low resolution RINS measurement, a definitive determination of average properties, including the level density, requires that the resonance structure be resolved. 12 refs., 5 figs., 3 tabs.

  3. Application of manure containing tetracyclines slowed down the dissipation of tet resistance genes and caused changes in the composition of soil bacteria.

    Science.gov (United States)

    Xiong, Wenguang; Wang, Mei; Dai, Jinjun; Sun, Yongxue; Zeng, Zhenling

    2018-01-01

    Manure application contributes to the increased environmental burden of antibiotic resistance genes (ARGs). We investigated the response of tetracycline (tet) resistance genes and bacterial taxa to manure application amended with tetracyclines over two months. Representative tetracyclines (oxytetracycline, chlortetracycline and doxycycline), tet resistance genes (tet(M), tet(O), tet(W), tet(S), tet(Q) and tet(X)) and bacterial taxa in the untreated soil, +manure, and +manure+tetracyclines groups were analyzed. The abundances of all tet resistance genes in the +manure group were significantly higher than those in the untreated soil group on day 1. The abundances of all tet resistance genes (except tet(Q) and tet(X)) were significantly lower in the +manure group than those in the +manure+tetracyclines group on day 30 and 60. The dissipation rates were higher in the +manure group than those in the +manure+tetracyclines group. Disturbance of soil bacterial community composition imposed by tetracyclines was also observed. The results indicated that tetracyclines slowed down the dissipation of tet resistance genes in arable soil after manure application. Application of manure amended with tetracyclines may provide a significant selective advantage for species affiliated with the taxonomic families of Micromonosporaceae, Propionibacteriaceae, Streptomycetaceae, Nitrospiraceae and Clostridiaceae. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Penetration of Hydrogen clusters from 10 to 120 kev/u in carbon foils. Study of their slowing-down and charge distribution of emerging fragments

    International Nuclear Information System (INIS)

    Ray, E.M.

    1991-06-01

    This work is devoted to the experimental study of the interaction of fast (10 to 120 keV/p) hydrogen clusters with thin solid targets. First, we have studied the slowing-down of H n + (2≤n≤21) clusters through carbon foils. Until now, this had been done only with molecular ions. We obtain evidence for vicinage effects on the energy loss of proton-clusters. We show that for projectile energies larger than 50 keV/p, the energy loss of a proton in a cluster is enhanced when compared to that of an isolated proton of the same velocity. At lower incident energies, a decrease of the energy loss is observed instead. The same effect is also observed in the energy lost in the entrance window of a surface barrier detector bombarded by clusters. This phenomenon is interpreted in terms of interferences between individual polarisation wakes induced by each proton of the cluster. In the second part, we propose an accurate method to study the charge state of the atomic fragments resulting from the dissociation of fast H n + (2≤n≤15) clusters through a carbon foil. This method also gives the distribution of the neutral atoms among the emerging fragments. These distributions are finally compared with binomial laws expected from independent particles

  5. Revealing the Formation Mechanism of CsPbBr3 Perovskite Nanocrystals Produced via a Slowed-Down Microwave-Assisted Synthesis.

    Science.gov (United States)

    Li, Yanxiu; Huang, He; Xiong, Yuan; Kershaw, Stephen V; Rogach, Andrey L

    2018-03-24

    We developed a microwave-assisted slowed-down synthesis of CsPbBr 3 perovskite nanocrystals, which retards the reaction and allows us to gather useful insights into the formation mechanism of these nanoparticles, by examining the intermediate stages of their growth. The trends in the decay of the emission intensity of CsPbBr 3 nanocrystals under light exposure are well correlated with their stability against decomposition in TEM under electron beam. The results show the change of the crystal structure of CsPbBr 3 nanocrystals from a deficient, easily destroyed lattice to a well-crystallized one. Conversely, the shift in the ease of degradation sheds light on the formation mechanism, indicating first the formation of a bromoplumbate ionic scaffold, with Cs-ion infilling lagging a little behind. Increasing the cation to halide ratio towards the stoichiometric level may account for the improved radiative recombination rates observed in the longer reaction time materials. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Significant change in the construction of a door to a room with slowed down neutron field by means of commonly used inexpensive protective materials.

    Science.gov (United States)

    Konefał, Adam; Łaciak, Marcin; Dawidowska, Anna; Osewski, Wojciech

    2014-12-01

    The detailed analysis of nuclear reactions occurring in materials of the door is presented for the typical construction of an entrance door to a room with a slowed down neutron field. The changes in the construction of the door were determined to reduce effectively the level of neutron and gamma radiation in the vicinity of the door in a room adjoining the neutron field room. Optimisation of the door construction was performed with the use of Monte Carlo calculations (GEANT4). The construction proposed in this paper is based on commonly used, inexpensive protective materials such as borax (13.4 cm), lead (4 cm) and stainless steel (0.1 and 0.5 cm on the side of the neutron field room and of the adjoining room, respectively). The improved construction of the door, worked out in the presented studies, can be an effective protection against neutrons with energies up to 1 MeV. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Robust Kernel (Cross-) Covariance Operators in Reproducing Kernel Hilbert Space toward Kernel Methods

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2016-01-01

    To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose a robust kernel covariance operator (robust kernel CO) and a robust kernel cross-covariance operator (robust kern...

  8. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
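    The sampling idea behind AKCL can be sketched under strong simplifications: points are mapped through an empirical kernel map built on a random sample of landmark points, and ordinary winner-take-all competitive learning then runs in that low-dimensional subspace. This is an illustration of the general idea only, not the authors' algorithm; the toy data, the far-apart prototype initialisation and all parameters are invented for the example.

    ```python
    import math
    import random

    def rbf(x, y, gamma=0.5):
        """Gaussian (RBF) kernel between two points."""
        return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

    def competitive_learning(data, n_landmarks=10, epochs=30, lr=0.1, seed=0):
        rng = random.Random(seed)
        landmarks = rng.sample(data, n_landmarks)            # sampled subspace
        feats = [[rbf(x, l) for l in landmarks] for x in data]  # empirical kernel map
        # two prototypes, initialised far apart to sidestep dead units (toy choice)
        protos = [list(feats[0]), list(feats[-1])]
        def nearest(f):
            return min(range(len(protos)),
                       key=lambda j: sum((a - b) ** 2 for a, b in zip(f, protos[j])))
        for _ in range(epochs):
            for f in feats:
                w = nearest(f)                               # winner-take-all
                protos[w] = [p + lr * (a - p) for p, a in zip(protos[w], f)]
        return [nearest(f) for f in feats]

    rng = random.Random(1)
    cluster_a = [(rng.gauss(0.0, 0.3), rng.gauss(0.0, 0.3)) for _ in range(20)]
    cluster_b = [(rng.gauss(5.0, 0.3), rng.gauss(5.0, 0.3)) for _ in range(20)]
    labels = competitive_learning(cluster_a + cluster_b)
    print(labels)
    ```

    Only the n × n_landmarks slice of the kernel matrix is ever computed, which is the scalability point the abstract makes about avoiding the full kernel matrix.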

  9. Optimized Kernel Entropy Components.

    Science.gov (United States)

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the kernel eigenvectors by importance measured in entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation of the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
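A minimal sketch of the KECA selection rule described above: eigenpairs of the (uncentered) Gram matrix are ranked by their contribution to the Rényi entropy estimate, λ_i(1ᵀe_i)², rather than by eigenvalue alone. The Gaussian kernel and data are illustrative; OKECA's extra ICA-style rotation and its gradient-ascent optimization are not implemented here.

```python
import numpy as np

def keca_rank(K):
    """Rank kernel eigenpairs by their entropy contribution
    lambda_i * (1^T e_i)^2 instead of by eigenvalue alone."""
    vals, vecs = np.linalg.eigh(K)
    contrib = vals * (np.ones(K.shape[0]) @ vecs) ** 2
    order = np.argsort(contrib)[::-1]      # largest entropy contribution first
    return order, vals, vecs

def keca_features(K, k):
    """Project onto the k eigen-directions carrying the most entropy."""
    order, vals, vecs = keca_rank(K)
    top = order[:k]
    return vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
sq = np.sum(X**2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / 2.0)  # RBF Gram
Z = keca_features(K, 2)
```

The contributions sum exactly to 1ᵀK1, the quantity behind the Rényi entropy estimator, which is the sanity check used below.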

  10. Subsampling Realised Kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...

  11. Iterative software kernels

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user-level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.

  12. Classification With Truncated Distance Kernel.

    Science.gov (United States)

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pre-given parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross-validation, suggesting that the TL1 kernel is a promising nonlinear kernel for classification tasks.
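A sketch of the truncated distance kernel in the form usually given for TL1, k(x, y) = max(ρ − ‖x − y‖₁, 0): piecewise linear, locally supported, and not positive semidefinite. The data points and ρ below are illustrative only.

```python
import numpy as np

def tl1_kernel(X, Y, rho):
    """Truncated-distance (TL1) kernel: k(x, y) = max(rho - ||x - y||_1, 0)."""
    D = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)  # pairwise L1 distances
    return np.maximum(rho - D, 0.0)

X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
K = tl1_kernel(X, X, rho=2.0)
# points farther apart than rho in L1 distance get kernel value exactly 0
```

Because evaluation only replaces the kernel function, `tl1_kernel` can be dropped into any Gram-matrix-based classifier, with the caveat (as the abstract notes) that PSD-dependent guarantees no longer apply.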

  13. Monte-Carlo method for studying the slowing down of neutrons in a thin plate of hydrogenated matter; Methode de Monte-Carlo pour l'etude du ralentissement des neutrons dans une plaque mince de matiere hydrogenee

    Energy Technology Data Exchange (ETDEWEB)

    Ribon, P; Michaudon, A [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1965-07-01

    Studies of the interaction of slow neutrons with atomic nuclei by the time-of-flight method are made with a pulsed neutron source with a broad energy spectrum. Measurement accuracy requires a high intensity and an emission time that is as short as possible and well defined. If the neutron source is a target bombarded by the beam of a pulsed accelerator, it is usually necessary to slow down the neutrons to obtain sufficient intensity at low energies. The purpose of the Monte-Carlo method described in this paper is to study the slowing-down properties, mainly the intensity and the emission-time distribution of the slowed-down neutrons. The choice of the method and of the parameters studied is explained, as well as the principles, some of the calculations, and the organization of the program, which was written for the IBM 7090 computer. A few results given as examples were obtained with this program, whose limits of applicability are principally due to simplifying physical hypotheses. (author)
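The Monte-Carlo program itself is not available; as a toy illustration of the physics it samples, the sketch below follows neutrons slowing down by elastic collisions on hydrogen (A = 1), for which the post-collision energy is uniformly distributed on (0, E]. Geometry, cross-sections and emission timing are ignored; the energies and trial count are illustrative.

```python
import random

def collisions_to_slow(e0_ev, e_cut_ev, rng):
    """Count elastic collisions on hydrogen (A = 1) needed to slow a neutron
    from e0 below e_cut; for A = 1 the outgoing energy is E' = E * u, u ~ U(0, 1)."""
    e, n = e0_ev, 0
    while e > e_cut_ev:
        e *= rng.random()
        n += 1
    return n

rng = random.Random(42)
trials = [collisions_to_slow(2.0e6, 1.0, rng) for _ in range(5000)]
# mean lethargy gain per collision is xi = 1 for hydrogen, so expect roughly
# ln(2e6 / 1) ~ 14.5 collisions (plus about one extra for the final overshoot)
mean_n = sum(trials) / len(trials)
```

Each lethargy increment −ln(E'/E) is an Exp(1) variable here, which is why the collision count concentrates near ln(E₀/E_cut).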

  14. Kernels for structured data

    CERN Document Server

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  15. Locally linear approximation for Kernel methods : the Railway Kernel

    OpenAIRE

    Muñoz, Alberto; González, Javier

    2008-01-01

    In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general purpose kernel, like the RBF kernel, as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...

  16. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  17. Some comments on cold hydrogenous moderators, simple synthetic kernels and benchmark calculations

    International Nuclear Information System (INIS)

    Dorning, J.

    1997-09-01

    The author comments on three general subjects which are not directly related, but which in his opinion are very relevant to the objectives of the workshop. The first of these is parahydrogen moderators, about which recurring questions have been raised during the Workshop. The second topic is related to the use of simple synthetic scattering kernels in conjunction with the neutron transport equation to carry out elementary mathematical analyses and simple computational analyses in order to understand the gross physics of time-dependent neutron transport initiated by pulsed sources in cold moderators. The third subject is that of 'simple' benchmark calculations by which is meant calculations that are simple compared to the very large scale combined spallation, slowing-down, thermalization calculations using MCNP and other large Monte Carlo codes. Such benchmark problems can be created so that they are closely related to both the geometric configuration and material composition of cold moderators of interest and still can be solved using steady-state deterministic transport codes to calculate the asymptotic time-decay constant, and the time-asymptotic energy spectrum of neutrons in the cold moderator and the spectrum of the cold neutrons leaking from it (neither of which should be expected to be Maxwellian in these small leakage-dominated systems). These would provide rather precise benchmark solutions against which the results of the large scale calculations carried out for the whole spallation, slowing-down, thermalization system -- for the same decoupled cold moderator -- could be compared.

  18. Realized kernels in practice

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger

    2009-01-01

    Realized kernels use high-frequency data to estimate the daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock ... and find a remarkable level of agreement. We identify some features of the high-frequency data which are challenging for realized kernels: local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated...
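A minimal sketch of a realized-kernel estimator under one common flat-top Parzen weighting convention, K = γ₀ + Σ_h k((h−1)/H)·2γ_h, applied to simulated noisy prices. The price path, microstructure-noise level and bandwidth H are hypothetical, and the exact weighting and bandwidth rules of the paper differ in detail.

```python
import numpy as np

def realized_kernel(returns, H):
    """Flat-top realized kernel on return autocovariances:
    K = gamma_0 + sum_{h=1}^H k((h - 1) / H) * 2 * gamma_h, Parzen weights."""
    def parzen(x):
        if x <= 0.5:
            return 1 - 6 * x**2 + 6 * x**3
        if x <= 1.0:
            return 2 * (1 - x) ** 3
        return 0.0
    r = np.asarray(returns)
    est = r @ r                                  # gamma_0 (realized variance)
    for h in range(1, H + 1):
        gamma_h = r[h:] @ r[:-h]                 # lag-h autocovariance
        est += 2 * parzen((h - 1) / H) * gamma_h
    return est

rng = np.random.default_rng(3)
true_sigma2 = 1e-4                               # integrated variance of the day
eff = rng.normal(0, np.sqrt(true_sigma2 / 2000), 2000).cumsum()  # efficient price
noisy = eff + rng.normal(0, 1e-4, 2000)          # add microstructure noise
r = np.diff(noisy)
rv = r @ r                                       # plain realized variance: noise-biased
rk = realized_kernel(r, H=30)                    # kernel estimate: bias largely removed
```

The plain realized variance absorbs the noise variance (roughly 2nω²) and overshoots, while the kernel's weighted autocovariances cancel most of that bias.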

  19. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate...... regression by minimising a cross-validation estimate of the generalisation error. This allows to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  20. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
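The adaptive-metric algorithm in these papers minimizes a cross-validation estimate of the generalization error; the sketch below captures the idea more crudely, grid-searching per-dimension kernel scales by leave-one-out error in a Nadaraya-Watson regressor. The data, scale grid and bandwidths are hypothetical.

```python
import numpy as np

def nw_predict(Xtr, ytr, Xte, scales):
    """Nadaraya-Watson regression with a diagonal input metric:
    dimension d is weighted by 1 / scales[d] inside a Gaussian kernel."""
    D2 = (((Xte[:, None, :] - Xtr[None, :, :]) / scales) ** 2).sum(-1)
    W = np.exp(-0.5 * D2)
    return (W @ ytr) / W.sum(axis=1)

def loo_error(Xtr, ytr, scales):
    """Leave-one-out squared error of the metric-weighted smoother."""
    D2 = (((Xtr[:, None, :] - Xtr[None, :, :]) / scales) ** 2).sum(-1)
    W = np.exp(-0.5 * D2)
    np.fill_diagonal(W, 0.0)             # exclude each point from its own fit
    pred = (W @ ytr) / W.sum(axis=1)
    return np.mean((pred - ytr) ** 2)

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)   # dimension 1 is irrelevant
# crude metric adaptation: grid-search the per-dimension scales by LOO error
best = min((loo_error(X, y, np.array([s0, s1])), (s0, s1))
           for s0 in (0.1, 0.3, 1.0) for s1 in (0.1, 0.3, 1.0, 10.0))
best_scales = np.array(best[1])
yhat = nw_predict(X, y, X[:3], best_scales)
```

Cross-validation drives the scale of the irrelevant dimension up, which is exactly the "automatically adjust the importance of different dimensions" behaviour the abstract describes (the papers use gradient descent rather than a grid).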

  1. On the theory of slowing down gracefully

    Indian Academy of Sciences (India)

    2012-06-08

    Our analysis of this phenomenon provides a modest ... observed degrees of freedom of a medium such as the electromagnetic field in an optically dense material, sound waves in a crystal or the Goldstone modes of an atom ...

  2. Words can slow down category learning.

    Science.gov (United States)

    Brojde, Chandra L; Porter, Chelsea; Colunga, Eliana

    2011-08-01

    Words have been shown to influence many cognitive tasks, including category learning. Most demonstrations of these effects have focused on instances in which words facilitate performance. One possibility is that words augment representations, predicting an across-the-board benefit of words during category learning. We propose instead that words shift attention to dimensions that have been historically predictive in similar contexts. Under this account, there should be cases in which words are detrimental to performance. The results from two experiments show that words impair learning of object categories under some conditions. Experiment 1 shows that words hurt performance when learning to categorize by texture. Experiment 2 shows that words also hurt when learning to categorize by brightness, leading learners to selectively attend to shape when both shape and hue could be used to correctly categorize stimuli. We suggest that both the positive and negative effects of words have developmental origins in the history of word usage while learning categories.

  3. Neutron slowing down in the resonance region

    International Nuclear Information System (INIS)

    Matausek, M.V.

    1971-01-01

    This paper describes a procedure for solving the space-, lethargy- and angle-dependent transport equation for resonance neutrons in an infinite cylindrical reactor lattice cell. The procedure is suitable for practical application on its own or in combination with more complex procedures.

  4. Kernel methods for deep learning

    OpenAIRE

    Cho, Youngmin

    2012-01-01

    We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...

  5. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...

  6. Kernel bundle EPDiff

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...

  7. Kernel structures for Clouds

    Science.gov (United States)

    Spafford, Eugene H.; Mckendry, Martin S.

    1986-01-01

    An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.

  8. Study and industrial applications of the external slowing-down {beta}{sup -} radiation of the yttrium - 90; Etude et applications industrielles du rayonnement de freinage externe des {beta}{sup -} de l'yttrium - 90

    Energy Technology Data Exchange (ETDEWEB)

    Leveque, P; Martinelli, P; Chauvin, R [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1955-07-01

    Inelastic scattering of β⁻ particles on nuclei gives rise to the emission of X-ray bremsstrahlung. In view of possible industrial applications, we have studied the slowing-down (bremsstrahlung) radiation of ⁹⁰(Sr + Y) sources in various materials. This pure β⁻ emitter with a long half-life is found among the fission products of uranium. Among the industrial applications under study, these sources of low-energy X-rays can be used for the radiography of thin pieces, for thickness measurement, or for analysis by fluorescence. (M.B.)

  9. Implementation of pencil kernel and depth penetration algorithms for treatment planning of proton beams

    International Nuclear Information System (INIS)

    Russell, K.R.; Saxner, M.; Ahnesjoe, A.; Montelius, A.; Grusell, E.; Dahlgren, C.V.

    2000-01-01

    The implementation of two algorithms for calculating dose distributions for radiation therapy treatment planning of intermediate energy proton beams is described. A pencil kernel algorithm and a depth penetration algorithm have been incorporated into a commercial three-dimensional treatment planning system (Helax-TMS, Helax AB, Sweden) to allow conformal planning techniques using irregularly shaped fields, proton range modulation, range modification and dose calculation for non-coplanar beams. The pencil kernel algorithm is developed from the Fermi-Eyges formalism and Moliere multiple-scattering theory with range straggling corrections applied. The depth penetration algorithm is based on the energy loss in the continuous slowing down approximation with simple correction factors applied to the beam penumbra region and has been implemented for fast, interactive treatment planning. Modelling of the effects of air gaps and range modifying device thickness and position are implicit to both algorithms. Measured and calculated dose values are compared for a therapeutic proton beam in both homogeneous and heterogeneous phantoms of varying complexity. Both algorithms model the beam penumbra as a function of depth in a homogeneous phantom with acceptable accuracy. Results show that the pencil kernel algorithm is required for modelling the dose perturbation effects from scattering in heterogeneous media. (author)
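The depth-penetration algorithm above rests on the continuous-slowing-down approximation (CSDA). As a sketch of that idea only, not of the Helax-TMS implementation, the following integrates R = ∫₀^{E₀} dE/S(E) with a hypothetical Bragg-Kleeman-style power-law stopping power; real planning systems use tabulated stopping-power data for water.

```python
import numpy as np

# Hypothetical Bragg-Kleeman-style model: range R = ALPHA * E**P in water,
# so the stopping power is S(E) = dE/dR = E**(1 - P) / (ALPHA * P).
ALPHA, P = 0.0022, 1.8          # illustrative constants (cm, MeV units)

def stopping_power(E):
    return E ** (1.0 - P) / (ALPHA * P)

def csda_range(e0_mev, n=20000):
    """CSDA range R(E0) = integral_0^E0 dE / S(E), trapezoidal quadrature."""
    E = np.linspace(1e-3, e0_mev, n)
    f = 1.0 / stopping_power(E)
    return np.sum((f[1:] + f[:-1]) / 2 * np.diff(E))

R = csda_range(150.0)             # range of a 150 MeV proton in this model
R_exact = ALPHA * 150.0 ** P      # closed form of the same power-law model
```

Numerical quadrature and the closed form agree closely, which is the check applied below; the dose along depth would then be distributed around this range with penumbra corrections, as the abstract describes.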

  10. Viscosity kernel of molecular fluids

    DEFF Research Database (Denmark)

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

    The density, temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have, by contrast, less impact on the overall normalized shape. Functional forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means...

  11. Variable Kernel Density Estimation

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1992-01-01

    We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
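A sketch of one classic sample-point (variable-kernel) scheme of the kind analyzed in this literature: each observation receives its own bandwidth via Abramson's square-root law, h_i = h₀·(g/f̂(x_i))^{1/2}, computed from a fixed-bandwidth pilot estimate. The pilot bandwidth and data are illustrative.

```python
import numpy as np

def kde(x_eval, data, h):
    """Fixed-bandwidth Gaussian kernel density estimate."""
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

def adaptive_kde(x_eval, data, h0):
    """Sample-point (variable-kernel) estimator: observation i gets bandwidth
    h_i = h0 * sqrt(g / pilot(x_i)), Abramson's square-root law, where g is
    the geometric mean of the pilot density over the sample."""
    pilot = kde(data, data, h0)
    g = np.exp(np.mean(np.log(pilot)))
    h_i = h0 * np.sqrt(g / pilot)               # wide kernels in sparse regions
    u = (x_eval[:, None] - data[None, :]) / h_i
    return (np.exp(-0.5 * u**2) / (h_i * np.sqrt(2 * np.pi))).mean(axis=1)

rng = np.random.default_rng(5)
data = rng.normal(size=500)
xs = np.linspace(-4, 4, 81)
dens = adaptive_kde(xs, data, h0=0.4)
```

Points in low-density regions get wide kernels and points in dense regions narrow ones, smoothing the tails without oversmoothing the mode.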

  12. Steerability of Hermite Kernel

    Czech Academy of Sciences Publication Activity Database

    Yang, Bo; Flusser, Jan; Suk, Tomáš

    2013-01-01

    Roč. 27, č. 4 (2013), 1354006-1-1354006-25 ISSN 0218-0014 R&D Projects: GA ČR GAP103/11/1552 Institutional support: RVO:67985556 Keywords: Hermite polynomials * Hermite kernel * steerability * adaptive filtering Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.558, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/yang-0394387.pdf

  13. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    Science.gov (United States)

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
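The perturbation procedures of the paper are not reproduced here; the sketch below only illustrates the composite-kernel idea, forming a convex combination of candidate kernel matrices (a linear kernel and an IBS-style kernel on a toy genotype matrix), which remains positive semi-definite. The data, trace normalization and weights are hypothetical.

```python
import numpy as np

def composite_kernel(kernels, weights):
    """Convex combination of candidate kernel matrices: K = sum_m w_m K_m.
    A nonnegative combination of PSD matrices is PSD, so the composite
    kernel is valid whenever the candidates are."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * Ki for wi, Ki in zip(w, kernels))

rng = np.random.default_rng(6)
G = rng.integers(0, 3, size=(50, 20)).astype(float)   # toy genotypes in {0, 1, 2}
K_lin = G @ G.T                                       # linear kernel
# IBS-style similarity: 1 - (L1 distance) / (2 * number of SNPs)
K_ibs = 1 - np.abs(G[:, None, :] - G[None, :, :]).sum(-1) / (2 * G.shape[1])
K = composite_kernel([K_lin / np.trace(K_lin) * 50, K_ibs], [0.5, 0.5])
```

The candidates are trace-normalized to a comparable scale before mixing, so neither kernel dominates purely through units.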

  14. The definition of kernel Oz

    OpenAIRE

    Smolka, Gert

    1994-01-01

    Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...

  15. 7 CFR 981.7 - Edible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  16. 7 CFR 981.408 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  17. 7 CFR 981.8 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  18. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement error of certain types and can also handle non-synchronous trading. It is the first estimator...... which has these three properties which are all essential for empirical work in this area. We derive the large sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results to previous work which has used...

  19. Clustering via Kernel Decomposition

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

    Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain...... posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods as is demonstrated on a number of diverse data sets....

  20. Measurements and Calculations of the Slowing-Down and Migration Time; Mesures et Calcul du Temps de Ralentissement et de Migration; Izmereniya i raschety vremeni zamedleniya i migratsii; Medicion y Calculo del Tiempo de Moderacion y de Migracion

    Energy Technology Data Exchange (ETDEWEB)

    Profio, A. E.; Koppel, J. U. [General Atomic Division of General Dynamics Corporation, John Jay Hopkins Laboratory for Pure and Applied Science, San Diego, CA (United States); Adamantiades, A. [Massachusetts Institute of Technology, Cambridge, MA (United States)

    1965-08-15

    The mean time and variance in time for neutrons from an impulse source to slow down and migrate to the energy, angle and position of observation are important quantities in many experiments. The mean time is a correction in time-of-flight measurements of neutron spectra in bulk media, and the variance limits the ultimate resolution of such experiments. These parameters are equally significant in detectors which depend on moderation, in time-of-flight experiments where low-energy neutrons are provided by a moderator placed near the pulsed source, and in slowing-down-time spectrometry. Various analytical and numerical methods have been developed to calculate the space-energy-angle-time dependence, or integrals thereof. It is shown that the time moments, φ⁽ⁿ⁾(r, Ω, v) = ∫₀^∞ tⁿ φ(r, Ω, v, t) dt, can be calculated by repeated application of a steady-state transport code. The source term for the calculation of the n-th moment is equal to n v⁻¹ φ⁽ⁿ⁻¹⁾. Results are presented for multiplying and non-multiplying mockups of the TRIGA reactor. Another powerful calculational method is the time-dependent Monte Carlo code. Results of a calculation of the leakage flux from a thin lead slab are presented. Measurements have been made of the slowing-down time to the cadmium edge and to the 1.46-eV resonance of indium, in water and in toluene. Capture gamma rays are detected by a scintillation counter. The technique requires a fairly intense source and an efficient detector because of the low duty cycle (short burst width for resolution of the slowing-down time, large interpulse period for thermal-neutron die-away) and the small probability of capture. (author)
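The moment relation quoted in the abstract follows from the time-dependent transport equation: multiplying by tⁿ and integrating by parts in time turns each moment into the solution of a steady-state problem. A sketch of the derivation (with Σ_t the total cross-section and the scattering-plus-fission integral abbreviated as an operator S):

```latex
% time moments of the angular flux
\phi^{(n)}(\mathbf r,\boldsymbol\Omega,v)=\int_0^\infty t^{\,n}\,
   \phi(\mathbf r,\boldsymbol\Omega,v,t)\,dt .
% Multiply the transport equation
%   (1/v)\,\partial_t\phi+\boldsymbol\Omega\cdot\nabla\phi+\Sigma_t\phi=S[\phi]
% by t^n and integrate by parts in t; the time derivative yields (n/v)\phi^{(n-1)}:
\boldsymbol\Omega\cdot\nabla\phi^{(n)}+\Sigma_t\,\phi^{(n)}
   = S\!\left[\phi^{(n)}\right]+\frac{n}{v}\,\phi^{(n-1)} .
% Each moment therefore solves a steady-state problem whose source is
% n\,v^{-1}\phi^{(n-1)}; the mean emission time and its variance follow as
\bar t=\frac{\phi^{(1)}}{\phi^{(0)}},\qquad
\sigma_t^2=\frac{\phi^{(2)}}{\phi^{(0)}}
   -\left(\frac{\phi^{(1)}}{\phi^{(0)}}\right)^{\!2}.
```

The boundary terms of the integration by parts vanish because the flux decays at large t, which is why a standard steady-state code can be applied repeatedly, once per moment.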

  1. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  2. Robotic intelligence kernel

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between operator intervention and robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize operator intervention and minimize robot initiative, and an autonomous mode configured to minimize operator intervention and maximize robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.

  3. Mixture Density Mercer Kernels: A Method to Learn Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  4. 7 CFR 981.9 - Kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  5. 7 CFR 51.2295 - Half kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  6. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    . Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel...... version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
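The kernel trick described in this record (a linear PCA performed implicitly in the feature space induced by the kernel) can be sketched in a few lines. This is a minimal illustration with a Gaussian (RBF) kernel; the data, bandwidth, and component count are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with a Gaussian (RBF) kernel: eigendecompose the
    centred Gram matrix instead of the covariance matrix."""
    # Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Centre the kernel matrix in feature space
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    # Eigendecompose and project onto the leading components
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    return Kc @ vecs / np.sqrt(np.maximum(vals, 1e-12))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Z = kernel_pca(X, n_components=2, gamma=0.5)
print(Z.shape)  # (50, 2)
```

Because only the Gram matrix enters the analysis, the nonlinear mapping itself never has to be computed, which is the point of the dual (Q-mode) formulation described in the record.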

  7. kernel oil by lipolytic organisms

    African Journals Online (AJOL)

    USER

    2010-08-02

    Aug 2, 2010 ... Rancidity of extracted cashew oil was observed with cashew kernel stored at 70, 80 and 90% .... method of American Oil Chemist Society AOCS (1978) using glacial ..... changes occur and volatile products are formed that are.

  8. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...
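The Nadaraya-Watson estimator named in this abstract is a locally weighted average of the responses, with weights supplied by a kernel. A minimal sketch with a Gaussian kernel; the data and the bandwidth value are illustrative assumptions:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate at points x0:
    a weighted average of y with Gaussian kernel weights K((x0-x)/h)."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + 0.1 * rng.normal(size=200)
x0 = np.array([np.pi / 2])
print(float(nadaraya_watson(x0, x, y, h=0.3)[0]))
```

The bandwidth h plays the role discussed in the paper: too small and the estimate tracks noise, too large and it oversmooths the regression function.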

  9. Notes on the gamma kernel

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model......, and relates these to results of von Karmann and Horwath. But first it is shown that the gamma kernel is interpretable as a Green’s function....
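The gamma shift kernel referred to in this report can be written out explicitly. As a sketch of the standard Brownian semistationary setup (with generic shape parameter $\nu > 1/2$ and rate parameter $\lambda > 0$; the notation is ours, not necessarily the report's):

```latex
g(t) \;=\; \frac{\lambda^{\nu}}{\Gamma(\nu)}\, t^{\nu-1} e^{-\lambda t},
\qquad t > 0,
```

used as the kernel of a Brownian semistationary process $Y(t) = \int_{-\infty}^{t} g(t-s)\,\sigma(s)\,dW(s)$, whose second-order structure function the report studies.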

  10. Influence Function and Robust Variant of Kernel Canonical Correlation Analysis

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2017-01-01

    Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...

  11. Kernel versions of some orthogonal transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced...... by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel...... function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...

  12. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  13. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
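The recipe sketched in this record (choose the kernel scale and the ridge penalty from a small grid by cross-validation) can be illustrated directly. The Gaussian kernel, the grid values, and the single train/validation split below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def krr_fit_predict(X, y, Xs, sigma, lam):
    """Kernel ridge regression with a Gaussian kernel:
    alpha = (K + lam*I)^{-1} y, prediction = k(Xs, X) @ alpha."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    d2s = ((Xs[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2s / (2 * sigma**2)) @ alpha

# Small-grid validation over (sigma, lambda), in the spirit of the record
rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.normal(size=80)
tr, va = np.arange(60), np.arange(60, 80)
best = min(
    ((s, l) for s in (0.3, 1.0, 3.0) for l in (1e-3, 1e-1)),
    key=lambda p: np.mean(
        (krr_fit_predict(X[tr], y[tr], X[va], *p) - y[va]) ** 2
    ),
)
print(best)
```

The kernel scale controls the smoothness of the prediction function and the ridge penalty the effective signal-to-noise trade-off, which is exactly the interpretation the record attaches to the tuning parameters.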

  14. Integral equations with contrasting kernels

    Directory of Open Access Journals (Sweden)

    Theodore Burton

    2008-01-01

    Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well behaved function.
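The equation and the two contrasting kernels $C^*$ and $D^*$ can be explored numerically. The forcing $a(t)=\sin t$, the horizon, and the simple rectangle-rule time-stepping below are illustrative choices, not the paper's analysis:

```python
import numpy as np

def solve_volterra(a, C, T=10.0, n=2000):
    """Forward time-stepping for x(t) = a(t) - int_0^t C(t,s) x(s) ds
    using the rectangle rule (a sketch, not production quadrature)."""
    t = np.linspace(0, T, n)
    h = t[1] - t[0]
    x = np.empty(n)
    x[0] = a(t[0])
    for i in range(1, n):
        s = t[:i]
        # x(t_i) depends only on already-computed values x(t_0..t_{i-1})
        x[i] = a(t[i]) - h * np.sum(C(t[i], s) * x[:i])
    return t, x

a = lambda u: np.sin(u)                      # bounded forcing (illustrative)
Cstar = lambda u, s: np.log(np.e + (u - s))  # C*(t,s) from the record
Dstar = lambda u, s: 1.0 / (1.0 + (u - s))   # D*(t,s) from the record
t, x1 = solve_volterra(a, Cstar)
_, x2 = solve_volterra(a, Dstar)
print(float(np.max(np.abs(x1 - x2))))
```

Comparing the two trajectories gives a concrete feel for the paper's claim that, for modest forcing, the oppositely weighted kernels produce largely indistinguishable solutions.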

  15. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel based face recognition. This book discusses the advanced kernel learning algorithms and its application on face recognition. This book also focuses on the theoretical deviation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, and also the feasibility of the kernel based face recognition method. This book provides researchers in pattern recognition and machine learning area with advanced face recognition methods and its new

  16. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA, we here augment the procedure to also...... tune the Gaussian kernel scale of radial basis function based kernel PCA.We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...
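Parallel Analysis itself, the permutation test that kPA extends to the kernel setting, can be sketched for linear PCA: retain components whose eigenvalues exceed those obtained from column-permuted (decorrelated) data. The synthetic data and the 95th-percentile rule below are illustrative assumptions:

```python
import numpy as np

def parallel_analysis_order(X, n_perm=20, seed=0):
    """Parallel Analysis for linear PCA: keep components whose sample
    eigenvalues exceed the 95th percentile of eigenvalues computed
    from column-permuted copies of the data."""
    rng = np.random.default_rng(seed)
    ev = np.linalg.eigvalsh(np.cov(X.T))[::-1]
    null = []
    for _ in range(n_perm):
        # Permuting each column independently destroys correlations
        Xp = np.column_stack([rng.permutation(col) for col in X.T])
        null.append(np.linalg.eigvalsh(np.cov(Xp.T))[::-1])
    thresh = np.percentile(null, 95, axis=0)
    return int(np.sum(ev > thresh))

rng = np.random.default_rng(5)
# Two latent factors, each shared by two of six observed variables
noise = rng.normal(size=(300, 6))
s = rng.normal(size=300) * 3
t = rng.normal(size=300) * 3
X = noise.copy()
X[:, 0] += s; X[:, 1] += s
X[:, 2] += t; X[:, 3] += t
print(parallel_analysis_order(X))
```

The kPA procedure of the record applies the same permutation logic to the centred kernel matrix, which additionally lets it tune the Gaussian kernel scale.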

  17. RTOS kernel in portable electrocardiograph

    Science.gov (United States)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  18. RTOS kernel in portable electrocardiograph

    International Nuclear Information System (INIS)

    Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A

    2011-01-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  19. Semi-Supervised Kernel PCA

    DEFF Research Database (Denmark)

    Walder, Christian; Henao, Ricardo; Mørup, Morten

    We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within class variances similar to Fisher discriminant analysis. The second, LSKPCA is a hybrid of least...... squares regression and kernel PCA. The final LR-KPCA is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets....

  20. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts....... The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties......, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...

  1. Measurement of the neutron slowing-down area in graphite from 1.45 eV to the thermal state

    Energy Technology Data Exchange (ETDEWEB)

    Robert, C [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1956-12-15

    The life of a neutron emitted fast into a moderating medium falls into two parts: at first it is slowed down by collisions with the nuclei of the medium; later, once its energy is of the order of the thermal-agitation energy, it diffuses until it is captured by a nucleus. Dividing the evolution of a neutron into two phases, a slowing-down phase and a diffusion phase, is in fact arbitrary, since thermal equilibrium is reached progressively. This report examines how the distribution of thermal neutrons can be deduced from the distribution of fast neutrons, given the distribution of nascent thermal neutrons and the law of diffusion. (author)
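The two-phase picture in this report (slow down to a "nascent thermal" distribution, then diffuse until capture) can be sketched for a plane source: a Fermi-age Gaussian for the slowing-down phase convolved with an exponential plane diffusion kernel. The age, diffusion length, and normalisations below are illustrative assumptions, not graphite measurements:

```python
import numpy as np

tau = 300.0   # age to thermal, cm^2 (illustrative value)
L = 50.0      # thermal diffusion length, cm (illustrative value)

z = np.linspace(-400, 400, 4001)
dz = z[1] - z[0]
# Nascent-thermal (slowing-down) density for a plane source at z = 0
q = np.exp(-z**2 / (4 * tau)) / np.sqrt(4 * np.pi * tau)
# Plane diffusion kernel, normalised to unit area
kd = np.exp(-np.abs(z) / L) / (2 * L)
# Thermal-neutron distribution = convolution of slowing-down and diffusion
phi = np.convolve(q, kd, mode="same") * dz
print(round(float(np.sum(phi) * dz), 3))  # close to 1 (neutron conservation)
```

The convolution structure is precisely the deduction the report describes: the thermal distribution follows from the nascent-thermal distribution and the diffusion law.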

  2. Multiple Kernel Learning with Data Augmentation

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:49–64, 2016, ACML 2016. Multiple Kernel Learning with Data Augmentation. Khanh Nguyen (nkhanh@deakin.edu.au), ... University, Australia. Editors: Robert J. Durrant and Kee-Eung Kim. Abstract: The motivations of the multiple kernel learning (MKL) approach are to increase ... kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to

  3. A kernel version of multivariate alteration detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

    Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.......Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations....

  4. A novel adaptive kernel method with kernel centers determined by a support vector regression approach

    NARCIS (Netherlands)

    Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2012-01-01

    The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an

  5. Complex use of cottonseed kernels

    Energy Technology Data Exchange (ETDEWEB)

    Glushenkova, A I

    1977-01-01

    A review with 41 references is made on the manufacture of oil, protein, and other products from cottonseed, the effects of gossypol on protein yield and quality, and the technology of gossypol removal. A process eliminating thermal treatment of the kernels and permitting the production of oil, proteins, phytin, gossypol, sugar, sterols, phosphatides, tocopherols, and residual shells and bagasse is described.

  6. Kernel regression with functional response

    OpenAIRE

    Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe

    2011-01-01

    We consider kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost complete convergence are stated as function of the small ball probability of the predictor and as function of the entropy of the set on which uniformity is obtained.

  7. GRIM : Leveraging GPUs for Kernel integrity monitoring

    NARCIS (Netherlands)

    Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris

    2016-01-01

    Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious

  8. Paramecium: An Extensible Object-Based Kernel

    NARCIS (Netherlands)

    van Doorn, L.; Homburg, P.; Tanenbaum, A.S.

    1995-01-01

    In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection

  9. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  10. Veto-Consensus Multiple Kernel Learning

    NARCIS (Netherlands)

    Zhou, Y.; Hu, N.; Spanos, C.J.

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The

  11. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    Full Text Available According to the characteristic that the kernel function of an extreme learning machine (ELM) and its performance are strongly correlated, a novel extreme learning machine based on a generalized triangular Hermitian kernel function is proposed in this paper. First, the generalized triangular Hermitian kernel function is constructed as the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel function is proved to be a valid kernel function for an extreme learning machine. Then, the learning methodology of the extreme learning machine based on the proposed kernel function is presented. The biggest advantage of the proposed kernel is that its kernel parameter values are chosen only from the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the sample data structure information. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experimental results demonstrate that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
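The kernel-ELM solution structure this record builds on (output weights obtained from a regularized kernel system, beta = (I/C + K)^{-1} T) can be sketched directly. An RBF kernel stands in for the paper's triangular/Hermite mixed kernel, and the toy data and the C, gamma values are illustrative assumptions:

```python
import numpy as np

def kernel_elm_fit(X, Y, C=10.0, gamma=1.0):
    """Kernel ELM: solve (I/C + K) beta = Y for the output weights,
    where K is the Gram matrix (generic RBF kernel used here)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    beta = np.linalg.solve(np.eye(len(X)) / C + K, Y)
    return X, beta, gamma

def kernel_elm_predict(model, Xs):
    X, beta, gamma = model
    d2 = ((Xs[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ beta

# Toy binary classification: one-hot targets, argmax decision
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1, 0.5, (40, 2)), rng.normal(1, 0.5, (40, 2))])
y = np.repeat([0, 1], 40)
Y = np.eye(2)[y]
model = kernel_elm_fit(X, Y)
acc = np.mean(np.argmax(kernel_elm_predict(model, X), axis=1) == y)
print(acc)
```

Swapping the RBF Gram matrix for a kernel whose parameters range over the natural numbers is what gives the paper's method its cheap parameter search.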

  12. Viscozyme L pretreatment on palm kernels improved the aroma of palm kernel oil after kernel roasting.

    Science.gov (United States)

    Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan

    2018-05-01

    With an interest to enhance the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and simple sugar profile was estimated by using partial least square regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from that of control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Wigner functions defined with Laplace transform kernels.

    Science.gov (United States)

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels--Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polariton. © 2011 Optical Society of America

  14. Credit scoring analysis using kernel discriminant

    Science.gov (United States)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    Credit scoring models are an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a nonparametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared with each other using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. In the data we use, a normal kernel turns out to be the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
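The kernel discriminant idea can be sketched as a generic kernel-density classifier: assign the class that maximises prior times kernel density estimate. Note this is a simplified KDE-based stand-in rather than the authors' exact Fisher-based formulation, and the toy data and bandwidth are illustrative assumptions:

```python
import numpy as np

def kde_discriminant(Xtr, ytr, Xte, h=0.5):
    """Nonparametric discriminant: estimate each class density with a
    Gaussian kernel density estimate and assign the class maximising
    prior * density."""
    classes = np.unique(ytr)
    scores = []
    for c in classes:
        Xc = Xtr[ytr == c]
        d2 = ((Xte[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        dens = np.mean(np.exp(-d2 / (2 * h**2)), axis=1)
        scores.append(len(Xc) / len(Xtr) * dens)   # prior * density
    return classes[np.argmax(np.array(scores), axis=0)]

rng = np.random.default_rng(4)
Xtr = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
ytr = np.repeat([0, 1], 50)
pred = kde_discriminant(Xtr, ytr, Xtr)
print(np.mean(pred == ytr))
```

Substituting an Epanechnikov, biweight, or triweight kernel for the Gaussian only changes the weight function, which is the comparison the paper carries out on the credit data.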

  15. Testing Infrastructure for Operating System Kernel Development

    DEFF Research Database (Denmark)

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi......-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel...... and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for for the development of our own multi-core research kernel....

  16. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional...... feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence...... of the kernel width. The 2,097 samples each covering on average 5 km2 are analyzed chemically for the content of 41 elements....

  17. Validation of Born Traveltime Kernels

    Science.gov (United States)

    Baig, A. M.; Dahlen, F. A.; Hung, S.

    2001-12-01

    Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective, even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of traveltime compare to various theoretical predictions in a given regime.

  18. RKRD: Runtime Kernel Rootkit Detection

    Science.gov (United States)

    Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.

    In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.

  19. Kernel Bayesian ART and ARTMAP.

    Science.gov (United States)

    Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan

    2018-02-01

    Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model called ARTMAP is a powerful tool for classification. Among several improvements, such as Fuzzy- or Gaussian-based models, the state-of-the-art model is the Bayesian-based one, which resolves the drawbacks of the others. However, it is known that the Bayesian approach incurs a high computational cost for high-dimensional data and large numbers of data points, and the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating Kernel Bayes' Rule (KBR) and the Correntropy Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM are able to avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computational capability to KBA and KBAM. Furthermore, correntropy-based similarity measurement improves the noise reduction ability even in high-dimensional space. The simulation experiments show that KBA has a superior self-organizing capability to BA, and KBAM provides superior classification ability to BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Comparison of electron dose-point kernels in water generated by the Monte Carlo codes, PENELOPE, GEANT4, MCNPX, and ETRAN.

    Science.gov (United States)

    Uusijärvi, Helena; Chouin, Nicolas; Bernhardt, Peter; Ferrer, Ludovic; Bardiès, Manuel; Forssell-Aronsson, Eva

    2009-08-01

    Point kernels describe the energy deposited at a certain distance from an isotropic point source and are useful for nuclear medicine dosimetry. They can be used for absorbed-dose calculations for sources of various shapes and are also a useful tool when comparing different Monte Carlo (MC) codes. The aim of this study was to compare point kernels calculated by using the mixed MC code, PENELOPE (v. 2006), with point kernels calculated by using the condensed-history MC codes, ETRAN, GEANT4 (v. 8.2), and MCNPX (v. 2.5.0). Point kernels for electrons with initial energies of 10, 100, 500, and 1 MeV were simulated with PENELOPE. Spherical shells were placed around an isotropic point source at distances from 0 to 1.2 times the continuous-slowing-down-approximation range (R(CSDA)). Detailed (event-by-event) simulations were performed for electrons with initial energies of less than 1 MeV. For 1-MeV electrons, multiple scattering was included for energy losses less than 10 keV. Energy losses greater than 10 keV were simulated in a detailed way. The point kernels generated were used to calculate cellular S-values for monoenergetic electron sources. The point kernels obtained by using PENELOPE and ETRAN were also used to calculate cellular S-values for the high-energy beta-emitter, 90Y, the medium-energy beta-emitter, 177Lu, and the low-energy electron emitter, 103mRh. These S-values were also compared with the Medical Internal Radiation Dose (MIRD) cellular S-values. The greatest differences between the point kernels (mean difference calculated for distances, electrons was 1.4%, 2.5%, and 6.9% for ETRAN, GEANT4, and MCNPX, respectively, compared to PENELOPE, if omitting the S-values when the activity was distributed on the cell surface for 10-keV electrons. The largest difference between the cellular S-values for the radionuclides, between PENELOPE and ETRAN, was seen for 177Lu (1.2%). 
There were large differences between the MIRD cellular S-values and those obtained from
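    The shell-scoring bookkeeping behind such point kernels can be sketched. The code below scores synthetic energy-deposition events into concentric spherical shells out to 1.2 times the CSDA range, as in the record above; the range value, event count, and deposition distribution are illustrative assumptions, not data from the study or output of any MC code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins (not data from the study): a hypothetical CSDA
# range and synthetic energy-deposition events around a point source.
R_CSDA = 0.4                                      # assumed CSDA range [cm]
n_events = 100_000
radii = rng.uniform(0.0, 1.2 * R_CSDA, n_events)  # event distances [cm]
edep = rng.exponential(0.01, n_events)            # deposited energies [MeV]

# Score the deposition into concentric spherical shells, 0 to 1.2*R_CSDA.
n_shells = 60
edges = np.linspace(0.0, 1.2 * R_CSDA, n_shells + 1)
totals, _ = np.histogram(radii, bins=edges, weights=edep)

# Divide by shell volumes to obtain the point kernel (energy per unit volume).
volumes = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
kernel = totals / volumes
print(kernel.shape)
```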

  1. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  2. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří

    2016-02-12

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.
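    Barycentric coordinates themselves can be made concrete with a small example. The sketch below computes Floater's mean value coordinates, one classical member of the family discussed above, for a point inside a square; the polygon and test point are arbitrary illustrations.

```python
import numpy as np

def mean_value_coordinates(verts, x):
    """Mean value coordinates (one classical family of barycentric
    coordinates) of a point x strictly inside the polygon verts
    (n x 2, counter-clockwise)."""
    v = verts - x
    r = np.linalg.norm(v, axis=1)
    n = len(verts)
    cross = lambda a, b: a[0] * b[1] - a[1] * b[0]
    # Signed angle alpha_i at x between v_i and v_{i+1}.
    ang = np.array([np.arctan2(cross(v[i], v[(i + 1) % n]),
                               np.dot(v[i], v[(i + 1) % n])) for i in range(n)])
    t = np.tan(ang / 2.0)
    w = (np.roll(t, 1) + t) / r   # w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / r_i
    return w / w.sum()

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
x = np.array([0.3, 0.6])
lam = mean_value_coordinates(square, x)
print(lam, lam @ square)   # partition of unity and linear reproduction of x
```

The two defining properties of barycentric coordinates are visible directly: the coordinates sum to one, and they reproduce the point as a weighted average of the vertices.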

  3. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří ; Barton, Michael

    2016-01-01

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  4. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA...... with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially....
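    A minimal NumPy sketch of kernel PCA with a Gaussian kernel, in the spirit of the change-detection setup above: each pixel carries one band from each time point, and a few "change" pixels differ between acquisitions. The data, kernel width, and cluster sizes are invented for illustration; this is not the authors' code.

```python
import numpy as np

def kernel_pca_gaussian(X, n_components=2, sigma=1.0):
    """Kernel PCA with a Gaussian (RBF) kernel, written out in NumPy:
    build K, double-center it, eigendecompose, and project."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    n = len(X)
    J = np.eye(n) - np.full((n, n), 1.0 / n)      # centering matrix
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    # Projections of the training points: alpha_k scaled by sqrt(lambda_k).
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))

# Toy bitemporal pixels: two "bands" per pixel, a few change pixels where
# the second acquisition flips sign (all values are illustrative).
rng = np.random.default_rng(1)
no_change = rng.normal(0.0, 0.1, (95, 2)) + [1.0, 1.0]
change = rng.normal(0.0, 0.1, (5, 2)) + [1.0, -1.0]
X = np.vstack([no_change, change])
proj = kernel_pca_gaussian(X, n_components=2, sigma=0.5)
print(proj.shape)
```

In this toy the minority change pixels dominate the leading kernel principal component, mirroring how the paper's kernel PCA isolates change observations.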

  5. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to an inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel, and partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by an inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
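    The idea of trusting only the reliable Fourier entries of the blur kernel can be sketched in one dimension. In this toy, hedged example, inverse filtering is applied only where |K| exceeds a threshold and the observation is kept elsewhere; the signal, kernel, and threshold are illustrative assumptions, and the paper's wavelet/learning-based priors and E-M estimation are omitted.

```python
import numpy as np

# Ground-truth 1-D signal and a blur kernel (circular-convolution toy).
n = 256
x = np.zeros(n); x[60:90] = 1.0; x[150:160] = 2.0
k = np.zeros(n); k[:9] = np.hanning(9); k /= k.sum()
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

# "Partial map": keep only Fourier entries where the kernel is reliable
# (|K| above an illustrative threshold); elsewhere fall back to the observation.
K, Y = np.fft.fft(k), np.fft.fft(y)
mask = np.abs(K) > 0.05
X_hat = np.where(mask, Y / np.where(mask, K, 1.0), Y)
x_hat = np.real(np.fft.ifft(X_hat))

err_blur = np.linalg.norm(y - x)       # error left by the blur alone
err_deconv = np.linalg.norm(x_hat - x) # error after masked inverse filtering
print(err_blur, err_deconv)
```

In this noise-free toy the masked frequencies are recovered exactly, so the masked inverse filter can only reduce the reconstruction error relative to the blurred observation.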

  6. Process for producing metal oxide kernels and kernels so obtained

    International Nuclear Information System (INIS)

    Lelievre, Bernard; Feugier, Andre.

    1974-01-01

    The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high-temperature nuclear reactors. This process consists in adding, to an aqueous solution of at least one metallic salt (particularly actinide nitrates), at least one chemical compound capable of releasing ammonia, then dispersing the solution thus obtained drop by drop into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gelling reaction is a mixture of two organic liquids, one acting as a solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably an amine is used as the product capable of extracting the anions. Additionally, an alcohol that causes a partial dehydration of the drops can be employed as the solvent, thus helping to increase the resistance of the particles [fr

  7. Hilbertian kernels and spline functions

    CERN Document Server

    Atteia, M

    1992-01-01

    In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline functions theory. The origin of the book was an effort to show that spline theory parallels Hilbertian Kernel theory, not only for splines derived from minimization of a quadratic functional but more generally for splines considered as piecewise functions type. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  8. Dense Medium Machine Processing Method for Palm Kernel/ Shell ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.

  9. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge; Schuster, Gerard T.

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently

  10. Ranking Support Vector Machine with Kernel Approximation

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.

  11. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
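    The kernel-approximation idea can be sketched: random Fourier features replace the kernel matrix, and a pairwise squared-hinge (L2) ranking loss is minimized on the resulting linear features. Plain gradient descent stands in for the paper's primal truncated Newton method; data, labels, and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(X, W, b):
    """Random Fourier feature map z(x), with z(x).z(y) approximating the
    RBF kernel exp(-||x - y||^2 / 2) (Rahimi-Recht construction)."""
    return np.sqrt(2.0 / W.shape[0]) * np.cos(X @ W.T + b)

d, D = 5, 2000
W = rng.normal(0.0, 1.0, (D, d))          # spectral samples for sigma = 1
b = rng.uniform(0.0, 2.0 * np.pi, D)

# Check the kernel approximation quality on random points.
X = rng.normal(0.0, 1.0, (50, d))
Z = rff(X, W, b)
K_true = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 2.0)
approx_err = np.abs(Z @ Z.T - K_true).max()

# Pairwise squared-hinge (L2) ranking loss on the approximate features.
y = (X[:, 0] > 0).astype(float)           # toy relevance labels
pairs = [(i, j) for i in range(50) for j in range(50) if y[i] > y[j]]
Dz = np.array([Z[i] - Z[j] for i, j in pairs])
w = np.zeros(D)
for _ in range(200):                      # plain gradient descent
    m = 1.0 - Dz @ w                      # margins of all preference pairs
    act = m > 0.0                         # pairs still violating the margin
    w += 0.1 * 2.0 * Dz[act].T @ m[act] / len(pairs)
s = Z @ w
acc = np.mean([s[i] > s[j] for i, j in pairs])
print(approx_err, acc)
```

Because the feature map is explicit, the cost per iteration is linear in the number of features rather than quadratic in the number of training examples, which is the speed-up the paper exploits.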

  12. Sentiment classification with interpolated information diffusion kernels

    NARCIS (Netherlands)

    Raaijmakers, S.

    2007-01-01

    Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of

  13. Evolution kernel for the Dirac field

    International Nuclear Information System (INIS)

    Baaquie, B.E.

    1982-06-01

    The evolution kernel for the free Dirac field is calculated using the Wilson lattice fermions. We discuss the difficulties due to which this calculation has not been previously performed in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)

  14. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  15. Improving the Bandwidth Selection in Kernel Equating

    Science.gov (United States)

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
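    Silverman's rule of thumb itself is a one-line formula. The sketch below is the standard univariate rule for a Gaussian kernel on synthetic score data; it is illustrative and not necessarily the exact variant the authors propose for kernel equating.

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a Gaussian kernel:
    h = 0.9 * min(std, IQR / 1.34) * n^(-1/5)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    iqr = np.subtract(*np.percentile(x, [75, 25]))   # interquartile range
    sigma = min(x.std(ddof=1), iqr / 1.34)           # robust spread estimate
    return 0.9 * sigma * n ** (-0.2)

rng = np.random.default_rng(0)
scores = rng.normal(50.0, 10.0, 1000)    # toy test-score distribution
h = silverman_bandwidth(scores)
print(h)
```

Unlike minimizing a penalty function, this closed-form choice requires no iterative search, which is the practical appeal noted in the abstract.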

  16. Kernel Korner : The Linux keyboard driver

    NARCIS (Netherlands)

    Brouwer, A.E.

    1995-01-01

    Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the

  17. Metabolic network prediction through pairwise rational kernels.

    Science.gov (United States)

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted by incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pair of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amount of sequence information such as protein essentiality, natural language processing and machine translations. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernel (PRK)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVM to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy
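    The transducer machinery of PRKs is beyond a short sketch, but the generic pairwise-kernel construction such methods build on is simple: a kernel on pairs of entities is assembled from a base kernel on single entities. The base RBF kernel and feature vectors below are illustrative stand-ins, not the paper's sequence kernels.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """Base RBF kernel between two entity feature vectors."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def pairwise_kernel(p1, p2, base=rbf):
    """Standard symmetrized tensor-product pairwise kernel:
    K((a,b),(c,d)) = k(a,c)k(b,d) + k(a,d)k(b,c)."""
    (a, b), (c, d) = p1, p2
    return base(a, c) * base(b, d) + base(a, d) * base(b, c)

rng = np.random.default_rng(0)
e = [rng.normal(0.0, 1.0, 3) for _ in range(4)]  # four toy entities
k12_34 = pairwise_kernel((e[0], e[1]), (e[2], e[3]))
k12_43 = pairwise_kernel((e[0], e[1]), (e[3], e[2]))
print(k12_34, k12_43)
```

The symmetrization makes the kernel invariant to the order within each pair, which matches the unordered nature of an interacting pair of pathway entities.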

  18. Slowing down to keep the lead in military technology

    OpenAIRE

    Blanken, Leo J.; Leopore, Jason J.

    2011-01-01

    The article of record as published may be found at http://dx.doi.org/10.1080/10242694.2010.491675 We develop a model of military technology competition among states. States can choose to introduce new military technology, mimic rivals’ level of technology, or withdraw from the contest. States can choose to implement any level of technology within their current feasible technologies. We find that states with significant technological leads should sometimes withhold new technologies...

  19. Adjoint P1 equations solution for neutron slowing down

    International Nuclear Information System (INIS)

    Cardoso, Carlos Eduardo Santos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da

    2002-01-01

    In some applications of perturbation theory, it is necessary to know the adjoint neutron flux, which is obtained from the solution of the adjoint neutron diffusion equation. However, the multigroup constants used for this are weighted with only the direct neutron flux, from the solution of the direct P1 equations. In this work, the adjoint P1 equations are derived from the neutron transport equation, the operator reversion rules, and analogies between direct and adjoint parameters. The direct and adjoint neutron fluxes resulting from the solution of the P1 equations were used in three different weighting processes to obtain the macrogroup macroscopic cross sections. Noticeable differences were found among them. (author)

  20. Estonian economic growth slows down / Olavi Grünvald

    Index Scriptorium Estoniae

    Grünvald, Olavi

    2008-01-01

    An overview of Estonia's economic development in recent years and of the downturn that began in 2007, characterized by a decline in the real estate market, slowing GDP growth, and a decrease in transit trade. A table, a diagram, and graphs are included.

  1. Loan Growth Slowing down in 1st 7 Months

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    The growth of loans slowed significantly during the first seven months of this year, which indicates the government's macroeconomic control measures are taking effect, the People's Bank of China (PBOC) said. By the end of July, outstanding loans of all financial institutions stood at RMB 18.1 trillion (US$2.18 trillion), up 15.9% on a year-on-year basis. The growth rate compares with the 23.2% registered during the same period in 2003, the central bank said.

  2. Pollinator species richness: Are the declines slowing down?

    Directory of Open Access Journals (Sweden)

    Tom J. M. Van Dooren

    2016-09-01

    Changes in pollinator abundances and diversity are of major concern. A recent study inferred that pollinator species richness is decreasing more slowly in recent decades in several taxa and European countries. A more careful interpretation of these results reveals that this conclusion cannot be drawn and that we can only infer that declines are decelerating for bees (Anthophila) in the Netherlands.

  3. A tandem queue with server slow-down and blocking

    NARCIS (Netherlands)

    van Foreest, N.D.; van Ommeren, Jan C.W.; Mandjes, M.R.H.; Scheinhardt, Willem R.W.

    2005-01-01

    We consider two variants of a two-station tandem network with blocking. In both variants the first server ceases to work when the queue length at the second station hits a 'blocking threshold.' In addition, in variant 2 the first server decreases its service rate when the second queue exceeds a

  4. A tandem queue with server slow-down and blocking.

    NARCIS (Netherlands)

    van Foreest, N.; van Ommeren, J.C.; Mandjes, M.R.H.; Scheinhardt, W.

    2005-01-01

    We consider two variants of a two-station tandem network with blocking. In both variants the first server ceases to work when the queue length at the second station hits a 'blocking threshold.' In addition, in variant 2 the first server decreases its service rate when the second queue exceeds a

  5. Proprioceptive deafferentation slows down the processing of visual hand feedback

    DEFF Research Database (Denmark)

    Balslev, Daniela; Miall, R Chris; Cole, Jonathan

    2007-01-01

    During visually guided movements both vision and proprioception inform the brain about the position of the hand, so interaction between these two modalities is presumed. Current theories suggest that this interaction occurs by sensory information from both sources being fused into a more reliable...... proprioception facilitates the processing of visual information during motor control. Subjects used a computer mouse to move a cursor to a screen target. In 28% of the trials, pseudorandomly, the cursor was rotated or the target jumped. Reaction time for the trajectory correction in response to this perturbation......, multimodal, percept of hand location. In the literature on perception, however, there is evidence that different sensory modalities interact in the allocation of attention, so that a stimulus in one modality facilitates the processing of a stimulus in a different modality. We investigated whether...

  6. The roots to an equation in particle slowing down theory

    International Nuclear Information System (INIS)

    Sjoestrand, N.G.

    1979-08-01

    Previous work on the roots to an equation arising in studies on anisotropic neutron scattering has been extended to include parameter values of interest for a problem put forward by M.M.R. Williams. Detailed numerical results are given. (author)

  7. The Evolution of Computing: Slowing down? Not Yet!

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    Dr Sutherland will review the evolution of computing over the past decade, focusing particularly on the development of the database and middleware from client server to Internet computing. But what are the next steps from the perspective of a software company? Dr Sutherland will discuss the development of Grid as well as the future applications revolving around collaborative working, which are appearing as the next wave of computing applications.

  8. Bayesian Kernel Mixtures for Counts.

    Science.gov (United States)

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
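    The rounded-Gaussian count kernel at the heart of this framework is easy to sketch: a latent Gaussian is thresholded into integer bins, which (unlike a Poisson) can produce counts with variance below the mean. The thresholds below follow the natural rounding convention; mu and sigma are arbitrary illustrative values.

```python
import math

def rounded_gaussian_pmf(j, mu, sigma):
    """Count pmf obtained by rounding a latent Gaussian N(mu, sigma^2):
    Y = 0 if X < 0, else Y = j when X lies in [j-1, j)."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    upper = Phi((j - mu) / sigma)
    lower = 0.0 if j == 0 else Phi((j - 1 - mu) / sigma)
    return upper - lower

mu, sigma = 5.0, 0.3
pmf = [rounded_gaussian_pmf(j, mu, sigma) for j in range(30)]
mean = sum(j * p for j, p in enumerate(pmf))
var = sum(j * j * p for j, p in enumerate(pmf)) - mean ** 2
print(sum(pmf), mean, var)   # note var < mean: underdispersion
```

A Poisson mixture always has variance at least equal to its mean, so the underdispersed case shown here is exactly what motivates the rounded-kernel construction.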

  9. Putting Priors in Mixture Density Mercer Kernels

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernels described here outperform tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.

  10. Anisotropic hydrodynamics with a scalar collisional kernel

    Science.gov (United States)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λϕ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  11. Einstein Critical-Slowing-Down is Siegel CyberWar Denial-of-Access Queuing/Pinning/ Jamming/Aikido Via Siegel DIGIT-Physics BEC ``Intersection''-BECOME-UNION Barabasi Network/GRAPH-Physics BEC: Strutt/Rayleigh-Siegel Percolation GLOBALITY-to-LOCALITY Phase-Transition Critical-Phenomenon

    Science.gov (United States)

    Buick, Otto; Falcon, Pat; Alexander, G.; Siegel, Edward Carl-Ludwig

    2013-03-01

    Einstein[Dover(03)] critical-slowing-down(CSD)[Pais, Subtle in The Lord; Life & Sci. of Albert Einstein(81)] is Siegel CyberWar denial-of-access(DOA) operations-research queuing theory/pinning/jamming/.../Read [Aikido, Aikibojitsu & Natural-Law(90)]/Aikido(!!!) phase-transition critical-phenomenon via Siegel DIGIT-Physics (Newcomb[Am.J.Math. 4,39(1881)]-{Planck[(1901)]-Einstein[(1905)])-Poincare[Calcul Probabilités(12)-p.313]-Weyl [Goett.Nachr.(14); Math.Ann.77,313 (16)]-{Bose[(24)-Einstein[(25)]-Fermi[(27)]-Dirac[(1927)]}-``Benford''[Proc.Am.Phil.Soc. 78,4,551 (38)]-Kac[Maths.Stat.-Reasoning(55)]-Raimi[Sci.Am. 221,109 (69)...]-Jech[preprint, PSU(95)]-Hill[Proc.AMS 123,3,887(95)]-Browne[NYT(8/98)]-Antonoff-Smith-Siegel[AMS Joint-Mtg.,S.-D.(02)] algebraic-inversion to yield ONLY BOSE-EINSTEIN QUANTUM-statistics (BEQS) with ZERO-digit Bose-Einstein CONDENSATION(BEC) ``INTERSECTION''-BECOME-UNION to Barabasi[PRL 876,5632(01); Rev.Mod.Phys.74,47(02)...] Network /Net/GRAPH(!!!)-physics BEC: Strutt/Rayleigh(1881)-Polya(21)-``Anderson''(58)-Siegel[J.Non-crystalline-Sol.40,453(80)

  12. Analytical continuous slowing down model for nuclear reaction cross-section measurements by exploitation of stopping for projectile energy scanning and results for 13C(3He,α)12C and 13C(3He,p)15N

    Energy Technology Data Exchange (ETDEWEB)

    Möller, S., E-mail: s.moeller@fz-juelich.de

    2017-03-01

    Ion beam analysis is a set of precise, calibration-free and non-destructive methods for determining near-surface concentrations of potentially all elements and isotopes in a single measurement. For the determination of concentrations, the reaction cross-section of the projectile with the target has to be known, in general at the primary beam energy and all energies below it. To reduce the experimental effort of cross-section measurements, a new method is presented here. The method is based on the projectile energy reduction when passing through the matter of thick targets. The continuous slowing down approximation is used to determine cross-sections from a thick target at projectile energies below the primary energy by backward calculation of the measured product spectra. Results for 12C(3He,p)14N below 4.5 MeV are in rough agreement with literature data and reproduce the measured spectra. New data for reactions of 3He with 13C are acquired using the new technique. The applied approximations and further applications are discussed.
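    The backward-calculation idea (mapping depth in the thick target to the local projectile energy via the continuous slowing down approximation) can be sketched numerically. The power-law stopping power below is a toy stand-in for tabulated stopping data, not the stopping of 3He in carbon, and the units are nominal.

```python
import numpy as np

def stopping_power(E):
    """Illustrative stopping power S(E) = -dE/dx [MeV/um]: a toy power
    law standing in for tabulated stopping data."""
    return 0.05 * E ** (-0.5)

def energy_vs_depth(E0, dx=0.01, steps=5000):
    """Integrate dE/dx = -S(E) (continuous slowing down approximation)
    to map depth in the target to the local beam energy."""
    E = np.empty(steps + 1)
    E[0] = E0
    for i in range(steps):
        E[i + 1] = max(E[i] - stopping_power(E[i]) * dx, 0.0)
        if E[i + 1] == 0.0:          # beam stopped: fill the rest with zeros
            E[i + 1:] = 0.0
            break
    return E

E = energy_vs_depth(4.5)   # 4.5 MeV primary beam, as in the record above
print(E[0], E[-1])
```

Each depth bin of the measured product spectrum can then be assigned the local energy from this curve, which is how a single thick-target measurement scans the cross-section over a range of energies.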

  13. Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm

    African Journals Online (AJOL)

    In this paper, we shall use higher-order hybrid Gaussian kernel in a meshsize boosting algorithm in kernel density estimation. Bias reduction is guaranteed in this scheme like other existing schemes but uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...

  14. NLO corrections to the Kernel of the BKP-equations

    Energy Technology Data Exchange (ETDEWEB)

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.

  15. Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  16. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  17. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...

  18. 7 CFR 51.1441 - Half-kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  19. 7 CFR 51.2296 - Three-fourths half kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  20. 7 CFR 981.401 - Adjusted kernel weight.

    Science.gov (United States)

    2010-01-01

    Administrative Rules and Regulations, § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...

  1. 7 CFR 51.1403 - Kernel color classification.

    Science.gov (United States)

    2010-01-01

    United States Standards for Grades of Pecans in the Shell, Kernel Color Classification, § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  2. The Linux kernel as flexible product-line architecture

    NARCIS (Netherlands)

    M. de Jonge (Merijn)

    2002-01-01

    The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select what

  3. Digital signal processing with kernel methods

    CERN Document Server

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  4. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    Full Text Available In this study, a parsimonious scheme for wavelet kernel extreme learning machine (named PWKELM was introduced by combining wavelet theory and a parsimonious algorithm into kernel extreme learning machine (KELM. In the wavelet analysis, bases that were localized in time and frequency to represent various signals effectively were used. Wavelet kernel extreme learning machine (WELM maximized its capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm also incorporated significant wavelet kernel functions via iteration in virtue of Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results achieved from the synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real time performance.
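The record above builds on kernel extreme learning machine (KELM) with a wavelet kernel. A minimal sketch, assuming a Morlet-style translation-invariant wavelet kernel and the standard closed-form KELM ridge solution; the parsimonious Householder-based pruning step of PWKELM is not reproduced, and the parameter values are illustrative:

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """Translation-invariant wavelet kernel built from a Morlet-style mother wavelet."""
    d = X[:, None, :] - Y[None, :, :]                       # pairwise differences
    return np.prod(np.cos(1.75 * d / a) * np.exp(-d**2 / (2 * a**2)), axis=2)

class KELM:
    """Kernel ELM: closed-form ridge solution beta = (K + I/C)^(-1) y in kernel space."""
    def fit(self, X, y, C=1e6, a=1.0):
        self.X, self.a = X, a
        K = wavelet_kernel(X, X, a)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / C, y)
        return self

    def predict(self, Xnew):
        return wavelet_kernel(Xnew, self.X, self.a) @ self.beta
```

With a large regularisation constant C the model nearly interpolates smooth targets, which is the "frequency-rich signal" setting the abstract targets.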

  5. Ensemble Approach to Building Mercer Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive...

  6. Control Transfer in Operating System Kernels

    Science.gov (United States)

    1994-05-13

    microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the...review how I modified the Mach 3.0 kernel to use continuations. Because of Mach’s message-passing microkernel structure, interprocess communication was...critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating

  7. Uranium kernel formation via internal gelation

    International Nuclear Information System (INIS)

    Hunt, R.D.; Collins, J.L.

    2004-01-01

    In the 1970s and 1980s, the U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tristructural-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation, as well as small changes to the feed composition, increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)

  8. Quantum tomography, phase-space observables and generalized Markov kernels

    International Nuclear Information System (INIS)

    Pellonpaeae, Juha-Pekka

    2009-01-01

    We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.

  9. Determination of the Iodine Value in Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO)

    OpenAIRE

    Sitompul, Monica Angelina

    2015-01-01

    The iodine value was determined by titration for several samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO). The analysis gave iodine values of 0.16 g I2/100 g for Hydrogenated Palm Kernel Oil (A), 0.20 g I2/100 g for Hydrogenated Palm Kernel Oil (B), and 0.24 g I2/100 g for Hydrogenated Palm Kernel Oil (C); and, for Refined Bleached Deodorized Palm Kernel Oil (A), 17.51 g I2/100 g, Refined Bleached Deodorized Palm Kernel ...

  10. Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM

    Directory of Open Access Journals (Sweden)

    Chenchao Zhao

    2018-01-01

    Full Text Available Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
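A concrete special case of an exact hyperspherical heat kernel is the unit circle S^1, where the eigenmode series can be written down directly. A sketch (diffusion time, grid size, and truncation length are illustrative; the paper's general series in high-dimensional angular-momentum eigenmodes is not reproduced here):

```python
import numpy as np

def circle_heat_kernel(theta, t, n_terms=200):
    """Exact heat kernel on the unit circle S^1, as a uniformly convergent
    Fourier (angular-momentum eigenmode) series."""
    n = np.arange(1, n_terms + 1)[:, None]                  # mode numbers
    series = np.exp(-n**2 * t) * np.cos(n * theta[None, :]) # e^{-n^2 t} cos(n*theta)
    return (1.0 + 2.0 * series.sum(axis=0)) / (2.0 * np.pi)
```

Because the eigenvalues grow like n^2, the series is sharply truncatable for moderate diffusion times, and the result is a proper probability density over the circle.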

  11. Moderation of Neutrons Emitted by a Pulsed Source and Neutron Spectrometry Based on Slowing-Down Time; Ralentissement des Neutrons Emis par une Source Pulsee et Leur Spectrometrie en Fonction du Temps de Ralentissement; Zamedlenie nejtronov, ispuskaemykh impul'snym istochnikom, i spektrometriya nejtronov po vremeni zamedleniya; Moderacion de Neutrones Emitidos por una Pitente Pulsada y Espectrometria Neutronica Basada en el Tiempo de Frenado

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, A. A.; Isakov, A. I.; Kazarnovskij, M. V.; Popov, Ju. P.; Shapiro, F. L. [Fizicheskij Institut Im. P.N. Lebedeva AN SSSR, Moskva, SSSR (Russian Federation)

    1965-08-15

    Over the past ten years research has been going on at the P.N. Lebedev Physics Institute on the non-stationary moderation of neutrons in heavy media, the development of a method of neutron spectrometry based on the slowing-down time and the use of this method in studying the energy dependence of the cross-sections of nuclear reactions produced by neutrons with energy up to 30 keV. The authors review this work and discuss the results achieved. After a brief discussion of the theory of the non-stationary moderation and thermalization of neutrons the authors set forth the results of experimental studies of neutron moderation in graphite, iron and lead, and of neutron thermalization in lead. Using a pulsed neutron source and resonance detectors the distribution of slowing-down times was measured up to a series of fixed values for final neutron energy. The results are compared with theory, which takes into account the thermal motion of the moderator atoms; in the case of lead this thermal motion leads to a measurable spread in the slowing-down times at energies below 10 eV. The relationship between the mean velocity of neutrons in lead and the slowing-down time is measured in the subcadmium energy range and a comparison made with multigroup theory. The procedure for determining the energy dependence of neutron reaction cross-sections by slowing-down time is described and the potentialities of this method of spectrometry discussed. There follows a brief discussion of the results obtained in two fields of spectrometric measurement. Firstly, precise measurement of the relative excitation functions of the following reactions: He{sup 3}(n, p), Li{sup 6}(n, {alpha}), B{sup 10}(n, {alpha}) and N{sup 14}(n, p) - the most interesting results being the discovery of a constant negative component of the reaction cross-section and indications of the existence of an excited He{sup 4} level. Secondly, measurement of the energy dependence of averaged radiative capture cross

  12. Aflatoxin contamination of developing corn kernels.

    Science.gov (United States)

    Amer, M A

    2005-01-01

    Preharvest contamination of corn with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. Stage of growth and location of kernels on corn ears were found to be important factors in the process of kernel infection with A. flavus & A. parasiticus. The results showed a positive correlation between stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein, and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein content were reduced by both pathogens. Shoot length, seedling fresh weight, and seedling dry weight were also affected. Both pathogens induced a reduction in starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected: their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease, while total phenolic compounds increased. Histopathological studies indicated that A. flavus & A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred, and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus & A. parasiticus and aflatoxin production.

  13. Analog forecasting with dynamics-adapted kernels

    Science.gov (United States)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
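A minimal sketch of kernel-weighted analog forecasting with a Takens delay-coordinate embedding, as described above. The function name, kernel scale, and ensemble size are illustrative, and the directional dependence on the dynamical vector field used in the paper is omitted:

```python
import numpy as np

def analog_forecast(history, query, lead=1, delay=3, n_analogs=8, eps=0.05):
    """Weighted-ensemble analog forecast in delay-coordinate space.
    `query` is a delay vector of length `delay` describing the current state."""
    n = len(history) - delay - lead + 1
    emb = np.stack([history[i:i + delay] for i in range(n)])  # Takens embedding
    d2 = ((emb - query) ** 2).sum(axis=1)                     # squared distances to query
    idx = np.argsort(d2)[:n_analogs]                          # nearest historical analogs
    w = np.exp(-d2[idx] / eps)                                # Gaussian similarity kernel
    w /= w.sum()
    return float(w @ history[idx + delay + lead - 1])         # kernel-weighted successors
```

Replacing the single best analog with a similarity-weighted ensemble is exactly the modification the abstract credits with improved forecast skill.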

  14. OS X and iOS Kernel Programming

    CERN Document Server

    Halvorsen, Ole Henry

    2011-01-01

    OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i

  15. The Classification of Diabetes Mellitus Using Kernel k-means

    Science.gov (United States)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from k-means; it uses kernel learning and can therefore handle data that are not linearly separable, which distinguishes it from common k-means. The performance of kernel k-means in detecting diabetes mellitus was also compared with SOM algorithms. The experimental results show that kernel k-means performs well, and considerably better than SOM.
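A minimal kernel k-means sketch along these lines. Distances to cluster centroids are computed from the Gram matrix alone, which is what lets the method handle nonlinearly separable data; the farthest-point initialisation is an illustrative choice, not taken from the paper:

```python
import numpy as np

def kernel_kmeans(K, n_clusters, n_iter=100):
    """Kernel k-means: centroid distances expressed through the Gram matrix K only."""
    n = K.shape[0]
    diag = np.diag(K)
    # farthest-point initialisation in feature space (illustrative choice)
    anchors = [0]
    while len(anchors) < n_clusters:
        d = np.min([diag - 2 * K[:, a] + diag[a] for a in anchors], axis=0)
        anchors.append(int(d.argmax()))
    labels = np.argmin([diag - 2 * K[:, a] + diag[a] for a in anchors], axis=0)
    for _ in range(n_iter):
        dist = np.full((n, n_clusters), np.inf)
        for c in range(n_clusters):
            idx = labels == c
            nc = idx.sum()
            if nc == 0:
                continue
            # ||phi(x) - mu_c||^2 = K_xx - (2/|c|) sum_j K_xj + (1/|c|^2) sum_jl K_jl
            dist[:, c] = (diag - 2 * K[:, idx].sum(axis=1) / nc
                          + K[np.ix_(idx, idx)].sum() / nc**2)
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```

With an RBF Gram matrix this separates clusters that plain k-means on raw coordinates may not.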

  16. Object classification and detection with context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

    Context information is important in object representation. By embedding context cue of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial...... consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...

  17. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    Science.gov (United States)

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the fact that the differences between reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets of protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on the efficiency, and the proposed method produces an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduces computational time and thus improves efficiency.

  18. Kernel abortion in maize. II. Distribution of 14C among kernel carboydrates

    International Nuclear Information System (INIS)

    Hanft, J.M.; Jones, R.J.

    1986-01-01

    This study was designed to compare the uptake and distribution of 14C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [14C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [14C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of 14C in endosperm fructose, glucose, and sucrose.

  19. Fluidization calculation on nuclear fuel kernel coating

    International Nuclear Information System (INIS)

    Sukarsono; Wardaya; Indra-Suryawan

    1996-01-01

    The fluidization step of nuclear fuel kernel coating was calculated. The bottom of the reactor was in the form of a cone; on top of the cone there was a cylinder. The diameter of the cylinder for fluidization was 2 cm, and at the upper part of the cylinder it was 3 cm. Fluidization took place in the cone and the first cylinder. The maximum and minimum gas velocities for varied kernel diameters, and the porosity and bed height for varied stream gas velocities, were calculated. The calculation was done with a BASIC program.

  20. Reduced multiple empirical kernel learning machine.

    Science.gov (United States)

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is not acceptable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, meaning that the dot product of two vectors in the original feature space equals that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM requires simpler computation and less storage space, especially during testing. Finally, the experimental results show that RMEKLM is efficient and effective in terms of both complexity and classification.
The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of EKM-based MKL; (3
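The isomorphism claim above (dot products preserved between the original feature space and the generated subspace) can be checked directly for a standard empirical kernel mapping. A sketch, assuming the common EKM construction via eigendecomposition of the Gram matrix; this is not the paper's Gauss-elimination reduction:

```python
import numpy as np

def empirical_kernel_map(K, tol=1e-10):
    """Empirical kernel mapping: with K = Q L Q^T, define
    Phi_e(x) = L^{-1/2} Q^T k(x, .), so mapped dot products reproduce the kernel."""
    vals, Q = np.linalg.eigh(K)
    keep = vals > tol                       # drop numerically null directions
    vals, Q = vals[keep], Q[:, keep]

    def phi(kcol):                          # kcol: vector of kernel values k(x, x_i)
        return (Q.T @ kcol) / np.sqrt(vals)

    return phi
```

For a training point x_j, Phi_e(x_j) = L^{1/2} Q^T e_j, hence Phi_e(x_i) · Phi_e(x_j) = K_ij: the mapped space is isometric to the kernel-induced feature space on the sample.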

  1. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    National Research Council Canada - National Science Library

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

    .... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...

  2. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...

  3. Influence of differently processed mango seed kernel meal on ...

    African Journals Online (AJOL)

    Influence of differently processed mango seed kernel meal on performance response of West African ... and TD( consisted spear grass and parboiled mango seed kernel meal with concentrate diet in a ratio of 35:30:35).

  4. On methods to increase the security of the Linux kernel

    International Nuclear Information System (INIS)

    Matvejchikov, I.V.

    2014-01-01

    Methods to increase the security of the Linux kernel for the implementation of imposed protection tools have been examined. The methods of incorporation into various subsystems of the kernel on the x86 architecture have been described.

  5. Linear and kernel methods for multi- and hypervariate change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    . Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual...... formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution......, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...

  6. Kernel methods in orthogonalization of multi- and hypervariate data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis...... via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution also known as the kernel trick these inner products between the mappings...... are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...

  7. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a distinct physical interpretation. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and therefore presents opportunities for improving the quality of RTM images.

  8. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on...several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  9. Relationship between attenuation coefficients and dose-spread kernels

    International Nuclear Information System (INIS)

    Boyer, A.L.

    1988-01-01

    Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods
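The convolution relationship described above can be illustrated in one dimension, including the energy-conservation check the abstract mentions. The shapes, values, and the exponential kernel are made up for illustration:

```python
import numpy as np

# Toy 1-D dose calculation: dose = primary fluence (TERMA) convolved with a
# dose-spread kernel.
terma = np.zeros(101)
terma[30:60] = 1.0                      # uniform primary beam segment

x = np.linspace(-5.0, 5.0, 51)
kernel = np.exp(-np.abs(x))             # illustrative exponential spread kernel
kernel /= kernel.sum()                  # unit integral: the kernel deposits all
                                        # of its energy (conservation condition)

dose = np.convolve(terma, kernel, mode="same")
```

Because the kernel integrates to one, the total dose must equal the total released energy; this is the kind of conservation-of-energy check the record proposes for kernels computed analytically or by Monte Carlo.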

  10. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    International Nuclear Information System (INIS)

    Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric

    2010-01-01

    Babcock and Wilcox (B and W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-(micro)m, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated-particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B and W produced 425-(micro)m, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B and W also produced 500-(micro)m, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B and W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) has increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  11. Consistent Estimation of Pricing Kernels from Noisy Price Data

    OpenAIRE

    Vladislav Kargin

    2003-01-01

    If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: ε-entropy, non-parametric estimation, pricing kernel, inverse problems.
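A sketch of the constrained (non-negative) least-squares estimate mentioned above, solved here by projected gradient descent for self-containedness rather than by any method from the paper; the matrix of payoffs, the price vector, and all sizes are illustrative:

```python
import numpy as np

def nnls_pricing_kernel(X, p, n_iter=5000):
    """Constrained least squares with a non-negativity constraint:
    min ||X m - p||^2 subject to m >= 0, by projected gradient descent."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / largest squared singular value
    m = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ m - p)                # least-squares gradient
        m = np.clip(m - step * grad, 0.0, None) # project back onto m >= 0
    return m
```

Here X would hold asset payoffs across states, p the observed prices, and m the (state-price / pricing-kernel) weights constrained to be non-negative.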

  12. Quantum logic in dagger kernel categories

    NARCIS (Netherlands)

    Heunen, C.; Jacobs, B.P.F.

    2009-01-01

    This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial

  13. Quantum logic in dagger kernel categories

    NARCIS (Netherlands)

    Heunen, C.; Jacobs, B.P.F.; Coecke, B.; Panangaden, P.; Selinger, P.

    2011-01-01

    This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial

  14. Symbol recognition with kernel density matching.

    Science.gov (United States)

    Zhang, Wan; Wenyin, Liu; Zhang, Kun

    2006-12-01

    We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
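
    As a rough sketch of the matching step (not the authors' implementation): two point sets, standing in for symbol strokes, are turned into normalized 2D kernel density estimates on a common grid and compared by the Kullback-Leibler divergence. The bandwidth, grid and test shapes below are made up for illustration.

```python
# Sketch: compare two 2D point sets by estimating kernel densities on a
# grid and measuring their Kullback-Leibler divergence.  Bandwidth, grid
# resolution and the Gaussian kernel are illustrative choices.
import math

def kde_2d(points, grid, bandwidth=0.3):
    """Gaussian kernel density estimate, evaluated and normalized on `grid`."""
    dens = []
    for gx, gy in grid:
        s = sum(math.exp(-((gx - px) ** 2 + (gy - py) ** 2)
                         / (2 * bandwidth ** 2)) for px, py in points)
        dens.append(s)
    total = sum(dens)
    return [d / total for d in dens]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) over a common grid; eps guards against log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

if __name__ == "__main__":
    grid = [(x / 10.0, y / 10.0) for x in range(11) for y in range(11)]
    square = [(0.1, 0.1), (0.1, 0.9), (0.9, 0.1), (0.9, 0.9)]
    line = [(0.1, 0.5), (0.5, 0.5), (0.9, 0.5)]
    p = kde_2d(square, grid)
    q = kde_2d(line, grid)
    # A density compared with itself has zero divergence; dissimilar
    # symbols give a strictly positive score.
    print(kl_divergence(p, p), kl_divergence(p, q))
```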

  15. Flexible Scheduling in Multimedia Kernels: An Overview

    NARCIS (Netherlands)

    Jansen, P.G.; Scholten, Johan; Laan, Rene; Chow, W.S.

    1999-01-01

    Current Hard Real-Time (HRT) kernels have their timely behaviour guaranteed at the cost of a rather restrictive use of the available resources. This makes current HRT scheduling techniques inadequate for use in a multimedia environment, where we can make a considerable profit by a better and more

  16. Reproducing kernel Hilbert spaces of Gaussian priors

    NARCIS (Netherlands)

    Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.

    2008-01-01

    We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described

  17. A synthesis of empirical plant dispersal kernels

    Czech Academy of Sciences Publication Activity Database

    Bullock, J. M.; González, L. M.; Tamme, R.; Götzenberger, Lars; White, S. M.; Pärtel, M.; Hooftman, D. A. P.

    2017-01-01

    Roč. 105, č. 1 (2017), s. 6-19 ISSN 0022-0477 Institutional support: RVO:67985939 Keywords : dispersal kernel * dispersal mode * probability density function Subject RIV: EH - Ecology, Behaviour OBOR OECD: Ecology Impact factor: 5.813, year: 2016

  18. Analytic continuation of weighted Bergman kernels

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2010-01-01

    Roč. 94, č. 6 (2010), s. 622-650 ISSN 0021-7824 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * analytic continuation * Toeplitz operator Subject RIV: BA - General Mathematics Impact factor: 1.450, year: 2010 http://www.sciencedirect.com/science/article/pii/S0021782410000942

  19. On convergence of kernel learning estimators

    NARCIS (Netherlands)

    Norkin, V.I.; Keyzer, M.A.

    2009-01-01

    The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKHS). The objective (risk) functional depends on functions from this RKHS and takes the form of a mathematical expectation (integral) of a nonnegative integrand (loss function) over a probability

  20. Analytic properties of the Virasoro modular kernel

    Energy Technology Data Exchange (ETDEWEB)

    Nemkov, Nikita [Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Institute for Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); National University of Science and Technology MISIS, The Laboratory of Superconducting metamaterials, Moscow (Russian Federation)

    2017-06-15

    On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block. (orig.)

  1. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...

  2. Kernel Temporal Differences for Neural Decoding

    Science.gov (United States)

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
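
    A minimal sketch of the kernel temporal-difference idea for policy evaluation, simplified relative to KTD(λ): the value function is a kernel expansion whose centers are the visited states and whose weights are set by the TD error. The 1D chain task, Gaussian kernel and step size are illustrative assumptions, not the paper's experiments.

```python
# Sketch of kernel temporal-difference learning for policy evaluation:
# V(x) = sum_i w_i * k(c_i, x), and each transition appends a new center
# weighted by the TD error.  Task and parameters are illustrative.
import math

def gauss(a, b, width=0.5):
    return math.exp(-(a - b) ** 2 / (2 * width ** 2))

class KernelTD:
    def __init__(self, lr=0.2, gamma=0.9):
        self.centers, self.weights = [], []
        self.lr, self.gamma = lr, gamma

    def value(self, x):
        return sum(w * gauss(c, x) for c, w in zip(self.centers, self.weights))

    def update(self, x, reward, x_next, terminal=False):
        target = reward + (0.0 if terminal else self.gamma * self.value(x_next))
        delta = target - self.value(x)          # TD error
        self.centers.append(x)                  # grow the expansion ...
        self.weights.append(self.lr * delta)    # ... weighted by the TD error

if __name__ == "__main__":
    # Chain 0 -> 1 -> 2 -> terminal, reward 1.0 on the final transition.
    td = KernelTD()
    for _ in range(200):
        for s in (0, 1, 2):
            r = 1.0 if s == 2 else 0.0
            td.update(float(s), r, float(s + 1), terminal=(s == 2))
    # States closer to the reward should have higher estimated value.
    print(td.value(0.0), td.value(1.0), td.value(2.0))
```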

  3. Scattering kernels and cross sections working group

    International Nuclear Information System (INIS)

    Russell, G.; MacFarlane, B.; Brun, T.

    1998-01-01

    Topics addressed by this working group are: (1) immediate needs of the cold-moderator community and how to fill them; (2) synthetic scattering kernels; (3) very simple synthetic scattering functions; (4) measurements of interest; and (5) general issues. Brief summaries are given for each of these topics

  4. Enhanced gluten properties in soft kernel durum wheat

    Science.gov (United States)

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  5. Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...

    African Journals Online (AJOL)

    Estimated errors of ±0.18 and ±0.2 are envisaged when applying the models to predict palm kernel and sesame oil colours, respectively. Keywords: Palm kernel, Sesame, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38 ...

  6. Stable Kernel Representations as Nonlinear Left Coprime Factorizations

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel

  7. 7 CFR 981.60 - Determination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  8. 21 CFR 176.350 - Tamarind seed kernel powder.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  9. End-use quality of soft kernel durum wheat

    Science.gov (United States)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  10. Heat kernel analysis for Bessel operators on symmetric cones

    DEFF Research Database (Denmark)

    Möllers, Jan

    2014-01-01

    . The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...

  11. A Fast and Simple Graph Kernel for RDF

    NARCIS (Netherlands)

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster

  12. 7 CFR 981.61 - Redetermination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  13. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clus- ..... able at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.
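
    The underlying kernel k-means computation can be sketched as follows: feature-space distances to cluster means are expressed purely through the kernel matrix. The RBF kernel, toy data and naive initialization below are illustrative; this is the standard batch algorithm, not the paper's faster single-pass variant.

```python
# Sketch of kernel k-means: assignments are made in feature space using
# only the kernel matrix, via
#   ||phi(x_i) - mu_c||^2 = K_ii - 2/|c| sum_{j in c} K_ij
#                           + 1/|c|^2 sum_{j,l in c} K_jl.
# Data, kernel and initialization are illustrative.
import math

def rbf(a, b, gamma=2.0):
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def kernel_kmeans(X, k, iters=20):
    n = len(X)
    K = [[rbf(X[i], X[j]) for j in range(n)] for i in range(n)]
    labels = [i % k for i in range(n)]              # naive initialization
    for _ in range(iters):
        new = []
        for i in range(n):
            best, best_d = 0, float("inf")
            for c in range(k):
                members = [j for j in range(n) if labels[j] == c]
                if not members:                     # skip empty clusters
                    continue
                m = len(members)
                d = (K[i][i]
                     - 2.0 / m * sum(K[i][j] for j in members)
                     + 1.0 / m ** 2 * sum(K[j][l] for j in members
                                          for l in members))
                if d < best_d:
                    best, best_d = c, d
            new.append(best)
        if new == labels:                           # converged
            break
        labels = new
    return labels

if __name__ == "__main__":
    # Two well-separated 2D blobs.
    X = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
         (3.0, 3.0), (3.1, 3.0), (3.0, 3.1)]
    print(kernel_kmeans(X, 2))  # groups the first three apart from the rest
```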

  14. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba .

  15. Kernel based orthogonalization for change detection in hyperspectral images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via...... analysis all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and unlike kernel PCA, kernel MNF...

  16. A laser optical method for detecting corn kernel defects

    Energy Technology Data Exchange (ETDEWEB)

    Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.

    1984-01-01

    An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy; while surface-split kernels were detected with about 80% accuracy. (author)

  17. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  18. Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws

    Directory of Open Access Journals (Sweden)

    Mohammed D. ABDULMALIK

    2008-06-01

    Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements for the 64-bit edition of Windows Vista. We also point out some weak areas (flaws) that can be attacked by malicious software, leading to compromise of the kernel.

  19. Difference between standard and quasi-conformal BFKL kernels

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Papa, A.

    2012-01-01

    As it was recently shown, the colour singlet BFKL kernel, taken in Möbius representation in the space of impact parameters, can be written in quasi-conformal shape, which is unbelievably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculated the difference between standard and quasi-conformal BFKL kernels in momentum space and discovered that it is rather simple. Therefore we come to the conclusion that the simplicity of the quasi-conformal kernel is caused mainly by using the impact parameter space.

  20. Analytic scattering kernels for neutron thermalization studies

    International Nuclear Information System (INIS)

    Sears, V.F.

    1990-01-01

    Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H2/D2 mixtures. The total scattering cross sections are also calculated and compared with available experimental results

  1. Quantized kernel least mean square algorithm.

    Science.gov (United States)

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. An analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
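
    A compact sketch of the quantization idea: a new input within distance eps of an existing center updates that center's coefficient instead of growing the network. The step size, kernel width and toy regression target below are illustrative assumptions, not the paper's settings.

```python
# Sketch of quantized kernel least-mean-square (QKLMS): a sample within
# distance `eps` of an existing RBF center merely updates that center's
# coefficient, so the network stays compact.  Parameters are illustrative.
import math, random

class QKLMS:
    def __init__(self, lr=0.5, eps=0.1, width=0.3):
        self.centers, self.alphas = [], []
        self.lr, self.eps, self.width = lr, eps, width

    def predict(self, x):
        return sum(a * math.exp(-(c - x) ** 2 / (2 * self.width ** 2))
                   for c, a in zip(self.centers, self.alphas))

    def update(self, x, y):
        err = y - self.predict(x)
        if self.centers:
            j = min(range(len(self.centers)),
                    key=lambda i: abs(self.centers[i] - x))
            if abs(self.centers[j] - x) <= self.eps:
                self.alphas[j] += self.lr * err   # quantization: reuse center
                return
        self.centers.append(x)                    # otherwise grow the network
        self.alphas.append(self.lr * err)

if __name__ == "__main__":
    random.seed(0)
    f = QKLMS()
    for _ in range(2000):
        x = random.uniform(-1, 1)
        f.update(x, math.sin(3 * x))
    # The dictionary stays bounded (centers are > eps apart) while the
    # filter approximates sin(3x).
    print(len(f.centers), f.predict(0.5))
```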

  2. Kernel-based tests for joint independence

    DEFF Research Database (Denmark)

    Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard

    2018-01-01

    We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two-variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test
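
    The empirical dHSIC statistic is straightforward to compute from the $d$ Gram matrices. The following sketch uses Gaussian kernels and toy data, both illustrative choices; it implements the plain V-statistic estimator, not the paper's tests.

```python
# Sketch of the d-variable Hilbert-Schmidt independence criterion (dHSIC)
# V-statistic with Gaussian kernels:
#   dHSIC = 1/n^2 sum_{i,j} prod_m K^m_ij
#         + prod_m (1/n^2 sum_{i,j} K^m_ij)
#         - 2/n sum_i prod_m (1/n sum_j K^m_ij).
# Data and bandwidth are illustrative.
import math, random

def gram(xs, width=1.0):
    return [[math.exp(-(a - b) ** 2 / (2 * width ** 2)) for b in xs]
            for a in xs]

def dhsic(samples):
    """samples: list of d lists, each holding n scalar observations."""
    n = len(samples[0])
    Ks = [gram(xs) for xs in samples]
    term1 = sum(math.prod(K[i][j] for K in Ks)
                for i in range(n) for j in range(n)) / n ** 2
    term2 = math.prod(sum(K[i][j] for i in range(n) for j in range(n)) / n ** 2
                      for K in Ks)
    term3 = 2.0 / n * sum(math.prod(sum(K[i][j] for j in range(n)) / n
                                    for K in Ks)
                          for i in range(n))
    return term1 + term2 - term3

if __name__ == "__main__":
    random.seed(1)
    n = 100
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [random.gauss(0, 1) for _ in range(n)]       # independent of x
    z = [xi + 0.1 * random.gauss(0, 1) for xi in x]  # dependent on x
    # The dependent pair should score much higher than the independent one.
    print(dhsic([x, y]), dhsic([x, z]))
```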

  3. Wilson Dslash Kernel From Lattice QCD Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Balint [Jefferson Lab, Newport News, VA; Smelyanskiy, Mikhail [Parallel Computing Lab, Intel Corporation, California, USA; Kalamkar, Dhiraj D. [Parallel Computing Lab, Intel Corporation, India; Vaidyanathan, Karthikeyan [Parallel Computing Lab, Intel Corporation, India

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the technique gives excellent performance on the regular Xeon architecture as well.

  4. A Kernel for Protein Secondary Structure Prediction

    OpenAIRE

    Guermeur , Yann; Lifchitz , Alain; Vert , Régis

    2004-01-01

    http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=10338&mode=toc; Multi-class support vector machines have already proved efficient in protein secondary structure prediction as ensemble methods, to combine the outputs of sets of classifiers based on different principles. In this chapter, their implementation as basic prediction methods, processing the primary structure or the profile of multiple alignments, is investigated. A kernel devoted to the task is in...

  5. Scalar contribution to the BFKL kernel

    International Nuclear Information System (INIS)

    Gerasimov, R. E.; Fadin, V. S.

    2010-01-01

    The contribution of scalar particles to the kernel of the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation is calculated. A large cancellation between the virtual and real parts of this contribution, analogous to the cancellation in the quark contribution in QCD, is observed, and its reason is discovered. This reason has a common nature for particles of any spin, and understanding it permits obtaining the total contribution without the complicated calculations that are otherwise necessary for finding the separate pieces.

  6. Weighted Bergman Kernels for Logarithmic Weights

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2010-01-01

    Roč. 6, č. 3 (2010), s. 781-813 ISSN 1558-8599 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * Toeplitz operator * logarithmic weight * pseudodifferential operator Subject RIV: BA - General Mathematics Impact factor: 0.462, year: 2010 http://www.intlpress.com/site/pub/pages/journals/items/pamq/content/vols/0006/0003/a008/

  7. Heat kernels and zeta functions on fractals

    International Nuclear Information System (INIS)

    Dunne, Gerald V

    2012-01-01

    On fractals, spectral functions such as heat kernels and zeta functions exhibit novel features, very different from their behaviour on regular smooth manifolds, and these can have important physical consequences for both classical and quantum physics in systems having fractal properties. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’. (paper)

  8. Exploiting graph kernels for high performance biomedical relation extraction.

    Science.gov (United States)

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task specific heuristic or rules. In comparison, the state of the art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule based system employing task specific post processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with APG kernel that attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence level relation extraction is not significant. 
In our evaluation of ASM for the PPI task, ASM

  9. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour-intensive and, due to its subjective nature, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide a much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85 %. The shape descriptors themselves were not specific enough to distinguish individual kernels.

  10. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  11. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line-scanning hyperspectral camera using broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods, including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability of the data. Therefore we propose to use kernel versions of these projections; the kernel maximum autocorrelation factor transform outperforms the linear methods as well as kernel principal components in producing interesting projections of the data.

  12. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Science.gov (United States)

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method, it is always symmetric, is positive, always provides 1.0 for self-similarity and it can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly plausible for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several magnitudes faster than SW or LAK in our tests. LZW-Kernel is implemented as a standalone C code and is a free open-source program distributed under GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
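
    A simplified sketch of the mechanism, not the published kernel's exact weighting: each sequence is parsed once into its LZW code-word dictionary, and a pair of sequences is scored by the normalized overlap of the two dictionaries, so that self-similarity is exactly 1.0.

```python
# Simplified sketch of the LZW-Kernel idea: build each sequence's LZW
# code-word dictionary in a single pass, then score a pair by the
# cosine-normalized overlap of the code-word sets.  The published
# kernel's weighting differs; this only illustrates the mechanism.

def lzw_codewords(seq):
    """One-pass LZW parse; returns the set of code words (substrings)."""
    words = set(seq)                 # dictionary starts with single symbols
    w = ""
    for ch in seq:
        if w + ch in words:
            w += ch                  # extend the current match
        else:
            words.add(w + ch)        # emit a new code word
            w = ch
    return words

def lzw_kernel(a, b):
    wa, wb = lzw_codewords(a), lzw_codewords(b)
    inter = len(wa & wb)
    return inter / (len(wa) * len(wb)) ** 0.5   # self-similarity is 1.0

if __name__ == "__main__":
    s1 = "MKVLAAGIVMKVLAAGIV"
    s2 = "MKVLAAGLVMKVLAAGLV"   # one substitution per repeat
    s3 = "QQWWEERRTTYYUUIIOO"
    # Similar sequences share many code words; unrelated ones share few.
    print(lzw_kernel(s1, s1), lzw_kernel(s1, s2), lzw_kernel(s1, s3))
```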

  13. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformations. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not described in the literature yet. The traditional methods only...... have two factors that are useful for segmentation, and none of them can be used to segment the two types of meat. The kernel based methods have a lot of useful factors and they are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most...... useful factor of PCA and kernel based PCA respectively in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat, and in general that factor is much more distinct compared to the traditional factor. After the orthogonal transformation a simple thresholding...

  14. Classification of maize kernels using NIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Williams, Paul; Kucheryavskiy, Sergey V.

    2016-01-01

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual...... and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale....

  15. Ideal gas scattering kernel for energy dependent cross-sections

    International Nuclear Information System (INIS)

    Rothenstein, W.; Dagan, R.

    1998-01-01

    A third, and final, paper on the calculation of the joint kernel for neutron scattering by an ideal gas in thermal agitation is presented, when the scattering cross-section is energy dependent. The kernel is a function of the neutron energy after scattering, and of the cosine of the scattering angle, as in the case of the ideal gas kernel for a constant bound atom scattering cross-section. The final expression is suitable for numerical calculations

  16. Embedded real-time operating system micro kernel design

    Science.gov (United States)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require real-time behavior. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed, consisting of six parts: a critical section process, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results show that the designed micro kernel is stable and reliable and responds quickly while operating in an application system.

  17. An SVM model with hybrid kernels for hydrological time series

    Science.gov (United States)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays a key role. Conventional SVM models mostly use a single type of kernel function, e.g., the radial basis kernel function. Given that several featured kernel functions are available, each with its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to the SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of the radial basis kernel and the polynomial kernel for the forecast of monthly flowrate at two gaging stations using the SVM approach. The results indicate significant improvement in the accuracy of the predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such a hybrid kernel approach for SVM applications.
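
    The hybrid kernel described above can be sketched as follows. This is a schematic illustration, not the authors' code; the mixture weight, gamma, degree and coef0 values are arbitrary assumptions. The key property is that a nonnegative combination of positive semi-definite kernels is itself positive semi-definite, so the mixture is a valid SVM kernel.

```python
import numpy as np

def rbf_gram(X, Y, gamma=0.5):
    """Radial basis function kernel Gram matrix."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_gram(X, Y, degree=2, coef0=1.0):
    """Polynomial kernel Gram matrix."""
    return (X @ Y.T + coef0) ** degree

def hybrid_gram(X, Y, w=0.6):
    # Convex combination of two PSD kernels -> PSD, hence a valid SVM kernel.
    return w * rbf_gram(X, Y) + (1 - w) * poly_gram(X, Y)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (40, 3))   # stand-in predictors (e.g. lagged flows)
K = hybrid_gram(X, X)                # symmetric, positive semi-definite
```

    Because the result is a valid Gram matrix, a callable like hybrid_gram can be passed directly as the kernel argument of an SVM implementation such as scikit-learn's SVR.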

  18. Influence of wheat kernel physical properties on the pulverizing process.

    Science.gov (United States)

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with similar protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for the analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ·kg⁻¹ to 159 kJ·kg⁻¹. On the basis of the data obtained, many significant correlations (p < 0.05) were found between kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.

  19. Dose point kernels for beta-emitting radioisotopes

    International Nuclear Information System (INIS)

    Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.

    1986-01-01

    Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. That treatment neglects fluctuations in energy deposition, an effect which has later been incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed. This provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been obtained through an extension of the Bethe-Bacher approximation, and tested against the exact expression. Simplified expressions for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for 32P with experimental data indicates good agreement, with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables

  20. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    Science.gov (United States)

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women, so it is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM, with its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention, but how to select or construct an appropriate kernel for a specific problem still needs further investigation. Here we propose a novel kernel (the Hadamard Kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard Kernel outperforms the classical kernels and the correlation kernel in terms of Area under the ROC Curve (AUC) values on a number of real-world data sets adopted to test the performance of the different methods. Hadamard Kernel SVM is effective for breast cancer predictions, in terms of both prognosis and diagnosis, and may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.

  1. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by this, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF...... analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given....
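
    The coarse-to-fine grid search described in the abstract is a generic procedure. A one-parameter Python sketch, with a toy objective standing in for the model SNR and all names hypothetical, could look like this:

```python
import numpy as np

def refined_grid_search(objective, lo, hi, steps=3, pts=7):
    """Coarse-to-fine 1-D grid search: re-grid around the best point each step."""
    best_x, best_f = None, -np.inf
    for _ in range(steps):
        grid = np.linspace(lo, hi, pts)
        vals = [objective(x) for x in grid]
        i = int(np.argmax(vals))
        if vals[i] > best_f:
            best_x, best_f = grid[i], vals[i]
        span = (hi - lo) / (pts - 1)
        lo, hi = best_x - span, best_x + span   # zoom in around the maximizer
    return best_x, best_f

# toy stand-in for the model SNR, with a single peak at x = 2.0
x, f = refined_grid_search(lambda x: -(x - 2.0) ** 2, 0.0, 10.0)
```

    In the paper the search runs over both the kernel parameters and the regularization parameter, i.e. the same zooming idea applied on a multi-dimensional grid.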

  2. Analysis of Advanced Fuel Kernel Technology

    International Nuclear Information System (INIS)

    Oh, Seung Chul; Jeong, Kyung Chai; Kim, Yeon Ku; Kim, Young Min; Kim, Woong Ki; Lee, Young Woo; Cho, Moon Sung

    2010-03-01

    The reference fuel for prismatic reactor concepts is based on the use of an LEU UCO TRISO fissile particle. This fuel form was selected in the early 1980s for large high-temperature gas-cooled reactor (HTGR) concepts using LEU, and the selection was reconfirmed for modular designs in the mid-1980s. Limited existing irradiation data on LEU UCO TRISO fuel indicate the need for a substantial improvement in performance with regard to in-pile gaseous fission product release. Existing accident testing data on LEU UCO TRISO fuel are extremely limited, but it is generally expected that performance would be similar to that of LEU UO2 TRISO fuel if performance under irradiation were successfully improved. Initial HTGR fuel technology was based on carbide fuel forms. In the early 1980s, as HTGR technology was transitioning from high-enriched uranium (HEU) fuel to LEU fuel, an initial effort focused on LEU prismatic designs for large HTGRs resulted in the selection of UCO kernels for the fissile particles and thorium oxide (ThO2) for the fertile particles. The primary reason for selection of the UCO kernel over UO2 was reduced CO pressure, allowing higher burnup for equivalent coating thicknesses and reducing the potential for kernel migration, an important failure mechanism in earlier fuels. A subsequent assessment in the mid-1980s considering modular HTGR concepts again reached agreement on UCO for the fissile particle in a prismatic design. In the early 1990s, plant cost-reduction studies led to a decision to change the fertile material from thorium to natural uranium, primarily because of the lower long-term decay heat level of the natural uranium fissile particles. Ongoing economic optimization in combination with the anticipated capabilities of the UCO particles resulted in a peak fissile particle burnup projection of 26% FIMA in steam cycle and gas turbine concepts.

  3. Learning Rotation for Kernel Correlation Filter

    KAUST Repository

    Hamdi, Abdullah

    2017-08-11

    Kernel Correlation Filters have shown a very promising scheme for visual tracking in terms of speed and accuracy on several benchmarks. However, the approach suffers from problems that affect its performance, such as occlusion, rotation and scale change. This paper tries to tackle the problem of rotation by reformulating the optimization problem for learning the correlation filter. This modification (RKCF) includes learning a rotation filter that utilizes the circulant structure of HOG features to estimate rotation from one frame to another and enhance the detection of KCF. Hence it gains a boost in overall accuracy on many of the OTB50 dataset videos with minimal additional computation.

  4. Research of Performance Linux Kernel File Systems

    Directory of Open Access Journals (Sweden)

    Andrey Vladimirovich Ostroukh

    2015-10-01

    Full Text Available The article describes the most common Linux kernel file systems. The research was carried out on a personal computer, a typical workstation running GNU/Linux, whose characteristics are given in the article. The software necessary for measuring file-system performance was installed on this computer. Based on the results, conclusions are drawn and recommendations for the use of the file systems are proposed, identifying and recommending the best ways to store data.

  5. Fixed kernel regression for voltammogram feature extraction

    International Nuclear Information System (INIS)

    Acevedo Rodriguez, F J; López-Sastre, R J; Gil-Jiménez, P; Maldonado Bascón, S; Ruiz-Reyes, N

    2009-01-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals
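
    A minimal sketch of the idea, under the assumption that the fixed kernels are Gaussians at preset positions along the sweep axis (the paper's exact kernel shape and placement may differ): the voltammogram is projected onto a fixed-kernel design matrix by least squares, and the resulting few coefficients serve as the feature vector.

```python
import numpy as np

def kernel_design(t, centers, width=0.05):
    """Design matrix of fixed Gaussian kernels evaluated along the sweep axis t."""
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2.0 * width**2))

t = np.linspace(0.0, 1.0, 400)                    # normalized potential axis
# synthetic voltammogram: one oxidation and one reduction peak
signal = np.exp(-((t - 0.3) ** 2) / 0.02) - 0.6 * np.exp(-((t - 0.7) ** 2) / 0.03)

centers = np.linspace(0.0, 1.0, 12)               # 12 fixed kernel positions
Phi = kernel_design(t, centers)
coef, *_ = np.linalg.lstsq(Phi, signal, rcond=None)   # 12-coefficient feature vector
recon = Phi @ coef                                    # reconstruction check
```

    The 400-sample curve is thus reduced to 12 coefficients, which can then feed a downstream classifier as in the wine example of the abstract.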

  6. Reciprocity relation for multichannel coupling kernels

    International Nuclear Information System (INIS)

    Cotanch, S.R.; Satchler, G.R.

    1981-01-01

    Assuming time-reversal invariance of the many-body Hamiltonian, it is proven that the kernels in a general coupled-channels formulation are symmetric, to within a specified spin-dependent phase, under the interchange of channel labels and coordinates. The theorem is valid for both Hermitian and suitably chosen non-Hermitian Hamiltonians which contain complex effective interactions. While of direct practical consequence for nuclear rearrangement reactions, the reciprocity relation is also appropriate for other areas of physics which involve coupled-channels analysis

  7. Wheat kernel dimensions: how do they contribute to kernel weight at ...

    Indian Academy of Sciences (India)

    2011-12-02

    Dec 2, 2011 ... yield components, is greatly influenced by kernel dimensions. (KD), such as ..... six linkage gaps, and it covered 3010.70 cM of the whole genome with an ...... Ersoz E. et al. 2009 The Genetic architecture of maize flowering.

  8. Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods

    DEFF Research Database (Denmark)

    Arenas-Garcia, J.; Petersen, K.; Camps-Valls, G.

    2013-01-01

    correlation analysis (CCA), and orthonormalized PLS (OPLS), as well as their nonlinear extensions derived by means of the theory of reproducing kernel Hilbert spaces (RKHSs). We also review their connections to other methods for classification and statistical dependence estimation and introduce some recent...

  9. Kernel learning at the first level of inference.

    Science.gov (United States)

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. The Kernel Estimation in Biosystems Engineering

    Directory of Open Access Journals (Sweden)

    Esperanza Ayuga Téllez

    2008-04-01

    Full Text Available In many fields of biosystems engineering, it is common to find works in which statistical information is analysed that violates the basic hypotheses necessary for the conventional forecasting methods. For those situations, it is necessary to find alternative methods that allow the statistical analysis despite those infringements. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are used rather than an a priori assumption about the global shape of the target function (e.g., linear or quadratic). In this paper a few basic decision rules are enunciated for the application of the non-parametric estimation method. These statistical rules constitute the first step towards building a user-method interface for the consistent application of kernel estimation by non-expert users. To reach this aim, univariate and multivariate estimation methods and density functions were analysed, as well as regression estimators. In some cases the models to be applied in different situations, based on simulations, were defined. Different biosystems engineering applications of kernel estimation are also analysed in this review.
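
    As a concrete example of the univariate density case discussed above, the following is a minimal Gaussian kernel density estimator with Silverman's rule-of-thumb bandwidth. This is an illustrative sketch, not code from the review; the sample stands in for any measured biological trait.

```python
import numpy as np

def gaussian_kde(data, x, bandwidth=None):
    """Univariate kernel density estimate with a Gaussian kernel."""
    data = np.asarray(data, dtype=float)
    if bandwidth is None:
        # Silverman's rule of thumb: h = 1.06 * s * n^(-1/5)
        bandwidth = 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)
    u = (x[:, None] - data[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
sample = rng.normal(5.0, 1.0, 500)     # e.g. a measured trait, roughly N(5, 1)
xs = np.linspace(0.0, 10.0, 201)
density = gaussian_kde(sample, xs)     # integrates to ~1, peaks near 5
```

    No global shape is assumed: only the local weighting by the kernel and the bandwidth choice determine the estimate, which is the point of the non-parametric approach described in the review.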

  11. Consistent Valuation across Curves Using Pricing Kernels

    Directory of Open Access Journals (Sweden)

    Andrea Macrina

    2018-03-01

    Full Text Available The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets. As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised and single-curve models extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.

  12. Aligning Biomolecular Networks Using Modular Graph Kernels

    Science.gov (United States)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

    Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offer a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.

  13. Pareto-path multitask multiple kernel learning.

    Science.gov (United States)

    Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2015-01-01

    A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.

  14. Formal truncations of connected kernel equations

    International Nuclear Information System (INIS)

    Dixon, R.M.

    1977-01-01

    The Connected Kernel Equations (CKE) of Alt, Grassberger and Sandhas (AGS); Kouri, Levin and Tobocman (KLT); and Bencze, Redish and Sloan (BRS) are compared against reaction theory criteria after formal channel space and/or operator truncations have been introduced. The Channel Coupling Class concept is used to study the structure of these CKE's. The related wave function formalism of Sandhas, of L'Huillier, Redish and Tandy and of Kouri, Krueger and Levin are also presented. New N-body connected kernel equations which are generalizations of the Lovelace three-body equations are derived. A method for systematically constructing fewer body models from the N-body BRS and generalized Lovelace (GL) equations is developed. The formally truncated AGS, BRS, KLT and GL equations are analyzed by employing the criteria of reciprocity and two-cluster unitarity. Reciprocity considerations suggest that formal truncations of BRS, KLT and GL equations can lead to reciprocity-violating results. This study suggests that atomic problems should employ three-cluster connected truncations and that the two-cluster connected truncations should be a useful starting point for nuclear systems

  15. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  16. Delimiting areas of endemism through kernel interpolation.

    Science.gov (United States)

    Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The new approach is based on estimating the overlap between the distributions of species through a kernel interpolation of the centroids of the species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE was shown to be effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.

  17. Delimiting areas of endemism through kernel interpolation.

    Directory of Open Access Journals (Sweden)

    Ubirajara Oliveira

    Full Text Available We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The new approach is based on estimating the overlap between the distributions of species through a kernel interpolation of the centroids of the species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE was shown to be effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.

  18. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically

  19. Replacement Value of Palm Kernel Meal for Maize on Carcass ...

    African Journals Online (AJOL)

    This study was conducted to evaluate the effect of replacing maize with palm kernel meal on nutrient composition, fatty acid profile and sensory qualities of the meat of turkeys fed the dietary treatments. Six dietary treatments were formulated using palm kernel meal to replace maize at 0, 20, 40, 60, 80 and 100 percent.

  20. Effect of Palm Kernel Cake Replacement and Enzyme ...

    African Journals Online (AJOL)

    A feeding trial which lasted for twelve weeks was conducted to study the performance of finisher pigs fed five different levels of palm kernel cake replacement for maize (0%, 40%, 40%, 60%, 60%) in a maize-palm kernel cake based ration with or without enzyme supplementation. It was a completely randomized design ...

  1. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  2. Nonlinear Forecasting With Many Predictors Using Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter; Groenen, Patrick J.F.; Heij, Christiaan

    This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predi...
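
    For reference, kernel ridge regression in its dual form amounts to solving (K + λI)α = y on the training Gram matrix K and predicting a new point with the kernel evaluations against the training set. A minimal numpy sketch (illustrative only; the RBF kernel and all hyperparameter values are arbitrary assumptions, not those of the paper):

```python
import numpy as np

def rbf_gram(X, Y, gamma=1.0):
    """RBF kernel Gram matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-3, gamma=1.0):
    """Dual coefficients: alpha = (K + lam*I)^{-1} y."""
    K = rbf_gram(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_gram(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, (100, 2))      # stand-ins for many predictors
y = np.sin(X[:, 0]) * np.cos(X[:, 1])     # nonlinear target
alpha = krr_fit(X, y)
pred = krr_predict(X, alpha, X)           # in-sample fit of the nonlinear map
```

    The kernel trick is what lets the high-dimensional nonlinear mapping of the predictors stay implicit: only Gram matrices are ever formed.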

  3. Commutators of Integral Operators with Variable Kernels on Hardy ...

    Indian Academy of Sciences (India)

    Commutators of Integral Operators with Variable Kernels on Hardy Spaces. Pu Zhang, Kai Zhao. Proceedings – Mathematical Sciences, Volume 115, Issue 4, November 2005, pp 399-410. Keywords: singular and fractional integrals; variable kernel; commutator; Hardy space.

  4. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. Now the discrete kernel estimation is known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Some simulations on a test function analysis and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.
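As a toy illustration of the general idea of discrete kernel smoothing (not the specific estimator of the paper), the following sketch smooths a discrete function on the integers with a hypothetical discrete triangular kernel; all names and the choice of kernel are illustrative:

```python
# Discrete triangular kernel of half-width h on the integers:
# K_x(u) proportional to max(h + 1 - |u - x|, 0), normalized to sum to 1.
def discrete_kernel_weights(x, support, h=2):
    w = [max(h + 1 - abs(u - x), 0) for u in support]
    s = sum(w)
    return [v / s for v in w]

def smooth(counts, h=2):
    # Kernel-weighted average of a discrete function at each support point.
    support = range(len(counts))
    out = []
    for x in support:
        w = discrete_kernel_weights(x, support, h)
        out.append(sum(wi * ci for wi, ci in zip(w, counts)))
    return out

counts = [0, 1, 8, 1, 0, 0]   # a noisy discrete function on {0, ..., 5}
sm = smooth(counts)
```

Unlike a continuous kernel, the weights live directly on the discrete support, so no continuity assumption is imposed on the underlying function.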

  5. Kernel Function Tuning for Single-Layer Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Vidnerová, Petra; Neruda, Roman

    -, accepted 28.11. 2017 (2018) ISSN 2278-0149 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords : single-layer neural networks * kernel methods * kernel function * optimisation Subject RIV: IN - Informatics, Computer Science http://www.ijmerr.com/

  6. Geodesic exponential kernels: When Curvature and Linearity Conflict

    DEFF Research Database (Denmark)

    Feragen, Aase; Lauze, François; Hauberg, Søren

    2015-01-01

    manifold, the geodesic Gaussian kernel is only positive definite if the Riemannian manifold is Euclidean. This implies that any attempt to design geodesic Gaussian kernels on curved Riemannian manifolds is futile. However, we show that for spaces with conditionally negative definite distances the geodesic...

  7. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-imag...

  8. Design and construction of palm kernel cracking and separation ...

    African Journals Online (AJOL)

    Design and construction of palm kernel cracking and separation machines. JO Nordiana, K ...

  9. Kernel Methods for Machine Learning with Life Science Applications

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie

    Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick straightforward extensions of classical linear algorithms are enabled as long as the data only appear a...

  10. Genetic relationship between plant growth, shoot and kernel sizes in ...

    African Journals Online (AJOL)

    Maize (Zea mays L.) ear vascular tissue transports nutrients that contribute to grain yield. To assess kernel heritabilities that govern ear development and plant growth, field studies were conducted to determine the combining abilities of parents that differed for kernel-size, grain-filling rates and shoot-size. Thirty two hybrids ...

  11. A relationship between Gel'fand-Levitan and Marchenko kernels

    International Nuclear Information System (INIS)

    Kirst, T.; Von Geramb, H.V.; Amos, K.A.

    1989-01-01

    An integral equation which relates the output kernels of the Gel'fand-Levitan and Marchenko inverse scattering equations is specified. Structural details of this integral equation are studied when the S-matrix is a rational function, and the output kernels are separable in terms of Bessel, Hankel and Jost solutions. 4 refs

  12. Boundary singularity of Poisson and harmonic Bergman kernels

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2015-01-01

    Roč. 429, č. 1 (2015), s. 233-272 ISSN 0022-247X R&D Projects: GA AV ČR IAA100190802 Institutional support: RVO:67985840 Keywords : harmonic Bergman kernel * Poisson kernel * pseudodifferential boundary operators Subject RIV: BA - General Mathematics Impact factor: 1.014, year: 2015 http://www.sciencedirect.com/science/article/pii/S0022247X15003170

  13. Oven-drying reduces ruminal starch degradation in maize kernels

    NARCIS (Netherlands)

    Ali, M.; Cone, J.W.; Hendriks, W.H.; Struik, P.C.

    2014-01-01

    The degradation of starch largely determines the feeding value of maize (Zea mays L.) for dairy cows. Normally, maize kernels are dried and ground before chemical analysis and determining degradation characteristics, whereas cows eat and digest fresh material. Drying the moist maize kernels

  14. Real time kernel performance monitoring with SystemTap

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    SystemTap is a dynamic method of monitoring and tracing the operation of a running Linux kernel. In this talk I will present a few practical use cases where SystemTap allowed me to turn otherwise complex userland monitoring tasks into simple kernel probes.

  15. Resolvent kernel for the Kohn Laplacian on Heisenberg groups

    Directory of Open Access Journals (Sweden)

    Neur Eddine Askour

    2002-07-01

    We present a formula that relates the Kohn Laplacian on Heisenberg groups and the magnetic Laplacian. Then we obtain the resolvent kernel for the Kohn Laplacian and find its spectral density. We conclude by obtaining the Green kernel for fractional powers of the Kohn Laplacian.

  16. Reproducing Kernels and Coherent States on Julia Sets

    Energy Technology Data Exchange (ETDEWEB)

    Thirulogasanthar, K., E-mail: santhar@cs.concordia.ca; Krzyzak, A. [Concordia University, Department of Computer Science and Software Engineering (Canada)], E-mail: krzyzak@cs.concordia.ca; Honnouvo, G. [Concordia University, Department of Mathematics and Statistics (Canada)], E-mail: g_honnouvo@yahoo.fr

    2007-11-15

    We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems.

  17. Reproducing Kernels and Coherent States on Julia Sets

    International Nuclear Information System (INIS)

    Thirulogasanthar, K.; Krzyzak, A.; Honnouvo, G.

    2007-01-01

    We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems

  18. A multi-scale kernel bundle for LDDMM

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard

    2011-01-01

    The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations...

  19. Comparison of Kernel Equating and Item Response Theory Equating Methods

    Science.gov (United States)

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  20. An analysis of 1-D smoothed particle hydrodynamics kernels

    International Nuclear Information System (INIS)

    Fulk, D.A.; Quinn, D.W.

    1996-01-01

    In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measure of merit demonstrating the general usefulness of the measure of merit and the individual kernels. In general, it was decided that bell-shaped kernels perform better than other shapes. 12 refs., 16 figs., 7 tabs
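A basic property that any SPH kernel analyzed as above must satisfy is unit normalization over its support. A quick sketch using the standard 1-D cubic spline (M4) kernel, checked numerically (the function name is illustrative):

```python
def cubic_spline_w(r, h):
    # Standard 1-D cubic spline (M4) SPH kernel with smoothing length h;
    # the factor 2/(3h) normalizes its integral over [-2h, 2h] to 1.
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

# Numerical check of unit normalization by a rectangle sum over the support.
h, n = 0.5, 2000
dx = 4 * h / n
xs = [-2 * h + i * dx for i in range(n + 1)]
integral = sum(cubic_spline_w(x, h) for x in xs) * dx
```

This bell-shaped kernel is one of the family the paper's measures of merit rank favorably for smooth data.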

  1. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    Science.gov (United States)

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  2. Computing an element in the lexicographic kernel of a game

    NARCIS (Netherlands)

    Faigle, U.; Kern, Walter; Kuipers, Jeroen

    The lexicographic kernel of a game lexicographically maximizes the surplusses $s_{ij}$ (rather than the excesses as would the nucleolus). We show that an element in the lexicographic kernel can be computed efficiently, provided we can efficiently compute the surplusses $s_{ij}(x)$ corresponding to a

  3. Computing an element in the lexicographic kernel of a game

    NARCIS (Netherlands)

    Faigle, U.; Kern, Walter; Kuipers, J.

    2002-01-01

    The lexicographic kernel of a game lexicographically maximizes the surplusses $s_{ij}$ (rather than the excesses as would the nucleolus). We show that an element in the lexicographic kernel can be computed efficiently, provided we can efficiently compute the surplusses $s_{ij}(x)$ corresponding to a

  4. 3-D waveform tomography sensitivity kernels for anisotropic media

    KAUST Repository

    Djebbi, Ramzi

    2014-01-01

    The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to efficiently handle the expensive computational cost for 3-D anisotropic models. Ray tracing also provides the ray-direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed a maximum sensitivity for diving waves, which results in a relevant choice of those parameters in wave equation tomography. The δ parameter kernel showed zero sensitivity; therefore it can serve as a secondary parameter to fit the amplitude in the acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration velocity analysis based kernels are introduced to fix the depth ambiguity with reflections and compute sensitivity maps in the deeper parts of the model.

  5. Anatomically-aided PET reconstruction using the kernel method.

    Science.gov (United States)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  6. Open Problem: Kernel methods on manifolds and metric spaces

    DEFF Research Database (Denmark)

    Feragen, Aasa; Hauberg, Søren

    2016-01-01

    Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong linear properties. This negative result hints that radial kernels are perhaps not suitable over geodesic metric spaces after all. Here, however, we present evidence that large intervals of bandwidths exist where geodesic exponential kernels have high probability of being positive definite over finite datasets, while still having significant predictive power. From this we formulate conjectures on the probability of a positive definite kernel matrix for a finite random sample, depending on the geometry of the data space and the spread of the sample.
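Positive definiteness over a finite sample, as discussed above, can be probed empirically: build the Gram matrix from pairwise distances and inspect its smallest eigenvalue. A minimal sketch for the Euclidean case, where the geodesic exponential kernel exp(-d/bandwidth) is known to be positive definite for all bandwidths (the helper name is illustrative):

```python
import numpy as np

def exp_kernel_from_dists(D, bandwidth):
    # Geodesic exponential kernel K_ij = exp(-d(x_i, x_j) / bandwidth),
    # built directly from a pairwise distance matrix D.
    return np.exp(-D / bandwidth)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
# Euclidean pairwise distances; for a curved space one would substitute
# geodesic distances here and repeat the same eigenvalue check.
D_euc = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
K = exp_kernel_from_dists(D_euc, bandwidth=1.0)
min_eig = float(np.linalg.eigvalsh(K).min())
is_psd = min_eig > -1e-8
```

Repeating this check across bandwidths and random samples from a curved space is exactly the kind of experiment the conjectures above concern.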

  7. Compactly Supported Basis Functions as Support Vector Kernels for Classification.

    Science.gov (United States)

    Wittek, Peter; Tan, Chew Lim

    2011-10-01

    Wavelet kernels have been introduced for both support vector regression and classification. Most of these wavelet kernels do not use the inner product of the embedding space, but use wavelets in a similar fashion to radial basis function kernels. Wavelet analysis is typically carried out on data with a temporal or spatial relation between consecutive data points. We argue that it is possible to order the features of a general data set so that consecutive features are statistically related to each other, thus enabling us to interpret the vector representation of an object as a series of equally or randomly spaced observations of a hypothetical continuous signal. By approximating the signal with compactly supported basis functions and employing the inner product of the embedding L2 space, we gain a new family of wavelet kernels. Empirical results show a clear advantage in favor of these kernels.

  8. Improved modeling of clinical data with kernel methods.

    Science.gov (United States)

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems.
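A sketch in the spirit of the range-normalized clinical kernel described above, for continuous variables only: per-variable similarity (r - |x - z|) / r, averaged over variables so each contributes on the same [0, 1] scale. This is our reading of the idea, not the paper's exact definition, and all patient values below are made up:

```python
def clinical_kernel(x, z, ranges):
    # Range-normalized similarity per continuous variable, then averaged,
    # so a variable measured in mmHg carries no more weight than one in years.
    sims = [(r - abs(a - b)) / r for a, b, r in zip(x, z, ranges)]
    return sum(sims) / len(sims)

# Two hypothetical patients: age (years), BMI, systolic blood pressure (mmHg).
ranges = [60.0, 25.0, 80.0]   # observed max - min per variable in the cohort
p1 = [45.0, 24.0, 130.0]
p2 = [50.0, 31.0, 120.0]
sim = clinical_kernel(p1, p2, ranges)
```

Identical patients get similarity 1, and a linear kernel's tendency to let the largest-range variable dominate is removed by construction.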

  9. A method for manufacturing kernels of metallic oxides and the thus obtained kernels

    International Nuclear Information System (INIS)

    Lelievre Bernard; Feugier, Andre.

    1973-01-01

    A method is described for manufacturing fissile or fertile metal oxide kernels, consisting of adding at least one chemical compound capable of releasing ammonia to an aqueous solution of actinide nitrates, dispersing the thus obtained solution dropwise in a hot organic phase so as to gelify the drops and transform them into solid particles, then washing, drying and treating said particles so as to transform them into oxide kernels. The method is characterized in that the organic phase used in the gel-forming reactions comprises a mixture of two organic liquids, one of which acts as a solvent, whereas the other is a product capable of extracting the metal-salt anions from the drops while the gel-forming reaction is taking place. This can be applied to the so-called high-temperature nuclear reactors [fr

  10. Learning molecular energies using localized graph kernels

    Science.gov (United States)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  11. Stochastic subset selection for learning with kernel machines.

    Science.gov (United States)

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales super linearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  12. Multiple kernel boosting framework based on information measure for classification

    International Nuclear Information System (INIS)

    Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun

    2016-01-01

    The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results compared to single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper, the Kullback–Leibler kernel function is derived to develop SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies Adaboost to learning multiple kernel-based classifiers. In the experiment for hyperspectral remote sensing image classification, we employ features selected through the Optimum Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has stable behavior and noticeable accuracy for different data sets.
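The core MKL idea of combining sub-kernels can be illustrated with a convex combination of two Gram matrices, with the weight chosen by kernel-target alignment; this is a generic MKL sketch, not the KLMKB algorithm, and all names are illustrative:

```python
import numpy as np

def alignment(K, y):
    # Kernel-target alignment: <K, y y^T>_F / (||K||_F * ||y y^T||_F).
    Y = np.outer(y, y)
    return float((K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y)))

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0])                        # labels depend on feature 0 only

K_lin = X @ X.T                             # linear sub-kernel
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-D2)                         # Gaussian sub-kernel

# Grid-search the convex-combination weight mu that maximizes alignment.
best_mu = max((mu / 10 for mu in range(11)),
              key=lambda mu: alignment(mu * K_lin + (1 - mu) * K_rbf, y))
K_best = best_mu * K_lin + (1 - best_mu) * K_rbf
```

Because the grid includes mu = 0 and mu = 1, the combined kernel's alignment is never worse than either sub-kernel alone; full MKL methods replace this grid search with joint optimization of weights and classifier.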

  13. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.

  14. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

    Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.

  15. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  16. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    Science.gov (United States)

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections and deep learning extensions as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKM are obtained by coupling the RKMs. The method is illustrated for deep RKM, consisting of three levels with a least squares support vector machine regression level and two kernel PCA levels. In its primal form also deep feedforward neural networks can be trained within this framework.

  17. Training Lp norm multiple kernel learning in the primal.

    Science.gov (United States)

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by utilizing the alternating optimization method, where one alternately solves SVMs in the dual and updates kernel weights. Since the dual and primal optimization can achieve the same aim, it is valuable to explore how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal where we resort to the alternating optimization method: one cycle for solving SVMs in the primal by using the preconditioned conjugate gradient method and the other cycle for learning the kernel weights. It is interesting to note that the kernel weights in our method can obtain analytical solutions. Most importantly, the proposed method is well suited for the manifold regularization framework in the primal since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we also carry out theoretical analysis for multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity may obtain a type of kernel weights. The experiments on some datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Gradient-based adaptation of general gaussian kernels.

    Science.gov (United States)

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard margin support vector machines on toy data.

  19. On weights which admit the reproducing kernel of Bergman type

    Directory of Open Access Journals (Sweden)

    Zbigniew Pasternak-Winiarski

    1992-01-01

    Full Text Available In this paper we consider (1) the weights of integration for which the reproducing kernel of Bergman type can be defined, i.e., the admissible weights, and (2) the kernels defined by such weights. It is verified that the weighted Bergman kernel has properties analogous to the classical one. We prove several sufficient conditions, as well as necessary and sufficient conditions, for a weight to be an admissible weight. We also give an example of a weight which is not of this class. As a positive example we consider the weight μ(z) = (Im z)² defined on the unit disk in ℂ.

  20. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generating global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance imaging (fMRI) data based on visual stimuli. We...

  1. Flour quality and kernel hardness connection in winter wheat

    Directory of Open Access Journals (Sweden)

    Szabó B. P.

    2016-12-01

    Full Text Available Kernel hardness is controlled by the friabilin protein and depends on the relation between the protein matrix and the starch granules. Friabilin is present in high concentration in soft grain varieties and in low concentration in hard grain varieties. High-gluten, hard wheat flour generally contains about 12.0–13.0% crude protein under Mid-European conditions. The relationship between wheat protein content and kernel texture is usually positive, and kernel texture influences the power consumption during milling: hard-textured wheat grains require more grinding energy than soft-textured grains.

  2. Deep kernel learning method for SAR image target recognition

    Science.gov (United States)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar (SAR) image target recognition problem by combining deep learning and kernel learning. The model, which has a multilayer multiple-kernel structure, is optimized layer by layer using the parameters of a support vector machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  3. Explicit signal to noise ratio in reproducing kernel Hilbert spaces

    DEFF Research Database (Denmark)

    Gomez-Chova, Luis; Nielsen, Allan Aasbjerg; Camps-Valls, Gustavo

    2011-01-01

    This paper introduces a nonlinear feature extraction method based on kernels for remote sensing data analysis. The proposed approach is based on the minimum noise fraction (MNF) transform, which maximizes the signal variance while also minimizing the estimated noise variance. We here propose an alternative kernel MNF (KMNF) in which the noise is explicitly estimated in the reproducing kernel Hilbert space. This enables KMNF to deal jointly with nonlinear relations between the noise and the signal features. Results show that the proposed KMNF provides the most noise-free features when confronted...

  4. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    Science.gov (United States)

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
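    For concreteness, the Epanechnikov alternative can be sketched as below. This is a generic kernel continuization of a discrete score distribution, not the full KE continuization with its moment-preservation terms:

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: 0.75 * (1 - u^2) on [-1, 1], zero outside."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def continuize(scores, probs, h, grid):
    """Smooth a discrete score distribution {scores, probs} onto grid
    points with bandwidth h; returns a continuous density estimate."""
    u = (grid[:, None] - scores[None, :]) / h
    return (epanechnikov(u) * probs[None, :] / h).sum(axis=1)
```

    Because the Epanechnikov kernel has compact support, probability mass cannot leak far past the minimum and maximum scores, which is the boundary-bias motivation discussed above.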

  5. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    Science.gov (United States)

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pairwise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses some power compared with the best kernel for a particular scenario, but has much greater power than poor choices.
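    The "choosing a test is choosing a kernel" point can be made concrete with a minimal sketch: a weighted linear kernel and its score-type statistic. This is illustrative only, not the MK-SKAT perturbation machinery itself:

```python
import numpy as np

def skat_kernel(G, weights):
    """Weighted linear kernel K = G diag(w) G': pairwise similarity of
    subjects' rare-variant genotypes (rows of G are subjects, columns
    are variants). Different weight vectors, and different variant
    groupings (column subsets), yield different rare-variant tests."""
    return (G * weights) @ G.T

def skat_statistic(y, G, weights):
    """Score-type statistic Q = r' K r, with r the centered trait."""
    r = y - y.mean()
    return float(r @ skat_kernel(G, weights) @ r)
```
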

  6. Efficient Online Subspace Learning With an Indefinite Kernel for Visual Tracking and Recognition

    NARCIS (Netherlands)

    Liwicki, Stephan; Zafeiriou, Stefanos; Tzimiropoulos, Georgios; Pantic, Maja

    2012-01-01

    We propose an exact framework for online learning with a family of indefinite (not positive) kernels. As we study the case of nonpositive kernels, we first show how to extend kernel principal component analysis (KPCA) from a reproducing kernel Hilbert space to Krein space. We then formulate an

  7. Calculation of the thermal neutron scattering kernel using the synthetic model. Pt. 2. Zero-order energy transfer kernel

    International Nuclear Information System (INIS)

    Drozdowicz, K.

    1995-01-01

    A comprehensive unified description of the application of Granada's Synthetic Model to slow-neutron scattering by molecular systems is continued. Detailed formulae for the zero-order energy transfer kernel are presented, based on the general formalism of the model. An explicit analytical formula for the total scattering cross section as a function of the incident neutron energy is also obtained. Expressions of the free gas model for the zero-order scattering kernel and for the total scattering kernel are considered as a sub-case of the Synthetic Model. (author). 10 refs

  8. A kernel adaptive algorithm for quaternion-valued inputs.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.
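    For readers unfamiliar with kernel LMS, the real-valued recursion that Quat-KLMS generalizes can be sketched as follows (a plain Gaussian-kernel KLMS; the quaternion version replaces the inputs, the kernel, and the gradient with their HR-calculus counterparts):

```python
import numpy as np

def klms(X, d, eta=0.5, gamma=1.0):
    """Real-valued kernel LMS with a growing dictionary: the filter is
    f(x) = sum_i alpha_i * k(c_i, x); each sample adds one center with
    coefficient eta * prediction_error. Returns the a priori errors."""
    centers, alphas, errors = [], [], []
    for x, y in zip(X, d):
        f = sum(a * np.exp(-gamma * np.sum((c - x) ** 2))
                for a, c in zip(alphas, centers))
        e = y - f
        centers.append(x)
        alphas.append(eta * e)
        errors.append(e)
    return np.array(errors)
```

    On a stationary nonlinear target the a priori error shrinks as the dictionary grows, which is the learning behavior the simulations above examine for quaternion data.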

  9. Bioconversion of palm kernel meal for aquaculture: Experiences ...

    African Journals Online (AJOL)


    2008-04-17


  10. The effect of apricot kernel flour incorporation on the ...

    African Journals Online (AJOL)


    2009-01-05

    Key words: noodle; apricot kernel flour; cooking; sensory properties.

  11. 3-D waveform tomography sensitivity kernels for anisotropic media

    KAUST Repository

    Djebbi, Ramzi; Alkhalifah, Tariq Ali

    2014-01-01

    The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate

  12. Kernel-based noise filtering of neutron detector signals

    International Nuclear Information System (INIS)

    Park, Moon Ghu; Shin, Ho Cheol; Lee, Eun Ki

    2007-01-01

    This paper describes recently developed techniques for the effective filtering of neutron detector signal noise. Three kinds of noise filters are proposed and their performance is demonstrated for the estimation of reactivity. The tested filters are based on the unilateral kernel filter, the unilateral kernel filter with adaptive bandwidth, and the bilateral filter, chosen to show their effectiveness in edge preservation. Filtering performance is compared with conventional low-pass and wavelet filters. The bilateral filter shows a remarkable improvement compared with the unilateral kernel and wavelet filters. The effectiveness and simplicity of the unilateral kernel filter with adaptive bandwidth are also demonstrated by applying it to the reactivity measurement performed during reactor start-up physics tests.
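    The edge-preserving mechanism of the bilateral filter can be sketched in one dimension (a generic textbook form, not the authors' reactivity-estimation pipeline): each output sample is a weighted mean whose weights fall off both with temporal distance and with difference in signal value, so averaging does not leak across sharp level changes.

```python
import numpy as np

def bilateral_filter_1d(x, sigma_t, sigma_r, half_width):
    """Edge-preserving bilateral filter for a 1-D signal: Gaussian
    weights in time (sigma_t) multiplied by Gaussian weights in
    signal value (sigma_r), normalized to sum to one."""
    y = np.empty_like(x, dtype=float)
    n = len(x)
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        t = np.arange(lo, hi)
        w = (np.exp(-0.5 * ((t - i) / sigma_t) ** 2)
             * np.exp(-0.5 * ((x[t] - x[i]) / sigma_r) ** 2))
        y[i] = np.sum(w * x[t]) / np.sum(w)
    return y
```

    A step in the signal (e.g. a reactivity change) survives filtering because samples on the far side of the step receive near-zero value weights.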

  13. Linear and kernel methods for multivariate change detection

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

    as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to the no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed...

  14. Resummed memory kernels in generalized system-bath master equations

    International Nuclear Information System (INIS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-01-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
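    As a minimal illustration of the resummation step (generic Padé algebra, not the spin-boson kernels themselves): a [1/1] Padé approximant built from the first three series coefficients turns a truncated expansion into a rational function, and the pole of that rational function is exactly the kind of singularity blamed above for the divergences.

```python
def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant matching the series c0 + c1*x + c2*x**2
    through second order; returns it as a callable. The coefficients
    follow from expanding (a0 + a1*x) / (1 + b1*x) and matching terms."""
    b1 = -c2 / c1
    a0, a1 = c0, c1 + c0 * b1
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)

# For the geometric series 1 + x + x^2 + ..., the [1/1] approximant
# recovers 1/(1 - x) exactly, including its pole at x = 1.
f = pade_1_1(1.0, 1.0, 1.0)
```
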

  15. On Improving Convergence Rates for Nonnegative Kernel Density Estimators

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1980-01-01

    To improve the rate of decrease of the integrated mean square error for nonparametric kernel density estimators beyond $O(n^{-4/5})$, we must relax the constraint that the density estimate be a bona fide density function, that is, be nonnegative and integrate to one. All current methods for kernel (and orthogonal series) estimators relax the nonnegativity constraint. In this paper we show how to achieve similar improvement by relaxing the integral constraint only. This is important in appl...

  16. Improved Variable Window Kernel Estimates of Probability Densities

    OpenAIRE

    Hall, Peter; Hu, Tien Chung; Marron, J. S.

    1995-01-01

    Variable window width kernel density estimators, with the width varying proportionally to the square root of the density, have been thought to have superior asymptotic properties. The rate of convergence has been claimed to be as good as those typical for higher-order kernels, which makes the variable width estimators more attractive because no adjustment is needed to handle the negativity usually entailed by the latter. However, in a recent paper, Terrell and Scott show that these results ca...
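    The square-root law referred to above can be sketched with an Abramson-style variable-bandwidth estimator built on a fixed-bandwidth pilot; the geometric-mean normalization below is one common convention, not necessarily the authors':

```python
import numpy as np

def gauss_kde(x, data, h):
    """Fixed-bandwidth Gaussian kernel density estimate at points x."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

def adaptive_kde(x, data, h0):
    """Variable-window KDE: the bandwidth at each data point is
    h0 * (pilot density)**-0.5, normalized by the geometric mean of
    the pilot densities (Abramson's square-root law)."""
    pilot = gauss_kde(data, data, h0)
    lam = np.sqrt(np.exp(np.mean(np.log(pilot))) / pilot)
    h = h0 * lam
    u = (x[:, None] - data[None, :]) / h[None, :]
    k = np.exp(-0.5 * u ** 2) / (h[None, :] * np.sqrt(2 * np.pi))
    return k.mean(axis=1)
```

    Each data point's kernel still integrates to one, so the estimate remains a nonnegative density; the debated question above is its asymptotic rate, not its validity.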

  17. Graphical analyses of connected-kernel scattering equations

    International Nuclear Information System (INIS)

    Picklesimer, A.

    1982-10-01

    Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The graphical method also leads to a new, simplified form for some members of the class and elucidates the general structural features of the entire class

  18. MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX

    International Nuclear Information System (INIS)

    Brooks, E.D. III

    1988-01-01

    1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every cpu instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel

  19. The Flux OSKit: A Substrate for Kernel and Language Research

    Science.gov (United States)

    1997-10-01

    Our own microkernel-based OS, Fluke [17], puts almost all of the OSKit to use... kernels distance the language from the hardware; even microkernels and other extensible kernels enforce some default policy which often conflicts with a... be particularly useful in these research projects. 6.1.1 The Fluke OS: In 1996 we developed an entirely new microkernel-based system called Fluke

  20. Salus: Kernel Support for Secure Process Compartments

    Directory of Open Access Journals (Sweden)

    Raoul Strackx

    2015-01-01

    Full Text Available Consumer devices are increasingly being used to perform security and privacy critical tasks. The software used to perform these tasks is often vulnerable to attacks, due to bugs in the application itself or in included software libraries. Recent work proposes the isolation of security-sensitive parts of applications into protected modules, each of which can be accessed only through a predefined public interface. But most parts of an application can be considered security-sensitive at some level, and an attacker who is able to gain in-application level access may be able to abuse services from protected modules. We propose Salus, a Linux kernel modification that provides a novel approach for partitioning processes into isolated compartments sharing the same address space. Salus significantly reduces the impact of insecure interfaces and vulnerable compartments by enabling compartments (1) to restrict the system calls they are allowed to perform, (2) to authenticate their callers and callees, and (3) to enforce that they can only be accessed via unforgeable references. We describe the design of Salus, report on a prototype implementation and evaluate it in terms of security and performance. We show that Salus provides a significant security improvement with a low performance overhead, without relying on any non-standard hardware support.

  1. Local Kernel for Brains Classification in Schizophrenia

    Science.gov (United States)

    Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.

    In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning-by-example method is introduced by combining local measurements with a nonlinear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale Invariant Feature Transform (SIFT). Matching is then obtained by introducing a local kernel in which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed on a set of 54 patients with schizophrenia and 54 normal controls, on which regions of interest (ROIs) have been manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising, since a successful classification rate of up to 75% has been obtained with this technique, and the performance improves up to 85% when the subjects are stratified by sex.
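    The "local kernel on unordered sets of features" idea can be sketched with a plain sum-match kernel; the weighted, discriminative variant described above builds on this basic form (names are illustrative):

```python
import numpy as np

def sum_match_kernel(A, B, gamma=1.0):
    """Sum-match kernel between two unordered sets of local descriptors
    (one descriptor per row): the mean of Gaussian local-kernel values
    over all cross-set pairs. Invariant to the ordering of rows."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return float(np.exp(-gamma * d2).mean())
```

    Because the kernel averages over all descriptor pairs, two brains are compared through their whole sets of SIFT-like features rather than through any fixed voxel correspondence.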

  2. KERNEL MAD ALGORITHM FOR RELATIVE RADIOMETRIC NORMALIZATION

    Directory of Open Access Journals (Sweden)

    Y. Bai

    2016-06-01

    Full Text Available The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and the KCCA versions of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization, as it describes well the nonlinear relationship between multi-temporal images. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
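    A small building block behind any kernel CCA implementation is double-centering of the Gram matrix; this is standard kernel-methods algebra, shown here for concreteness rather than taken from the paper:

```python
import numpy as np

def center_kernel(K):
    """Double-center a kernel (Gram) matrix: Kc = H K H with
    H = I - 11'/n, so that the implicit feature vectors have zero
    mean in feature space (the usual first step of kernel CCA)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H
```
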

  3. An Ensemble Approach to Building Mercer Kernels with Prior Information

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms, like different versions of EM, and numeric optimization methods, like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
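    The flavor of a data-adaptive Mercer kernel built from an ensemble of probabilistic models can be sketched as follows. This is a simplified membership-product construction in the spirit of mixture density kernels, not the AUTOBAYES-generated code:

```python
import numpy as np

def ensemble_membership_kernel(posteriors):
    """Kernel from an ensemble of probabilistic clusterings:
    K(x, y) = mean over models of sum_c P(c|x) * P(c|y).
    Each posteriors[m] is an (n_samples, n_clusters) matrix of cluster
    membership probabilities. Every summand P @ P.T is a Gram matrix,
    so K is symmetric positive semidefinite, i.e. a valid Mercer kernel."""
    return np.mean([P @ P.T for P in posteriors], axis=0)
```

    Two samples are similar under this kernel when the ensemble's models tend to place them in the same clusters, which is one way prior (e.g. physical) structure can enter the kernel.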

  4. A new discrete dipole kernel for quantitative susceptibility mapping.

    Science.gov (United States)

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation, both with synthetic phantoms and with in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI - a topic for future investigations. The proposed dipole kernel can be incorporated into existing QSM routines in a straightforward manner. Copyright © 2018 Elsevier Inc. All rights reserved.
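    A sketch of the continuous-versus-discrete distinction, under the assumption that the discrete variant replaces each squared continuous frequency by the corresponding discrete-Laplacian eigenvalue (an illustration of the discrete-operator idea; the paper's exact formulation may differ):

```python
import numpy as np

def dipole_kernel(shape, discrete=False):
    """Unit-spacing dipole kernel in k-space: D = 1/3 - kz^2 / |k|^2.
    With discrete=True, each k_i^2 is replaced by the discrete
    Laplacian eigenvalue 2 - 2*cos(2*pi*n_i/N_i), which agrees with
    (2*pi*f)^2 at low frequencies but differs near Nyquist."""
    ax = [np.fft.fftfreq(n) for n in shape]
    if discrete:
        k2 = [2.0 - 2.0 * np.cos(2.0 * np.pi * f) for f in ax]
    else:
        k2 = [(2.0 * np.pi * f) ** 2 for f in ax]
    kx2, ky2, kz2 = np.meshgrid(*k2, indexing="ij")
    denom = kx2 + ky2 + kz2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - kz2 / denom
    D[0, 0, 0] = 0.0  # k = 0 is undefined; set by convention
    return D
```

    The two kernels coincide at low spatial frequencies and differ increasingly toward the edge of k-space, which is where the aliasing errors discussed above arise.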

  5. Exploration of Shorea robusta (Sal) seeds, kernels and its oil

    Directory of Open Access Journals (Sweden)

    Shashi Kumar C.

    2016-12-01

    Full Text Available Physical, mechanical, and chemical properties of Shorea robusta seed with wing, seed without wing, and kernel were investigated in the present work. The physico-chemical composition of sal oil was also analyzed. The physico-mechanical properties and proximate composition of seed with wing, seed without wing, and kernel at three moisture contents of 9.50% (w.b.), 9.54% (w.b.), and 12.14% (w.b.), respectively, were studied. The results show that the moisture content of the kernel was the highest compared with seed with wing and seed without wing. The sphericity of the kernel was closer to that of a sphere compared with seed with wing and seed without wing. The hardness of the seed with wing (32.32 N/mm) and seed without wing (42.49 N/mm) was lower than that of the kernels (72.14 N/mm). The proximate composition, namely moisture, protein, carbohydrate, oil, crude fiber, and ash content, was also determined. The kernel (30.20%, w/w) contains a higher oil percentage than seed with wing and seed without wing. The scientific data from this work are important for the design of equipment and processes for post-harvest value addition of sal seeds.

  6. Omnibus risk assessment via accelerated failure time kernel machine modeling.

    Science.gov (United States)

    Sinnott, Jennifer A; Cai, Tianxi

    2013-12-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.
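    A toy version of the kernel-choice problem and a resampling calibration can be sketched as follows: plain permutation p-values of a score-type statistic per candidate kernel. The Omnibus Test described above additionally calibrates the minimum of these p-values by resampling, which this sketch omits:

```python
import numpy as np

def perm_pvalues(y, kernels, n_perm=500, seed=0):
    """Permutation p-value of the score-type statistic Q = r' K r for
    each candidate kernel, with r the centered outcome (a stand-in for
    the model residuals in a regression setting)."""
    rng = np.random.default_rng(seed)
    r = y - y.mean()
    obs = np.array([r @ K @ r for K in kernels])
    null = np.empty((n_perm, len(kernels)))
    for b in range(n_perm):
        rp = rng.permutation(r)
        null[b] = [rp @ K @ rp for K in kernels]
    return ((null >= obs).sum(axis=0) + 1.0) / (n_perm + 1.0)
```

    Because all kernels share the same permutations, the minimum p-value across kernels could itself be compared against its permutation distribution, which is the essence of an omnibus combination.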

  7. Ideal Gas Resonance Scattering Kernel Routine for the NJOY Code

    International Nuclear Information System (INIS)

    Rothenstein, W.

    1999-01-01

    In a recent publication an expression for the temperature-dependent double-differential ideal gas scattering kernel is derived for the case of scattering cross sections that are energy dependent. Some tabulations and graphical representations of the characteristics of these kernels are presented in Ref. 2. They demonstrate the increased probability that neutron scattering by a heavy nuclide near one of its pronounced resonances will bring the neutron energy nearer to the resonance peak. This enhances upscattering when a neutron with energy just below that of the resonance peak collides with such a nuclide. A routine for using the new kernel has now been introduced into the NJOY code. Here, its principal features are described, followed by comparisons between scattering data obtained by the new kernel and by the standard ideal gas kernel, when such comparisons are meaningful (i.e., for constant values of the scattering cross section at 0 K). The new ideal gas kernel for a variable σ_s^0(E) at 0 K leads to the correct Doppler-broadened σ_s^T(E) at temperature T

  8. Proteome analysis of the almond kernel (Prunus dulcis).

    Science.gov (United States)

    Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu

    2016-08-01

    Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the nutritional importance and function of almond kernel proteins in human health require further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, most of them experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that they are involved in primary biological processes including metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); the main molecular functions of almond kernel proteins are catalytic activity (48.0%), binding (45.4%), and structural molecule activity (11.9%); and the proteins are primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). The almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.

  9. Evaluating the Application of Tissue-Specific Dose Kernels Instead of Water Dose Kernels in Internal Dosimetry: A Monte Carlo Study

    NARCIS (Netherlands)

    Moghadam, Maryam Khazaee; Asl, Alireza Kamali; Geramifar, Parham; Zaidi, Habib

    2016-01-01

    Purpose: The aim of this work is to evaluate the application of tissue-specific dose kernels instead of water dose kernels to improve the accuracy of patient-specific dosimetry by taking tissue heterogeneities into consideration. Materials and Methods: Tissue-specific dose point kernels (DPKs) and

  10. Scientific opinion on the acute health risks related to the presence of cyanogenic glycosides in raw apricot kernels and products derived from raw apricot kernels

    DEFF Research Database (Denmark)

    Petersen, Annette

    of kernels promoted (10 and 60 kernels/day for the general population and cancer patients, respectively), exposures exceeded the ARfD 17–413 and 3–71 times in toddlers and adults, respectively. The estimated maximum quantity of apricot kernels (or raw apricot material) that can be consumed without exceeding...

  11. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  12. Protein fold recognition using geometric kernel data fusion.

    Science.gov (United States)

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
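    The geometry-inspired means of kernel matrices mentioned above can be illustrated with the matrix geometric mean of two Gram matrices. This is a minimal numpy sketch under the standard symmetric-positive-definite (SPD) definition, not the authors' MATLAB code; the toy feature matrices are invented for illustration:

    ```python
    import numpy as np

    def spd_power(M, p):
        """Raise a symmetric positive-definite matrix to a real power via eigendecomposition."""
        w, V = np.linalg.eigh(M)
        w = np.clip(w, 1e-12, None)  # guard against tiny negative eigenvalues from round-off
        return (V * w**p) @ V.T

    def geometric_mean_kernel(K1, K2):
        """Matrix geometric mean K1 # K2 = K1^(1/2) (K1^(-1/2) K2 K1^(-1/2))^(1/2) K1^(1/2)."""
        R, Rinv = spd_power(K1, 0.5), spd_power(K1, -0.5)
        return R @ spd_power(Rinv @ K2 @ Rinv, 0.5) @ R

    # Toy Gram matrices from two hypothetical feature views of the same four samples
    rng = np.random.default_rng(0)
    X1, X2 = rng.normal(size=(4, 3)), rng.normal(size=(4, 5))
    K1 = X1 @ X1.T + 1e-3 * np.eye(4)  # small ridge keeps the matrices positive definite
    K2 = X2 @ X2.T + 1e-3 * np.eye(4)
    Kg = geometric_mean_kernel(K1, K2)
    ```

    Unlike a convex combination, the geometric mean treats the two kernels multiplicatively; for commuting matrices it reduces to the entrywise square root of their product.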

  13. Generalized synthetic kernel approximation for elastic moderation of fast neutrons

    International Nuclear Information System (INIS)

    Yamamoto, Koji; Sekiya, Tamotsu; Yamamura, Yasunori.

    1975-01-01

    A method of synthetic kernel approximation is examined in some detail with a view to simplifying the treatment of the elastic moderation of fast neutrons. A sequence of unified kernels (f_N) is introduced, which is then divided into two subsequences (W_n) and (G_n) according to whether N is odd (W_n = f_{2n-1}, n = 1, 2, ...) or even (G_n = f_{2n}, n = 0, 1, ...). The W_1 and G_1 kernels correspond to the usual Wigner and GG kernels, respectively, and the W_n and G_n kernels for n ≥ 2 represent generalizations thereof. It is shown that the W_n kernel solution with a relatively small n (≥ 2) is superior on the whole to the G_n kernel solution for the same index n, while both converge to the exact values with increasing n. To evaluate the collision density numerically and rapidly, a simple recurrence formula is derived. In the asymptotic region (except near resonances), this recurrence formula allows calculation with a relatively coarse mesh width whenever h_a ≤ 0.05 at least. For calculations in the transient lethargy region, a mesh width of order ε/10 is small enough to evaluate the approximate collision density ψ_N with an accuracy comparable to that obtained analytically. It is shown that, with the present method, an order of approximation of about n = 7 should yield a practically correct solution deviating not more than 1% in collision density. (auth.)

  14. Unsupervised multiple kernel learning for heterogeneous data integration.

    Science.gov (United States)

    Mariette, Jérôme; Villa-Vialaneix, Nathalie

    2018-03-15

    Recent high-throughput sequencing advances have expanded the breadth of available omics datasets and the integrated analysis of multiple datasets obtained on the same samples has allowed to gain important insights in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology since produced datasets are often of heterogeneous types, with the need of developing generic methods to take their different specificities into account. We propose a multiple kernel framework that allows to integrate multiple datasets of various types into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets, collected during the TARA Oceans expedition, was explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA in regards with the original data. Second, the multi-omics breast cancer datasets, provided by The Cancer Genome Atlas, is analysed using a kernel Self-Organizing Maps with both single and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method to improve the representation of the studied biological system. Proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package and a tutorial describing the approach can be found on mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
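    A minimal sketch of the consensus meta-kernel idea described above: average trace-normalised Gram matrices from several data views and feed the result to a kernel PCA. This assumes a naive unweighted mean rather than the optimised combination learned by mixKernel, and the two "omics views" are invented toy data:

    ```python
    import numpy as np

    def center_kernel(K):
        """Double-centre a Gram matrix, the usual kernel-PCA preprocessing step."""
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        return H @ K @ H

    def consensus_kernel(kernels):
        """Naive consensus meta-kernel: the mean of trace-normalised Gram matrices."""
        return sum(K / np.trace(K) for K in kernels) / len(kernels)

    def kernel_pca(K, n_components=2):
        """Embed samples using the leading eigenvectors of the centred kernel."""
        w, V = np.linalg.eigh(center_kernel(K))
        idx = np.argsort(w)[::-1][:n_components]
        return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

    # Two hypothetical omics views of the same six samples
    rng = np.random.default_rng(1)
    X1, X2 = rng.normal(size=(6, 4)), rng.normal(size=(6, 10))
    embedding = kernel_pca(consensus_kernel([X1 @ X1.T, X2 @ X2.T]))
    ```

    Trace normalisation keeps one view from dominating the mean simply because its kernel values are on a larger scale.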

  15. Calculation of electron and isotopes dose point kernels with FLUKA Monte Carlo code for dosimetry in nuclear medicine therapy.

    Science.gov (United States)

    Botta, F; Mairani, A; Battistoni, G; Cremonesi, M; Di Dia, A; Fassò, A; Ferrari, A; Ferrari, M; Paganelli, G; Pedroli, G; Valente, M

    2011-07-01

    The calculation of patient-specific dose distribution can be achieved by Monte Carlo simulations or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by means of calculation of a representative parameter and comparison with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often chosen for this purpose. FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and deposited energy has been tallied in concentric shells. FLUKA outcomes have been compared to PENELOPE v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, comparison with data from the literature (ETRAN, GEANT4, MCNPX) has been done. Maximum percentage differences within 0.8·R_CSDA and 0.9·R_CSDA for monoenergetic electrons (R_CSDA being the continuous slowing down approximation range) and within 0.8·X90 and 0.9·X90 for isotopes (X90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·R_CSDA and 0.9·X90 for electrons and isotopes, respectively. Concerning monoenergetic electrons, within 0.8·R_CSDA (where 90%-97% of the particle energy is deposited), FLUKA and PENELOPE agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The
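    The concentric-shell tally described above can be sketched as follows. This is a toy stand-in for a Monte Carlo DPK scorer, not FLUKA or PENELOPE code: given sampled deposition radii and energies around a point source, it bins the energy into spherical shells and divides by shell volume:

    ```python
    import numpy as np

    def shell_tally(radii, energies, r_max, n_shells):
        """Score deposited energies into concentric spherical shells around a point source,
        returning energy per unit volume in each shell (a toy dose-point-kernel tally)."""
        edges = np.linspace(0.0, r_max, n_shells + 1)
        dose = np.zeros(n_shells)
        shell = np.searchsorted(edges, radii, side="right") - 1
        for i, e in zip(shell, energies):
            if 0 <= i < n_shells:
                dose[i] += e  # events beyond r_max are simply dropped
        volumes = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
        return dose / volumes
    ```

    Dividing the scored energy by each shell's volume is what turns the raw tally into the radial dose profile that is then compared between codes.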

  16. Collision kernels in the eikonal approximation for Lennard-Jones interaction potential

    International Nuclear Information System (INIS)

    Zielinska, S.

    1985-03-01

    Velocity-changing collisions are conveniently described by collision kernels. These kernels depend on an interaction potential, and there is a need to evaluate them for realistic interatomic potentials. Using the collision kernels, we are able to investigate the redistribution of atomic populations caused by laser light and velocity-changing collisions. In this paper we present a method of evaluating the collision kernels in the eikonal approximation. We discuss the influence of the potential parameters R_o^i and ε_o^i on the kernel width for a given atomic state. It turns out that, unlike the collision kernel for the hard-sphere model of scattering, the Lennard-Jones kernel is not as sensitive to changes of R_o^i. Contrary to the general tendency of approximating collision kernels by a Gaussian curve, kernels for the Lennard-Jones potential do not exhibit such behaviour. (author)

  17. Bivariate discrete beta Kernel graduation of mortality data.

    Science.gov (United States)

    Mazza, Angelo; Punzo, Antonio

    2015-07-01

    Various parametric/nonparametric techniques have been proposed in literature to graduate mortality data as a function of age. Nonparametric approaches, as for example kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidths selection. Using simulations studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make simulations realistic, a bivariate dataset, based on probabilities of dying recorded for the US males, is used. Simulations have confirmed the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.
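    A univariate discrete beta kernel smoother of the kind generalised above can be sketched as follows. The shape-parameter choice (a = x/(mh) + 1, b = (m - x)/(mh) + 1, so the weights peak near age x) is one common parameterisation assumed here for illustration, not necessarily the authors' exact formulation:

    ```python
    import numpy as np

    def discrete_beta_weights(x, m, h):
        """Beta-shaped weights on the age grid {0, ..., m}, peaked near age x.
        Assumed parameterisation: a = x/(m*h) + 1, b = (m - x)/(m*h) + 1."""
        p = (np.arange(m + 1) + 0.5) / (m + 1)  # map grid points into (0, 1)
        a, b = x / (m * h) + 1.0, (m - x) / (m * h) + 1.0
        logw = (a - 1) * np.log(p) + (b - 1) * np.log1p(-p)
        w = np.exp(logw - logw.max())  # stabilised, unnormalised beta density
        return w / w.sum()

    def smooth_rates(rates, h=0.1):
        """Graduate raw mortality rates by a weighted average with discrete beta weights."""
        m = len(rates) - 1
        return np.array([discrete_beta_weights(x, m, h) @ rates for x in range(m + 1)])
    ```

    Because the beta support matches the bounded age range exactly, the smoother avoids the boundary bias that a symmetric Gaussian kernel would introduce at the youngest and oldest ages.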

  18. Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.

    Science.gov (United States)

    Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao

    2017-06-21

    In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space, where a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes, and restricting the coefficient vectors to be transformed into a feature space, where the features are highly correlated within a class and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature does not need to be known, while the inner product of the discriminative feature with the kernel matrix embedded is available, and is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.

  19. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide sensitivity analyses in literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
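    The mixed kernel function itself is straightforward to sketch: a convex combination of a polynomial kernel (global trend) and a Gaussian RBF kernel (local detail) remains positive semi-definite and hence is a valid SVR kernel. The parameter values below are illustrative assumptions, not the paper's settings:

    ```python
    import numpy as np

    def mixed_kernel(X, Y, lam=0.5, degree=3, gamma=1.0):
        """Convex mixture of a polynomial kernel and an RBF kernel.
        A convex combination of PSD kernels is itself PSD, so this is a valid SVR kernel."""
        K_poly = (X @ Y.T + 1.0) ** degree
        sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
        K_rbf = np.exp(-gamma * np.maximum(sq, 0.0))
        return lam * K_poly + (1.0 - lam) * K_rbf
    ```

    A callable of this form can be passed to any SVR implementation that accepts a user-supplied Gram matrix or kernel callback; λ then trades off the global polynomial behaviour against the local RBF behaviour.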

  20. On flame kernel formation and propagation in premixed gases

    Energy Technology Data Exchange (ETDEWEB)

    Eisazadeh-Far, Kian; Metghalchi, Hameed [Northeastern University, Mechanical and Industrial Engineering Department, Boston, MA 02115 (United States); Parsinejad, Farzan [Chevron Oronite Company LLC, Richmond, CA 94801 (United States); Keck, James C. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2010-12-15

    Flame kernel formation and propagation in premixed gases have been studied experimentally and theoretically. The experiments have been carried out at constant pressure and temperature in a constant volume vessel located in a high speed shadowgraph system. The formation and propagation of the hot plasma kernel has been simulated for inert gas mixtures using a thermodynamic model. The effects of various parameters including the discharge energy, radiation losses, initial temperature and initial volume of the plasma have been studied in detail. The experiments have been extended to flame kernel formation and propagation of methane/air mixtures. The effect of energy terms including spark energy, chemical energy and energy losses on flame kernel formation and propagation has been investigated. The inputs for this model are the initial conditions of the mixture and experimental data for flame radii. It is concluded that these are the most important parameters affecting plasma kernel growth. The results of laminar burning speeds have been compared with previously published results and are in good agreement. (author)

  1. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    Science.gov (United States)

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970

  2. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    Science.gov (United States)

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.
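    Computing distances in the feature space induced by the kernel function, as advocated above, needs no explicit feature map: squared distances follow directly from Gram-matrix entries. A minimal sketch of that identity:

    ```python
    import numpy as np

    def kernel_feature_distances(K):
        """Pairwise squared distances in the kernel-induced feature space:
        d^2(i, j) = K[i, i] + K[j, j] - 2 K[i, j]."""
        d = np.diag(K)
        return d[:, None] + d[None, :] - 2.0 * K
    ```

    With a linear kernel this reproduces ordinary Euclidean distances; with a non-linear kernel it yields the distances used to define neighbourhoods of unlabelled points directly in feature space.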

  3. Kernel Methods for Mining Instance Data in Ontologies

    Science.gov (United States)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and meta data available on the Web is constantly growing. The successful application of machine learning techniques for learning of ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology which can be flexibly assembled and tuned. Initial experiments on real world Semantic Web data yield promising results and show the usefulness of our approach.

  4. Semisupervised kernel marginal Fisher analysis for face recognition.

    Science.gov (United States)

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high-dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparameter kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.

  5. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric...... evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation...... for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution....

  6. Heat Kernel Asymptotics of Zaremba Boundary Value Problem

    Energy Technology Data Exchange (ETDEWEB)

    Avramidi, Ivan G. [Department of Mathematics, New Mexico Institute of Mining and Technology (United States)], E-mail: iavramid@nmt.edu

    2004-03-15

    The Zaremba boundary-value problem is a boundary value problem for Laplace-type second-order partial differential operators acting on smooth sections of a vector bundle over a smooth compact Riemannian manifold with smooth boundary but with discontinuous boundary conditions, which include Dirichlet boundary conditions on one part of the boundary and Neumann boundary conditions on another part of the boundary. We study the heat kernel asymptotics of Zaremba boundary value problem. The construction of the asymptotic solution of the heat equation is described in detail and the heat kernel is computed explicitly in the leading approximation. Some of the first nontrivial coefficients of the heat kernel asymptotic expansion are computed explicitly.

  7. Weighted Feature Gaussian Kernel SVM for Emotion Recognition.

    Science.gov (United States)

    Wei, Wei; Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method, utilizing subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight of each. Then, we obtain a weighted feature Gaussian kernel function and construct a classifier based on Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function achieves a good recognition rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method has achieved encouraging recognition results compared to the state-of-the-art methods.
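    The weighting idea above amounts to scaling each feature's contribution to the Gaussian kernel's squared distance by its subregion recognition rate. A minimal sketch (the weight values here are hypothetical placeholders for measured recognition rates):

    ```python
    import numpy as np

    def weighted_gaussian_kernel(x, y, weights, sigma=1.0):
        """Gaussian kernel whose squared distance weights each feature,
        e.g. by the recognition rate of the subregion it was extracted from."""
        d2 = np.sum(weights * (x - y) ** 2)
        return np.exp(-d2 / (2.0 * sigma**2))
    ```

    Features from discriminative subregions (high weight) then dominate the similarity, while weakly discriminative subregions contribute little.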

  8. Rational kernels for Arabic Root Extraction and Text Classification

    Directory of Open Access Journals (Sweden)

    Attia Nehar

    2016-04-01

    Full Text Available In this paper, we address the problems of Arabic Text Classification and root extraction using transducers and rational kernels. We introduce a new root extraction approach on the basis of the use of Arabic patterns (Pattern Based Stemmer). Transducers are used to model these patterns and root extraction is done without relying on any dictionary. Using transducers for extracting roots, documents are transformed into finite state transducers. This document representation allows us to use and explore rational kernels as a framework for Arabic Text Classification. Root extraction experiments are conducted on three word collections and yield 75.6% accuracy. Classification experiments are done on the Saudi Press Agency dataset and N-gram kernels are tested with different values of N. Accuracy and F1 report 90.79% and 62.93% respectively. These results show that our approach, when compared with other approaches, is promising, especially in terms of accuracy and F1.
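    The count-based n-gram kernel is the simplest member of the rational-kernel family tested above: it is the inner product of the two strings' n-gram count vectors. A minimal sketch over plain strings (the full rational-kernel machinery operates on weighted transducers, which this deliberately omits):

    ```python
    from collections import Counter

    def ngram_counts(text, n):
        """Multiset of character n-grams of a string."""
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    def ngram_kernel(s, t, n=3):
        """Count-based n-gram kernel: inner product of n-gram count vectors,
        the simplest member of the rational-kernel family."""
        cs, ct = ngram_counts(s, n), ngram_counts(t, n)
        return sum(c * ct[g] for g, c in cs.items())
    ```

    For example, with n = 2 the strings "abcab" and "abc" share the bigram "ab" (twice vs. once) and "bc" (once each), so the kernel value is 2·1 + 1·1 = 3.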

  9. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  10. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    Science.gov (United States)

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work intended to study the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels forming treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernels ratio. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05), and hardness was negatively correlated to the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  11. Measurement of Weight of Kernels in a Simulated Cylindrical Fuel Compact for HTGR

    International Nuclear Information System (INIS)

    Kim, Woong Ki; Lee, Young Woo; Kim, Young Min; Kim, Yeon Ku; Eom, Sung Ho; Jeong, Kyung Chai; Cho, Moon Sung; Cho, Hyo Jin; Kim, Joo Hee

    2011-01-01

    The TRISO-coated fuel particle for the high temperature gas-cooled reactor (HTGR) is composed of a nuclear fuel kernel and outer coating layers. The coated particles are mixed with a graphite matrix to make the HTGR fuel element. The weight of fuel kernels in an element is generally measured by chemical analysis or with a gamma-ray spectrometer. Although it is accurate to measure the weight of kernels by chemical analysis, the samples used in the analysis cannot be returned to the fabrication process. Furthermore, radioactive wastes are generated during the inspection procedure. The gamma-ray spectrometer requires an elaborate reference sample to reduce measurement errors induced by the difference in geometric shape between the test sample and the reference sample. X-ray computed tomography (CT) is an alternative for measuring the weight of kernels in a compact nondestructively. In this study, X-ray CT is applied to measure the weight of kernels in a cylindrical compact containing simulated TRISO-coated particles with ZrO2 kernels. The volume of kernels, as well as the number of kernels in the simulated compact, is measured from the 3-D density information. The weight of kernels was then calculated from either the volume of kernels or the number of kernels. The weight of kernels was also measured by extracting the kernels from a compact to verify the result of the X-ray CT application.
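
    The weight computation described above reduces to simple geometry once the CT reconstruction yields either the total kernel volume or the kernel count. A sketch with assumed values (the ZrO2 density is an approximate handbook figure and the kernel radius is hypothetical, not taken from the study):

    ```python
    import math

    ZRO2_DENSITY = 5.68     # g/cm^3, approximate bulk density of ZrO2 (assumed)
    KERNEL_RADIUS = 0.025   # cm, i.e. a 500 um kernel diameter (hypothetical)

    def weight_from_volume(total_kernel_volume_cm3, density=ZRO2_DENSITY):
        # weight = CT-measured total kernel volume x material density
        return total_kernel_volume_cm3 * density

    def weight_from_count(n_kernels, radius=KERNEL_RADIUS, density=ZRO2_DENSITY):
        # weight = CT-counted kernels x volume of one spherical kernel x density
        volume_per_kernel = (4.0 / 3.0) * math.pi * radius ** 3
        return n_kernels * volume_per_kernel * density

    print(weight_from_count(1000))  # total weight in grams for 1000 kernels
    ```

    Either route gives the same answer when the kernels are uniform spheres; the volume route is more robust when kernel sizes vary.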

  12. Theoretical developments for interpreting kernel spectral clustering from alternative viewpoints

    Directory of Open Access Journals (Sweden)

    Diego Peluffo-Ordóñez

    2017-08-01

    Full Text Available To perform an exploration process over complex structured data within unsupervised settings, the so-called kernel spectral clustering (KSC is one of the most recommended and appealing approaches, given its versatility and elegant formulation. In this work, we explore the relationship between KSC and other well-known approaches, namely normalized cut clustering and kernel k-means. To do so, we first deduce a generic KSC model from a primal-dual formulation based on least-squares support-vector machines (LS-SVM. For experiments, KSC as well as the other considered methods are assessed on image segmentation tasks to prove their usability.

  13. Modelling microwave heating of discrete samples of oil palm kernels

    International Nuclear Information System (INIS)

    Law, M.C.; Liew, E.L.; Chang, S.L.; Chan, Y.S.; Leo, C.P.

    2016-01-01

    Highlights: • Microwave (MW) drying of oil palm kernels is experimentally determined and modelled. • MW heating of discrete samples of oil palm kernels (OPKs) is simulated. • OPK heating is due to contact effects, MW interference and heat transfer mechanisms. • Electric field vectors circulate within the OPK sample. • A loosely-packed arrangement improves the temperature uniformity of OPKs. - Abstract: Recently, microwave (MW) pre-treatment of fresh palm fruits has been shown to be environmentally friendly compared to the existing oil palm milling process, as it eliminates the condensate production of palm oil mill effluent (POME) in the sterilization process. Moreover, MW-treated oil palm fruits (OPF) also possess better oil quality. In this work, the MW drying kinetics of oil palm kernels (OPKs) were determined experimentally. Microwave heating/drying of oil palm kernels was modelled and validated. The simulation results show that the temperature of an OPK is not uniform over its entire surface, due to constructive and destructive interference of the MW irradiance. The volume-averaged temperature of an OPK is higher than its surface temperature by 3–7 °C, depending on the MW input power. This implies that a point measurement of temperature is inadequate to determine the temperature history of the OPK during the microwave heating process. The simulation results also show that the arrangement of OPKs in a MW cavity affects the kernel temperature profile. The heating of OPKs was found to be affected by factors such as local electric field intensity due to MW absorption, refraction, interference, the contact effect between kernels, and also heat transfer mechanisms. The thermal gradient patterns of OPKs change as heating continues. The cracking of OPKs is expected to occur first in the core of the kernel and then propagate to the kernel surface. The model indicates that drying of OPKs is a much slower process compared to MW heating. The model is useful

  14. Graphical analyses of connected-kernel scattering equations

    International Nuclear Information System (INIS)

    Picklesimer, A.

    1983-01-01

    Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The basic result is the application of graphical methods to the derivation of interaction-set equations. This yields a new, simplified form for some members of the class and elucidates the general structural features of the entire class

  15. Reproducing Kernel Method for Solving Nonlinear Differential-Difference Equations

    Directory of Open Access Journals (Sweden)

    Reza Mokhtari

    2012-01-01

    Full Text Available On the basis of reproducing kernel Hilbert space theory, an iterative algorithm for solving some nonlinear differential-difference equations (NDDEs) is presented. The analytical solution is expressed in series form in a reproducing kernel space, and the approximate solution is constructed by truncating the series to finitely many terms. The convergence of the approximate solution to the analytical solution is also proved. Results obtained by the proposed method imply that it can be considered a simple and accurate method for solving such differential-difference problems.

  16. Kernel and divergence techniques in high energy physics separations

    Science.gov (United States)

    Bouř, Petr; Kůs, Václav; Franc, Jiří

    2017-10-01

    Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We present the great potential of adaptive kernel density estimation as the nested separation method of a supervised binary divergence decision tree. We also provide a proof of an alternative computing approach for kernel estimates utilizing the Fourier transform. Further, we apply our method to a Monte Carlo data set from the particle accelerator Tevatron at the DØ experiment at Fermilab and provide final top-antitop signal separation results. We have achieved up to 82% AUC while using the restricted feature selection entering the signal separation procedure.
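
    As a point of reference, the fixed-bandwidth Gaussian kernel density estimate that the adaptive variant generalizes (by letting the bandwidth vary per sample) can be sketched as follows; the bandwidth value is illustrative:

    ```python
    import math

    def gaussian_kde(samples, x, bandwidth=0.5):
        # f_hat(x) = 1/(n*h) * sum_i K((x - x_i)/h), K = standard normal density
        n = len(samples)
        norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
        return norm * sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
                          for xi in samples)

    samples = [-1.0, 0.0, 1.0]
    # Density is highest near the middle of this symmetric sample
    values = [gaussian_kde(samples, x) for x in (-2.0, 0.0, 2.0)]
    ```

    An adaptive estimator replaces the constant `bandwidth` with a per-sample h_i, widening the kernel in sparse regions and narrowing it in dense ones.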

  17. Rebootless Linux Kernel Patching with Ksplice Uptrack at BNL

    International Nuclear Information System (INIS)

    Hollowell, Christopher; Pryor, James; Smith, Jason

    2012-01-01

    Ksplice/Oracle Uptrack is a software tool and update subscription service which allows system administrators to apply security and bug fix patches to the Linux kernel running on servers/workstations without rebooting them. The RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has deployed Uptrack on nearly 2,000 hosts running Scientific Linux and Red Hat Enterprise Linux. The use of this software has minimized downtime, and increased our security posture. In this paper, we provide an overview of Ksplice's rebootless kernel patch creation/insertion mechanism, and our experiences with Uptrack.

  18. Employment of kernel methods on wind turbine power performance assessment

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Sweeney, Christian Walsted; Marhadi, Kun S.

    2015-01-01

    A power performance assessment technique is developed for the detection of power production discrepancies in wind turbines. The method employs a widely used nonparametric pattern recognition technique, the kernel methods. The evaluation is based on the trending of an extracted feature from...... the kernel matrix, called similarity index, which is introduced by the authors for the first time. The operation of the turbine and consequently the computation of the similarity indexes is classified into five power bins offering better resolution and thus more consistent root cause analysis. The accurate...

  19. Sparse kernel orthonormalized PLS for feature extraction in large datasets

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai

    2006-01-01

    In this paper we are presenting a novel multivariate analysis method for large scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constrains in the solution to improve scalability. The algorithm...... is tested on a benchmark of UCI data sets, and on the analysis of integrated short-time music features for genre prediction. The upshot is that the method has strong expressive power even with rather few features, is clearly outperforming the ordinary kernel PLS, and therefore is an appealing method...

  20. Supervised Kernel Optimized Locality Preserving Projection with Its Application to Face Recognition and Palm Biometrics

    Directory of Open Access Journals (Sweden)

    Chuang Lin

    2015-01-01

    Full Text Available Kernel Locality Preserving Projection (KLPP algorithms can effectively preserve the neighborhood structure of the database using the kernel trick. It is known that supervised KLPP (SKLPP can preserve within-class geometric structures by using label information. However, the conventional SKLPP algorithm suffers from the kernel selection problem, which has a significant impact on the performance of SKLPP. In order to overcome this limitation, a method named supervised kernel optimized LPP (SKOLPP is proposed in this paper, which can maximize the class separability in kernel learning. The proposed method maps the data from the original space to a higher dimensional kernel space using a data-dependent kernel. The adaptive parameters of the data-dependent kernel are automatically calculated by optimizing an objective function. Consequently, the nonlinear features extracted by SKOLPP have larger discriminative ability compared with SKLPP and are more adaptive to the input data. Experimental results on the ORL, Yale, AR, and Palmprint databases showed the effectiveness of the proposed method.

  1. Comparative histological and transcriptional analysis of maize kernels infected with Aspergillus flavus and Fusarium verticillioides

    Science.gov (United States)

    Aspergillus flavus and Fusarium verticillioides infect maize kernels and contaminate them with the mycotoxins aflatoxin and fumonisin, respectively. Combined histological examination of fungal colonization and transcriptional changes in maize kernels at 4, 12, 24, 48, and 72 hours post inoculation (...

  2. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Directory of Open Access Journals (Sweden)

    Hailun Wang

    2017-01-01

    Full Text Available The support vector regression algorithm is widely used in the fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of the mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined into a state vector. Thus, the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel's weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
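
    The mixed kernel at the heart of such methods is a convex combination of base kernels. A minimal sketch follows; the RBF/polynomial pairing and the weight `lam` are illustrative (in the paper these values would be estimated by the cubature Kalman filter, which is not reproduced here):

    ```python
    import math

    def rbf(x, y, gamma=1.0):
        # Gaussian (RBF) base kernel
        return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

    def poly(x, y, degree=2, c=1.0):
        # Polynomial base kernel
        return (sum(a * b for a, b in zip(x, y)) + c) ** degree

    def mixed_kernel(x, y, lam=0.6):
        # k_mix = lam * k_rbf + (1 - lam) * k_poly, with 0 <= lam <= 1;
        # lam plays the role of the fusion coefficient being estimated
        return lam * rbf(x, y) + (1.0 - lam) * poly(x, y)

    x = [1.0, 1.0]
    print(mixed_kernel(x, x))  # 0.6 * 1 + 0.4 * 9 = 4.2
    ```

    Because a convex combination of positive semi-definite kernels is itself positive semi-definite, `mixed_kernel` remains a valid SVR kernel for any `lam` in [0, 1].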

  3. The heating of UO_2 kernels in argon gas medium on the physical properties of sintered UO_2 kernels

    International Nuclear Information System (INIS)

    Damunir; Sri Rinanti Susilowati; Ariyani Kusuma Dewi

    2015-01-01

    The heating of UO_2 kernels in an argon gas medium, and its effect on the physical properties of sintered UO_2 kernels, was studied. The UO_2 kernels were heated in a bed-type sinter reactor. The sample used was UO_2 kernels resulting from reduction at 800 °C for 3 hours, with a density of 8.13 g/cm³, porosity of 0.26, O/U ratio of 2.05, diameter of 1146 μm and sphericity of 1.05. The sample was put into the sinter reactor, which was then evacuated by flowing argon gas at 180 mmHg pressure to drain the air from the reactor. After that, the cooling water and argon gas were continuously flowed at a pressure of 5 mPa with a velocity of 1.5 liters/minute. The reactor temperature was increased and varied between 1200 and 1500 °C for 1-4 hours. The sintered UO_2 kernels resulting from the study were analyzed in terms of their physical properties, including density, porosity, diameter, sphericity, and specific surface area. The density was analyzed using a pycnometer with CCl_4 solution. The porosity was determined using the Haynes equation. The diameter and sphericity were measured using a Dino-lite microscope. The specific surface area was determined using a Nova-1000 surface area meter. The results showed that heating UO_2 kernels in an argon gas medium influenced the physical properties of the sintered UO_2 kernels. The best conditions were heating at 1400 °C for 2 hours, which produced sintered UO_2 kernels with a density of 10.14 g/ml, porosity of 7%, diameter of 893 μm, sphericity of 1.07 and specific surface area of 4.68 m²/g, with a solidification shrinkage of 22%. (author)

  4. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    International Nuclear Information System (INIS)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays
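
    The distance-to-collision biasing underlying the exponential transform can be sketched as importance sampling with a compensating weight. This toy one-dimensional version uses illustrative parameter values (cross section `sigma_t`, stretching parameter `p`), not values from the paper:

    ```python
    import math
    import random

    def sample_biased_distance(sigma_t, p, rng):
        # Analog pdf:  sigma_t * exp(-sigma_t * d)
        # Biased pdf:  sigma_b * exp(-sigma_b * d), sigma_b = sigma_t * (1 - p),
        # which stretches the mean free path (0 <= p < 1)
        sigma_b = sigma_t * (1.0 - p)
        d = -math.log(rng.random()) / sigma_b
        # Multiply the particle weight by analog_pdf / biased_pdf to stay unbiased
        weight = (sigma_t / sigma_b) * math.exp(-(sigma_t - sigma_b) * d)
        return d, weight

    rng = random.Random(42)
    sigma_t = 2.0
    pairs = [sample_biased_distance(sigma_t, 0.5, rng) for _ in range(100000)]
    # Self-normalized estimate of the analog mean free path, 1/sigma_t = 0.5
    mean_free_path = sum(d * w for d, w in pairs) / sum(w for _, w in pairs)
    ```

    The same pattern extends to biasing a scattering kernel: sample the scattering angle from a modified distribution and multiply the weight by the ratio of the true to the biased angular pdf.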

  5. The dipole form of the gluon part of the BFKL kernel

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Grabovsky, A.V.; Papa, A.

    2007-01-01

    The dipole form of the gluon part of the color singlet BFKL kernel in the next-to-leading order (NLO) is obtained in the coordinate representation by direct transfer from the momentum representation, where the kernel was calculated before. With this paper the transformation of the NLO BFKL kernel to the dipole form, started a few months ago with the quark part of the kernel, is completed

  6. Multivariable Christoffel-Darboux Kernels and Characteristic Polynomials of Random Hermitian Matrices

    Directory of Open Access Journals (Sweden)

    Hjalmar Rosengren

    2006-12-01

    Full Text Available We study multivariable Christoffel-Darboux kernels, which may be viewed as reproducing kernels for antisymmetric orthogonal polynomials, and also as correlation functions for products of characteristic polynomials of random Hermitian matrices. Using their interpretation as reproducing kernels, we obtain simple proofs of Pfaffian and determinant formulas, as well as Schur polynomial expansions, for such kernels. In subsequent work, these results are applied in combinatorics (enumeration of marked shifted tableaux) and number theory (representation of integers as sums of squares).

  7. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    Science.gov (United States)

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Flexible Scheduling by Deadline Inheritance in Soft Real Time Kernels

    NARCIS (Netherlands)

    Jansen, P.G.; Wygerink, Emiel

    1996-01-01

    Current Hard Real Time (HRT) kernels have their timely behaviour guaranteed at the cost of a rather restrictive use of the available resources. This makes HRT scheduling techniques inadequate for use in a Soft Real Time (SRT) environment, where we can make a considerable profit by a better and more

  9. MARMER, a flexible point-kernel shielding code

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1990-01-01

    A point-kernel shielding code entitled MARMER is described. It has several options with respect to geometry input, source description and detector point description which extend the flexibility and usefulness of the code, and which are especially useful in spent fuel shielding. MARMER has been validated using the TN12 spent fuel shipping cask benchmark. (author)

  10. MARMER, a flexible point-kernel shielding code

    Energy Technology Data Exchange (ETDEWEB)

    Kloosterman, J.L.; Hoogenboom, J.E. (Interuniversitair Reactor Inst., Delft (Netherlands))

    1990-01-01

    A point-kernel shielding code entitled MARMER is described. It has several options with respect to geometry input, source description and detector point description which extend the flexibility and usefulness of the code, and which are especially useful in spent fuel shielding. MARMER has been validated using the TN12 spent fuel shipping cask benchmark. (author).

  11. Mycological deterioration of stored palm kernels recovered from oil ...

    African Journals Online (AJOL)

    Palm kernels obtained from Pioneer Oil Mill Ltd. were stored for eight (8) weeks and examined for their microbiological quality and proximate composition. Seven (7) different fungal species were isolated by serial dilution plate technique. The fungal species included Aspergillus flavus Link; A nidulans Eidem; A niger ...

  12. Metabolite identification through multiple kernel learning on fragmentation trees.

    Science.gov (United States)

    Shen, Huibin; Dührkop, Kai; Böcker, Sebastian; Rousu, Juho

    2014-06-15

    Metabolite identification from tandem mass spectrometric data is a key task in metabolomics. Various computational methods have been proposed for the identification of metabolites from tandem mass spectra. Fragmentation tree methods explore the space of possible ways in which the metabolite can fragment, and base the metabolite identification on scoring of these fragmentation trees. Machine learning methods have been used to map mass spectra to molecular fingerprints; predicted fingerprints, in turn, can be used to score candidate molecular structures. Here, we combine fragmentation tree computations with kernel-based machine learning to predict molecular fingerprints and identify molecular structures. We introduce a family of kernels capturing the similarity of fragmentation trees, and combine these kernels using recently proposed multiple kernel learning approaches. Experiments on two large reference datasets show that the new methods significantly improve molecular fingerprint prediction accuracy. These improvements result in better metabolite identification, doubling the number of metabolites ranked at the top position of the candidates list. © The Author 2014. Published by Oxford University Press.

  13. Notes on a storage manager for the Clouds kernel

    Science.gov (United States)

    Pitts, David V.; Spafford, Eugene H.

    1986-01-01

    The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.

  14. On Convergence of Kernel Density Estimates in Particle Filtering

    Czech Academy of Sciences Publication Activity Database

    Coufal, David

    2016-01-01

    Roč. 52, č. 5 (2016), s. 735-756 ISSN 0023-5954 Grant - others:GA ČR(CZ) GA16-03708S; SVV(CZ) 260334/2016 Institutional support: RVO:67985807 Keywords : Fourier analysis * kernel methods * particle filter Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.379, year: 2016

  15. Screening of the kernels of Pentadesma butyracea from various ...

    African Journals Online (AJOL)

  16. Some engineering properties of shelled and kernel tea ( Camellia ...

    African Journals Online (AJOL)

    Some engineering properties (size dimensions, sphericity, volume, bulk and true densities, friction coefficient, colour characteristics and mechanical behaviour such as rupture ... The static coefficients of friction of shelled and kernel tea seeds, for both the large and small sizes, had higher values on rubber than on the other friction surfaces.

  17. PERI - auto-tuning memory-intensive kernels for multicore

    International Nuclear Information System (INIS)

    Williams, S; Carter, J; Oliker, L; Shalf, J; Yelick, K; Bailey, D; Datta, K

    2008-01-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to sparse matrix-vector multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the high-performance computing literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4x improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.

  18. Deep sequencing of RNA from ancient maize kernels

    DEFF Research Database (Denmark)

    Fordyce, Sarah Louise; Avila Arcos, Maria del Carmen; Rasmussen, Morten

    2013-01-01

    The characterization of biomolecules from ancient samples can shed otherwise unobtainable insights into the past. Despite the fundamental role of transcriptomal change in evolution, the potential of ancient RNA remains unexploited - perhaps due to dogma associated with the fragility of RNA. We hy...... maize kernels. The results suggest that ancient seed transcriptomics may offer a powerful new tool with which to study plant domestication....

  19. Effect of Coconut (Cocos nucifera) and Palm Kernel (Elaeis ...

    African Journals Online (AJOL)

    Effect of Coconut (Cocos nucifera) and Palm Kernel (Elaeis guineensis) Oil Supplemented Diets on Serum Lipid Profile of Albino Wistar Rats. ... were fed normal rat pellet. At the end of the feeding period, animals were anaesthetized under chloroform vapor, dissected, and blood was obtained via cardiac puncture into tubes.

  20. Calculation of Volterra kernels for solutions of nonlinear differential equations

    NARCIS (Netherlands)

    van Hemmen, JL; Kistler, WM; Thomas, EGF

    2000-01-01

    We consider vector-valued autonomous differential equations of the form x' = f(x) + φ with analytic f and investigate the nonanticipative solution operator φ ↦ A(φ) in terms of its Volterra series. We show that Volterra kernels of order > 1 occurring in the series expansion of

  1. Moderate deviations principles for the kernel estimator of ...

    African Journals Online (AJOL)

    Abstract. The aim of this paper is to provide pointwise and uniform moderate deviations principles for the kernel estimator of a nonrandom regression function. Moreover, we give an application of these moderate deviations principles to the construction of confidence regions for the regression function. Resume. The objective of ...

  2. Hollow microspheres with a tungsten carbide kernel for PEMFC application.

    Science.gov (United States)

    d'Arbigny, Julien Bernard; Taillades, Gilles; Marrony, Mathieu; Jones, Deborah J; Rozière, Jacques

    2011-07-28

    Tungsten carbide microspheres comprising an outer shell and a compact kernel, prepared by a simple hydrothermal method, exhibit a very high surface area, promoting a high dispersion of platinum nanoparticles and an exceptionally high electrochemically active surface area (EAS) stability compared to the usual Pt/C electrocatalysts used for PEMFC applications.

  3. Fractional quantum integral operator with general kernels and applications

    Science.gov (United States)

    Babakhani, Azizollah; Neamaty, Abdolali; Yadollahzadeh, Milad; Agahi, Hamzeh

    In this paper, we first introduce the concept of the fractional quantum integral with general kernels, which generalizes several types of fractional integrals known from the literature. Then we give more general versions of some integral inequalities for this operator, thus generalizing some previous results obtained by many researchers [2, 8, 25, 29, 30, 36].

  4. Optimizing Multiple Kernel Learning for the Classification of UAV Data

    Directory of Open Access Journals (Sweden)

    Caroline M. Gevaert

    2016-12-01

    Full Text Available Unmanned Aerial Vehicles (UAVs are capable of providing high-quality orthoimagery and 3D information in the form of point clouds at a relatively low cost. Their increasing popularity stresses the necessity of understanding which algorithms are especially suited for processing the data obtained from UAVs. The features that are extracted from the point cloud and imagery have different statistical characteristics and can be considered as heterogeneous, which motivates the use of Multiple Kernel Learning (MKL for classification problems. In this paper, we illustrate the utility of applying MKL for the classification of heterogeneous features obtained from UAV data through a case study of an informal settlement in Kigali, Rwanda. Results indicate that MKL can achieve a classification accuracy of 90.6%, a 5.2% increase over a standard single-kernel Support Vector Machine (SVM. A comparison of seven MKL methods indicates that linearly-weighted kernel combinations based on simple heuristics are competitive with respect to computationally-complex, non-linear kernel combination methods. We further underline the importance of utilizing appropriate feature grouping strategies for MKL, which has not been directly addressed in the literature, and we propose a novel, automated feature grouping method that achieves a high classification accuracy for various MKL methods.
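
    One example of the "simple heuristics" for linearly-weighted kernel combinations is to weight each base kernel matrix by its alignment with the labels. A toy sketch follows; kernel-target alignment is a standard heuristic, though not necessarily the exact one used in the paper:

    ```python
    import math

    def frobenius_inner(A, B):
        # <A, B>_F = sum_ij A_ij * B_ij
        return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

    def alignment(K, y):
        # Kernel-target alignment: <K, yy^T>_F / (||K||_F * ||yy^T||_F)
        Y = [[yi * yj for yj in y] for yi in y]
        return frobenius_inner(K, Y) / math.sqrt(
            frobenius_inner(K, K) * frobenius_inner(Y, Y))

    def combine_kernels(kernels, y):
        # Weight each base kernel matrix by its (clipped) alignment with labels
        scores = [max(alignment(K, y), 0.0) for K in kernels]
        total = sum(scores) or 1.0
        weights = [s / total for s in scores]
        n = len(y)
        K_mix = [[sum(w * K[i][j] for w, K in zip(weights, kernels))
                  for j in range(n)] for i in range(n)]
        return K_mix, weights

    y = [1, 1, -1]
    K_id = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # uninformative
    K_lbl = [[float(yi * yj) for yj in y] for yi in y]           # label-aligned
    K_mix, weights = combine_kernels([K_id, K_lbl], y)
    # The label-aligned kernel receives the larger weight
    ```

    Grouping heterogeneous features (e.g. spectral vs. geometric) before computing one base kernel per group, as the paper advocates, slots directly into the `kernels` list above.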

  5. Corruption clubs: empirical evidence from kernel density estimates

    NARCIS (Netherlands)

    Herzfeld, T.; Weiss, Ch.

    2007-01-01

    A common finding of many analytical models is the existence of multiple equilibria of corruption. Countries characterized by the same economic, social and cultural background do not necessarily experience the same levels of corruption. In this article, we use Kernel Density Estimation techniques to

  6. A compact kernel for the calculus of inductive constructions

    Indian Academy of Sciences (India)

    CIC) implemented inside the Matita Interactive Theorem Prover. The design of the new kernel has been completely revisited since the first release, resulting in a remarkably compact implementation of about 2300 lines of OCaml code. The work ...

  7. Finite Gaussian Mixture Approximations to Analytically Intractable Density Kernels

    DEFF Research Database (Denmark)

    Khorunzhina, Natalia; Richard, Jean-Francois

    The objective of the paper is that of constructing finite Gaussian mixture approximations to analytically intractable density kernels. The proposed method is adaptive in that terms are added one at the time and the mixture is fully re-optimized at each step using a distance measure that approxima...
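    A minimal sketch of the idea (not the paper's algorithm): components are added one at a time, each seeded where the current approximation errs most, and the whole mixture is re-optimized against an L2 distance after every addition. A gamma density stands in for the analytically intractable kernel.

    ```python
    # Toy version of incremental Gaussian-mixture approximation: add one
    # component per step, re-optimize all parameters each time.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm, gamma

    grid = np.linspace(0.01, 12, 300)
    dx = grid[1] - grid[0]
    target = gamma.pdf(grid, a=2.0)  # stand-in for an intractable density kernel

    def mixture(params, k):
        """Mixture with params = [weights(k), means(k), scales(k)]."""
        w = np.abs(params[:k])
        w = w / (w.sum() + 1e-12)
        mu = params[k:2 * k]
        sig = np.abs(params[2 * k:3 * k]) + 1e-3
        return sum(wi * norm.pdf(grid, mi, si) for wi, mi, si in zip(w, mu, sig))

    def l2(params, k):
        """Discretized L2 distance between mixture and target."""
        return np.sum((mixture(params, k) - target) ** 2) * dx

    # Start with one component and fully optimize it.
    params = minimize(l2, np.array([1.0, 1.5, 1.0]), args=(1,),
                      method="Nelder-Mead").x
    for k in range(2, 4):
        # Seed the new component where the current fit is worst.
        worst = grid[np.argmax(np.abs(mixture(params, k - 1) - target))]
        params = np.concatenate([params[:k - 1], [0.5],                 # weights
                                 params[k - 1:2 * (k - 1)], [worst],    # means
                                 params[2 * (k - 1):], [1.0]])          # scales
        params = minimize(l2, params, args=(k,), method="Nelder-Mead",
                          options={"maxiter": 4000}).x
    print(round(l2(params, 3), 4))
    ```

    The re-optimization of all parameters at each step, rather than freezing earlier components, is the feature the abstract emphasizes.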

  8. Disinfection studies of Nahar (Mesua ferrea) seed kernel oil using ...

    African Journals Online (AJOL)

    GREGORY

    2011-12-16

    ... with a k value of -0.040. Key words: Nahar (Mesua ferrea) seed kernel oil, extraction, gum Arabic, disinfection, kinetics. Disinfection plays a key role in the reclamation and reuse of wastewater by eliminating infectious diseases; this, in part, augments domestic water supply and decreases ...
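    The reported rate constant is consistent with first-order (Chick's law) inactivation kinetics; the negative sign denotes decay. A hypothetical illustration, with the time units assumed to be minutes:

    ```python
    # Chick's-law sketch: N(t)/N0 = exp(-k*t), using the reported
    # magnitude k = 0.040 (units assumed, data hypothetical).
    import numpy as np

    k = 0.040                       # assumed 1/min; sign convention: decay
    t = np.array([0.0, 10, 20, 30, 60])
    survival = np.exp(-k * t)       # surviving fraction N(t)/N0
    log_removal = -np.log10(survival)
    print([round(s, 3) for s in survival])
    ```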

  9. Improved Interpolation Kernels for Super-resolution Algorithms

    DEFF Research Database (Denmark)

    Rasti, Pejman; Orlova, Olga; Tamberg, Gert

    2016-01-01

    Super resolution (SR) algorithms are widely used in forensics investigations to enhance the resolution of images captured by surveillance cameras. Such algorithms usually use a common interpolation algorithm to generate an initial guess for the desired high resolution (HR) image. This initial guess...... when their original interpolation kernel is replaced by the ones introduced in this work....
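    The step the abstract refers to, generating an interpolated initial guess of the high-resolution image, can be sketched as follows (illustrative only; the paper's improved kernels would replace the standard cubic kernel used here):

    ```python
    # SR pipelines start from an interpolated initial HR guess; the
    # interpolation kernel (nearest, bilinear, cubic, or an improved one)
    # determines the quality of that guess. Here: 2x cubic upscaling.
    import numpy as np
    from scipy.ndimage import zoom

    lr = np.arange(16, dtype=float).reshape(4, 4)  # stand-in low-res image
    hr_guess = zoom(lr, 2, order=3)                # 2x upscale, cubic kernel
    print(hr_guess.shape)
    ```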

  10. Briquetting of Palm Kernel Shell | Ugwu | Journal of Applied ...

    African Journals Online (AJOL)

    In several developing countries, briquettes from agricultural residues contribute significantly to the energy mix especially for small scale and household requirements. In this work, briquettes were produced from Palm kernel shell. This was achieved by carbonising the shell to get the charcoal followed by the pulverization of ...

  11. Controller synthesis for L2 behaviors using rational kernel representations

    NARCIS (Netherlands)

    Mutsaers, M.E.C.; Weiland, S.

    2008-01-01

    This paper considers the controller synthesis problem for the class of linear time-invariant L2 behaviors. We introduce classes of LTI L2 systems whose behavior can be represented as the kernel of a rational operator. Given a plant and a controlled system in this class, an algorithm is developed

  12. Recent sea level change analysed with kernel EOF

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Andersen, Ole Baltazar; Knudsen, Per

    2009-01-01

    -2008. Preliminary analysis shows some interesting features related to climate change and particularly the pulsing of the El Niño/Southern Oscillation. Large scale ocean events associated with the El Niño/Southern Oscillation related signals are conveniently concentrated in the first SSH kernel EOF modes....
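    Kernel EOF analysis is ordinary EOF (principal component) analysis carried out in a kernel-induced feature space. A sketch of the idea on synthetic data (an ENSO-like oscillation imprinted on a fixed spatial pattern; none of this is the paper's altimetry data):

    ```python
    # Kernel EOF sketch via KernelPCA: the leading mode of a synthetic
    # sea-surface-height record should track the dominant ENSO-like signal.
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(2)
    t = np.arange(240)                       # 20 years of monthly fields
    enso = np.sin(2 * np.pi * t / 48.0)      # ~4-year ENSO-like pulsing
    pattern = rng.normal(0, 1, 50)           # fixed spatial pattern, 50 grid points
    ssh = np.outer(enso, pattern) + 0.3 * rng.normal(0, 1, (240, 50))

    kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.01)
    modes = kpca.fit_transform(ssh)          # leading kernel EOF time series

    corr = abs(np.corrcoef(modes[:, 0], enso)[0, 1])
    print(round(corr, 2))
    ```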

  13. Polynomial kernels for deletion to classes of acyclic digraphs

    NARCIS (Netherlands)

    Mnich, Matthias; van Leeuwen, E.J.

    2017-01-01

    We consider the problem to find a set X of vertices (or arcs) with |X| ≤ k in a given digraph G such that D = G − X is an acyclic digraph. In its generality, this is Directed Feedback Vertex Set (or Directed Feedback Arc Set); the existence of a polynomial kernel for these problems is a notorious

  14. Nutritional evaluation of fermented palm kernel cake using red tilapia

    African Journals Online (AJOL)

    The use of palm kernel cake (PKC) and other plant residues in fish feeding especially under extensive aquaculture have been in practice for a long time. On the other hand, the use of microbial-based feedstuff is increasing. In this study, the performance of red tilapia raised on Trichoderma longibrachiatum fermented PKC ...

  15. Preparation and characterization of active carbon using palm kernel ...

    African Journals Online (AJOL)

    Activated carbons were prepared from Palm kernel shells. Carbonization temperature was 600 °C, at a residence time of 5 min for each process. Chemical activation was done by heating a mixture of carbonized material and the activating agents at a temperature of 70 °C to form a paste, followed by subsequent cooling and ...

  16. Matrix kernels for MEG and EEG source localization and imaging

    International Nuclear Information System (INIS)

    Mosher, J.C.; Lewis, P.S.; Leahy, R.M.

    1994-01-01

    The most widely used model for electroencephalography (EEG) and magnetoencephalography (MEG) assumes a quasi-static approximation of Maxwell's equations and a piecewise homogeneous conductor model. Both models contain an incremental field element that linearly relates an incremental source element (current dipole) to the field or voltage at a distant point. The explicit form of the field element is dependent on the head modeling assumptions and sensor configuration. Proper characterization of this incremental element is crucial to the inverse problem. The field element can be partitioned into the product of a vector dependent on sensor characteristics and a matrix kernel dependent only on head modeling assumptions. We present here the matrix kernels for the general boundary element model (BEM) and for MEG spherical models. We show how these kernels are easily interchanged in a linear algebraic framework that includes sensor specifics such as orientation and gradiometer configuration. We then describe how this kernel is easily applied to "gain" or "transfer" matrices used in multiple dipole and source imaging models.

  17. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
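    The search-based idea can be illustrated in miniature (this is only a toy, not the authors' code generators): benchmark several candidate implementations of the same kernel on the target machine and keep the fastest, rather than hand-tuning per platform. Here the candidates are sparse-matrix storage formats for SpMV.

    ```python
    # Toy auto-tuner: time each candidate SpMV implementation and select
    # the fastest on this machine. Matrix and sizes are arbitrary.
    import time
    import numpy as np
    from scipy.sparse import random as sprandom, csr_matrix, csc_matrix

    A_coo = sprandom(2000, 2000, density=0.01, random_state=0)
    x = np.ones(2000)

    candidates = {
        "csr": csr_matrix(A_coo),
        "csc": csc_matrix(A_coo),
        "coo": A_coo.tocoo(),
    }

    def bench(A, x, reps=50):
        """Wall-clock time for `reps` sparse matrix-vector products."""
        t0 = time.perf_counter()
        for _ in range(reps):
            A @ x
        return time.perf_counter() - t0

    timings = {name: bench(A, x) for name, A in candidates.items()}
    best = min(timings, key=timings.get)
    print(best)  # the variant the auto-tuner would select on this machine
    ```

    Real auto-tuners search far larger spaces (register blocking, prefetching, SIMDization), but the structure is the same: generate variants, measure, select.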

  18. An Adaptive Genetic Association Test Using Double Kernel Machines.

    Science.gov (United States)

    Zhan, Xiang; Epstein, Michael P; Ghosh, Debashis

    2015-10-01

    Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests considering all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within the pathway and test for the overall genetic pathway effect. This DKM procedure first uses the garrote kernel machines (GKM) test for the purposes of subset selection and then the least squares kernel machine (LSKM) test for testing the effect of the subset of genes. An appealing feature of the kernel machine framework is that it can provide a flexible and unified method for multi-dimensional modeling of the genetic pathway effect allowing for both parametric and nonparametric components. This DKM approach is illustrated with application to simulated data as well as to data from a neuroimaging genetics study.
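    A rough sketch of the two-stage structure (not the authors' DKM implementation: marginal correlation screening stands in for the GKM test, and kernel ridge regression stands in for the LSKM score test):

    ```python
    # Two-stage sketch: screen genes within a pathway, then fit a kernel
    # machine on the retained subset. Data and thresholds are synthetic.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(3)
    n, p = 100, 20                   # subjects, genes in the pathway
    X = rng.normal(0, 1, (n, p))
    # Two causal genes, one with a nonlinear effect.
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.3 * rng.normal(0, 1, n)

    # Stage 1 (selection): keep genes with non-trivial marginal association.
    score = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
    keep = np.where(score > 0.2)[0]

    # Stage 2 (testing): nonparametric kernel-machine fit on the subset.
    model = KernelRidge(kernel="rbf", gamma=0.5, alpha=1.0).fit(X[:, keep], y)
    r2 = model.score(X[:, keep], y)
    print(len(keep), round(r2, 2))
    ```

    The appeal noted in the abstract is visible even here: the kernel machine accommodates the nonlinear effect of the first gene without specifying its functional form.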

  19. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    Science.gov (United States)

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…
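    In the kernel equating framework, PRE compares the moments of the equated scores with those of the reference-form scores, PRE(p) = 100 (μp(e(X)) − μp(Y)) / μp(Y). A sketch with simulated scores (not the article's data sets):

    ```python
    # Percent relative error (PRE) diagnostic: compare the first ten raw
    # moments of equated scores against reference-form scores.
    import numpy as np

    rng = np.random.default_rng(4)
    y = rng.normal(50, 10, 5000)             # reference-form scores Y
    equated = y + rng.normal(0, 0.5, 5000)   # hypothetical equated scores e(X)

    def pre(equated, y, p):
        """PRE(p) = 100 * (mu_p(e(X)) - mu_p(Y)) / mu_p(Y), raw p-th moments."""
        mu_e, mu_y = np.mean(equated ** p), np.mean(y ** p)
        return 100.0 * (mu_e - mu_y) / mu_y

    values = [pre(equated, y, p) for p in range(1, 11)]
    print([round(v, 3) for v in values])
    ```

    Small PRE values across the first ten moments indicate that the equating function preserves the reference-form score distribution well.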

  20. Structured Kernel Subspace Learning for Autonomous Robot Navigation.

    Science.gov (United States)

    Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai

    2018-02-14

    This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to safely navigate in a dynamic environment due to challenges, such as the varying quality and complexity of training data with unwanted noises. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on the recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning in various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.
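    One ingredient of the approach, approximating a Gaussian-process kernel matrix by a low-rank, symmetric positive semi-definite matrix, can be sketched as follows (plain eigenvalue truncation here; the paper's method uses nuclear- and l1-norm minimization to additionally reject outliers):

    ```python
    # Low-rank SPSD approximation of a GP kernel matrix via truncated
    # eigendecomposition; clipping eigenvalues at zero preserves PSD-ness.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(5)
    X = rng.normal(0, 1, (60, 2))
    K = rbf_kernel(X, gamma=0.5)

    r = 10
    vals, vecs = np.linalg.eigh(K)               # ascending eigenvalues
    vals_r = np.clip(vals[-r:], 0, None)         # keep top-r, clip at zero
    K_r = (vecs[:, -r:] * vals_r) @ vecs[:, -r:].T

    rel_err = np.linalg.norm(K - K_r) / np.linalg.norm(K)
    print(round(rel_err, 3))
    ```

    Because RBF kernel matrices have fast-decaying spectra, a small rank r typically captures most of the structure, which is what makes such approximations useful inside GP regression.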