Atomic Interferometry with Detuned Counter-Propagating Electromagnetic Pulses
Energy Technology Data Exchange (ETDEWEB)
Tsang, Ming -Yee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2014-09-05
Atomic fountain interferometry uses atoms cooled with optical molasses to 1 μK, which are then launched in a fountain mode. The interferometer relies on the nonlinear Raman interaction of counter-propagating visible light pulses. We present models of these key transitions through a series of Hamiltonians. Our models, which have been verified against special cases with known solutions, allow us to incorporate the effects of non-ideal pulse shapes and realistic laser frequency or wavevector jitter.
Real-time reconfigurable counter-propagating beam-traps
DEFF Research Database (Denmark)
Tauro, Sandeep; Bañas, Andrew Rafael; Palima, Darwin
2010-01-01
We present a versatile technique that enhances the axial stability and range in counter-propagating (CP) beam-geometry optical traps. It is based on computer vision to track objects in unison with software implementation of feedback to stabilize particles. In this paper, we experimentally...... which simulates biosamples. By working on differences rather than absolute values, this feedback based technique makes CPtrapping nullify many of the commonly encountered pertubations such as fluctuations in the laser power, vibrations due to mechanical instabilities and other distortions emphasizing...
One loop partition function of six dimensional conformal gravity using heat kernel on AdS
Energy Technology Data Exchange (ETDEWEB)
Lovreković, Iva [Institute for Theoretical Physics, Technische Universität Wien,Wiedner Hauptstrasse 8-10/136, A-1040 Vienna (Austria)
2016-10-13
We compute the heat kernel for the Laplacians of symmetric transverse traceless fields of arbitrary spin on the AdS background in even number of dimensions using the group theoretic approach introduced in http://dx.doi.org/10.1007/JHEP11(2011)010 and apply it on the partition function of six dimensional conformal gravity. The obtained partition function consists of the Einstein gravity, conformal ghost and two modes that contain mass.
Alternative modes for optical trapping and manipulation using counter-propagating shaped beams
International Nuclear Information System (INIS)
Palima, D; Tauro, S; Glückstad, J; Lindballe, T B; Kristensen, M V; Stapelfeldt, H; Keiding, S R
2011-01-01
Counter-propagating beams have enabled the first stable three-dimensional optical trapping of microparticles and this procedure has been enhanced and developed over the years to achieve independent and interactive manipulation of multiple particles. In this work, we analyse counter-propagating shaped-beam traps that depart from the conventional geometry based on symmetric, coaxial counter-propagating beams. We show that projecting shaped beams with separation distances previously considered axially unstable can, in fact, enhance the axial and transverse trapping stiffnesses. We also show that deviating from using perfectly counter-propagating beams to use oblique beams can improve the axial stability of the traps and improve the axial trapping stiffness. These alternative geometries can be particularly useful for handling larger particles. These results hint at a rich potential for light shaping for optical trapping and manipulation using patterned counter-propagating beams, which still remains to be fully tapped
International Nuclear Information System (INIS)
Shvets, G.; Fisch, N.J.; Pukhov, A.
1999-01-01
The interaction of counter-propagating laser pulses in a plasma is considered. When the frequencies of the two lasers are close, nonlinear modification of the refraction index results in the mutual focusing of the two beams. A short (of order the plasma period) laser pulse can be nonlinearly focused by a long counter-propagating beam which extends over the entire guiding length. It is also demonstrated that a short ( p ) laser pulse can be superradiantly amplified by a counter-propagating long low-intensity pump while remaining ultra-short. Particle-in-Cell simulations indicate that pump depletion can be as high as 40%. This implies that the long pump is efficiently compressed in time without frequency chirping and pulse stretching, making the superradiant amplification an interesting alternative to the conventional method of producing ultra-intense pulses by the chirped-pulse amplification. copyright 1999 American Institute of Physics
Parametric Excitations of Fast Plasma Waves by Counter-propagating Laser Beams
International Nuclear Information System (INIS)
Shvets, G.; Fisch, N.J.
2001-01-01
Short- and long-wavelength plasma waves can become strongly coupled in the presence of two counter-propagating laser pump pulses detuned by twice the cold plasma frequency. What makes this four-wave interaction important is that the growth rate of the plasma waves occurs much faster than in the more obvious co-propagating geometry
The optics of gyrotropic crystals in the field of two counter-propagating ultrasound waves
International Nuclear Information System (INIS)
Gevorgyan, A H; Harutyunyan, E M; Hovhannisyan, M A; Matinyan, G K
2014-01-01
We consider oblique light propagation through a layer of a gyrotropic crystal in the field of two counter-propagating ultrasound waves. The problem is solved by Ambartsumyan's layer addition modified method. The results of the reflection spectra for different values of the problem parameters are presented. The possibilities of such system applications are discussed.
Quantum Einstein gravity. Advancements of heat kernel-based renormalization group studies
Energy Technology Data Exchange (ETDEWEB)
Groh, Kai
2012-10-15
The asymptotic safety scenario allows to define a consistent theory of quantized gravity within the framework of quantum field theory. The central conjecture of this scenario is the existence of a non-Gaussian fixed point of the theory's renormalization group flow, that allows to formulate renormalization conditions that render the theory fully predictive. Investigations of this possibility use an exact functional renormalization group equation as a primary non-perturbative tool. This equation implements Wilsonian renormalization group transformations, and is demonstrated to represent a reformulation of the functional integral approach to quantum field theory. As its main result, this thesis develops an algebraic algorithm which allows to systematically construct the renormalization group flow of gauge theories as well as gravity in arbitrary expansion schemes. In particular, it uses off-diagonal heat kernel techniques to efficiently handle the non-minimal differential operators which appear due to gauge symmetries. The central virtue of the algorithm is that no additional simplifications need to be employed, opening the possibility for more systematic investigations of the emergence of non-perturbative phenomena. As a by-product several novel results on the heat kernel expansion of the Laplace operator acting on general gauge bundles are obtained. The constructed algorithm is used to re-derive the renormalization group flow of gravity in the Einstein-Hilbert truncation, showing the manifest background independence of the results. The well-studied Einstein-Hilbert case is further advanced by taking the effect of a running ghost field renormalization on the gravitational coupling constants into account. A detailed numerical analysis reveals a further stabilization of the found non-Gaussian fixed point. Finally, the proposed algorithm is applied to the case of higher derivative gravity including all curvature squared interactions. This establishes an improvement
Quantum Einstein gravity. Advancements of heat kernel-based renormalization group studies
International Nuclear Information System (INIS)
Groh, Kai
2012-10-01
The asymptotic safety scenario allows to define a consistent theory of quantized gravity within the framework of quantum field theory. The central conjecture of this scenario is the existence of a non-Gaussian fixed point of the theory's renormalization group flow, that allows to formulate renormalization conditions that render the theory fully predictive. Investigations of this possibility use an exact functional renormalization group equation as a primary non-perturbative tool. This equation implements Wilsonian renormalization group transformations, and is demonstrated to represent a reformulation of the functional integral approach to quantum field theory. As its main result, this thesis develops an algebraic algorithm which allows to systematically construct the renormalization group flow of gauge theories as well as gravity in arbitrary expansion schemes. In particular, it uses off-diagonal heat kernel techniques to efficiently handle the non-minimal differential operators which appear due to gauge symmetries. The central virtue of the algorithm is that no additional simplifications need to be employed, opening the possibility for more systematic investigations of the emergence of non-perturbative phenomena. As a by-product several novel results on the heat kernel expansion of the Laplace operator acting on general gauge bundles are obtained. The constructed algorithm is used to re-derive the renormalization group flow of gravity in the Einstein-Hilbert truncation, showing the manifest background independence of the results. The well-studied Einstein-Hilbert case is further advanced by taking the effect of a running ghost field renormalization on the gravitational coupling constants into account. A detailed numerical analysis reveals a further stabilization of the found non-Gaussian fixed point. Finally, the proposed algorithm is applied to the case of higher derivative gravity including all curvature squared interactions. This establishes an improvement of
Counter-propagating wave interaction for contrast-enhanced ultrasound imaging
Renaud, G.; Bosch, J. G.; ten Kate, G. L.; Shamdasani, V.; Entrekin, R.; de Jong, N.; van der Steen, A. F. W.
2012-11-01
Most techniques for contrast-enhanced ultrasound imaging require linear propagation to detect nonlinear scattering of contrast agent microbubbles. Waveform distortion due to nonlinear propagation impairs their ability to distinguish microbubbles from tissue. As a result, tissue can be misclassified as microbubbles, and contrast agent concentration can be overestimated; therefore, these artifacts can significantly impair the quality of medical diagnoses. Contrary to biological tissue, lipid-coated gas microbubbles used as a contrast agent allow the interaction of two acoustic waves propagating in opposite directions (counter-propagation). Based on that principle, we describe a strategy to detect microbubbles that is free from nonlinear propagation artifacts. In vitro images were acquired with an ultrasound scanner in a phantom of tissue-mimicking material with a cavity containing a contrast agent. Unlike the default mode of the scanner using amplitude modulation to detect microbubbles, the pulse sequence exploiting counter-propagating wave interaction creates no pseudoenhancement behind the cavity in the contrast image.
Energy Technology Data Exchange (ETDEWEB)
He, Jiansen; Tu, Chuanyi; Wang, Linghua; Pei, Zhongtian [School of Earth and Space Sciences, Peking University, Beijing, 100871 (China); Marsch, Eckart [Institute for Experimental and Applied Physics, Christian-Albrechts-Universität zu Kiel, D-24118 Kiel (Germany); Chen, Christopher H. K. [Department of Physics, Imperial College London, London SW7 2AZ (United Kingdom); Zhang, Lei [Sate Key Laboratory of Space Weather, Chinese Academy of Sciences, Beijing 100190 (China); Salem, Chadi S.; Bale, Stuart D., E-mail: jshept@gmail.com [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States)
2015-11-10
Magnetohydronamic turbulence is believed to play a crucial role in heating laboratory, space, and astrophysical plasmas. However, the precise connection between the turbulent fluctuations and the particle kinetics has not yet been established. Here we present clear evidence of plasma turbulence heating based on diagnosed wave features and proton velocity distributions from solar wind measurements by the Wind spacecraft. For the first time, we can report the simultaneous observation of counter-propagating magnetohydrodynamic waves in the solar wind turbulence. As opposed to the traditional paradigm with counter-propagating Alfvén waves (AWs), anti-sunward AWs are encountered by sunward slow magnetosonic waves (SMWs) in this new type of solar wind compressible turbulence. The counter-propagating AWs and SWs correspond, respectively, to the dominant and sub-dominant populations of the imbalanced Elsässer variables. Nonlinear interactions between the AWs and SMWs are inferred from the non-orthogonality between the possible oscillation direction of one wave and the possible propagation direction of the other. The associated protons are revealed to exhibit bi-directional asymmetric beams in their velocity distributions: sunward beams appear in short, narrow patterns and anti-sunward in broad extended tails. It is suggested that multiple types of wave–particle interactions, i.e., cyclotron and Landau resonances with AWs and SMWs at kinetic scales, are taking place to jointly heat the protons perpendicular and in parallel.
Excitation of accelerating plasma waves by counter-propagating laser beams
International Nuclear Information System (INIS)
Shvets, Gennady; Fisch, Nathaniel J.; Pukhov, Alexander
2002-01-01
The conventional approach to exciting high phase velocity waves in plasmas is to employ a laser pulse moving in the direction of the desired particle acceleration. Photon downshifting then causes momentum transfer to the plasma and wave excitation. Novel approaches to plasma wake excitation, colliding-beam accelerator (CBA), which involve photon exchange between the long and short counter-propagating laser beams, are described. Depending on the frequency detuning Δω between beams and duration τ L of the short pulse, there are two approaches to CBA. First approach assumes (τ L ≅2/ω p ). Photons exchanged between the beams deposit their recoil momentum in the plasma driving the plasma wake. Frequency detuning between the beams determines the direction of the photon exchange, thereby controlling the phase of the plasma wake. This phase control can be used for reversing the slippage of the accelerated particles with respect to the wake. A variation on the same theme, super-beatwave accelerator, is also described. In the second approach, a short pulse with τ L >>ω p -1 detuned by Δω∼2ω p from the counter-propagating beam is employed. While parametric excitation of plasma waves by the electromagnetic beatwave at 2ω p of two co-propagating lasers was first predicted by Rosenbluth and Liu [M. N. Rosenbluth and C. S. Liu, Phys. Rev. Lett. 29, 701 (1972)], it is demonstrated that the two excitation beams can be counter-propagating. The advantages of using this geometry (higher instability growth rate, insensitivity to plasma inhomogeneity) are explained, and supporting numerical simulations presented
Excitation of Accelerating Plasma Waves by Counter-propagating Laser Beams
International Nuclear Information System (INIS)
Gennady Shvets; Nathaniel J. Fisch; Alexander Pukhov
2001-01-01
Generation of accelerating plasma waves using two counter-propagating laser beams is considered. Colliding-beam accelerator requires two laser pulses: the long pump and the short timing beam. We emphasize the similarities and differences between the conventional laser wakefield accelerator and the colliding-beam accelerator (CBA). The highly nonlinear nature of the wake excitation is explained using both nonlinear optics and plasma physics concepts. Two regimes of CBA are considered: (i) the short-pulse regime, where the timing beam is shorter than the plasma period, and (ii) the parametric excitation regime, where the timing beam is longer than the plasma period. Possible future experiments are also outlined
Counter-propagating wave interaction for contrast-enhanced ultrasound imaging
International Nuclear Information System (INIS)
Renaud, G; Bosch, J G; Ten Kate, G L; De Jong, N; Van der Steen, A F W; Shamdasani, V; Entrekin, R
2012-01-01
Most techniques for contrast-enhanced ultrasound imaging require linear propagation to detect nonlinear scattering of contrast agent microbubbles. Waveform distortion due to nonlinear propagation impairs their ability to distinguish microbubbles from tissue. As a result, tissue can be misclassified as microbubbles, and contrast agent concentration can be overestimated; therefore, these artifacts can significantly impair the quality of medical diagnoses. Contrary to biological tissue, lipid-coated gas microbubbles used as a contrast agent allow the interaction of two acoustic waves propagating in opposite directions (counter-propagation). Based on that principle, we describe a strategy to detect microbubbles that is free from nonlinear propagation artifacts. In vitro images were acquired with an ultrasound scanner in a phantom of tissue-mimicking material with a cavity containing a contrast agent. Unlike the default mode of the scanner using amplitude modulation to detect microbubbles, the pulse sequence exploiting counter-propagating wave interaction creates no pseudoenhancement behind the cavity in the contrast image. (fast track communication)
Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems.
Sakhre, Vandana; Jain, Sanjeev; Sapkal, Vilas S; Agarwal, Dev P
2015-01-01
Fuzzy Counter Propagation Neural Network (FCPN) controller design is developed, for a class of nonlinear dynamical systems. In this process, the weight connecting between the instar and outstar, that is, input-hidden and hidden-output layer, respectively, is adjusted by using Fuzzy Competitive Learning (FCL). FCL paradigm adopts the principle of learning, which is used to calculate Best Matched Node (BMN) which is proposed. This strategy offers a robust control of nonlinear dynamical systems. FCPN is compared with the existing network like Dynamic Network (DN) and Back Propagation Network (BPN) on the basis of Mean Absolute Error (MAE), Mean Square Error (MSE), Best Fit Rate (BFR), and so forth. It envisages that the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems and multiple input and single output (MISO) and a single input and single output (SISO) gas furnace Box-Jenkins time series data.
Zhao, Chenglong; LeBrun, Thomas W.
2015-08-01
Gold nanoparticles (GNP) have wide applications ranging from nanoscale heating to cancer therapy and biological sensing. Optical trapping of GNPs as small as 18 nm has been successfully achieved with laser power as high as 855 mW, but such high powers can damage trapped particles (particularly biological systems) as well heat the fluid, thereby destabilizing the trap. In this article, we show that counter propagating beams (CPB) can successfully trap GNP with laser powers reduced by a factor of 50 compared to that with a single beam. The trapping position of a GNP inside a counter-propagating trap can be easily modulated by either changing the relative power or position of the two beams. Furthermore, we find that under our conditions while a single-beam most stably traps a single particle, the counter-propagating beam can more easily trap multiple particles. This (CPB) trap is compatible with the feedback control system we recently demonstrated to increase the trapping lifetimes of nanoparticles by more than an order of magnitude. Thus, we believe that the future development of advanced trapping techniques combining counter-propagating traps together with control systems should significantly extend the capabilities of optical manipulation of nanoparticles for prototyping and testing 3D nanodevices and bio-sensing.
Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems
Directory of Open Access Journals (Sweden)
Vandana Sakhre
2015-01-01
Full Text Available Fuzzy Counter Propagation Neural Network (FCPN controller design is developed, for a class of nonlinear dynamical systems. In this process, the weight connecting between the instar and outstar, that is, input-hidden and hidden-output layer, respectively, is adjusted by using Fuzzy Competitive Learning (FCL. FCL paradigm adopts the principle of learning, which is used to calculate Best Matched Node (BMN which is proposed. This strategy offers a robust control of nonlinear dynamical systems. FCPN is compared with the existing network like Dynamic Network (DN and Back Propagation Network (BPN on the basis of Mean Absolute Error (MAE, Mean Square Error (MSE, Best Fit Rate (BFR, and so forth. It envisages that the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems and multiple input and single output (MISO and a single input and single output (SISO gas furnace Box-Jenkins time series data.
Destabilization of counter-propagating TAEs by off-axis, co-current Neutral Beam Injection
Podesta', M.; Fredrickson, E.; Gorelenkova, M.
2017-10-01
Neutral Beam injection (NBI) is a common tool to heat the plasma and drive current non-inductively in fusion devices. Energetic particles (EP) resulting from NBI can drive instabilities that are detrimental for the performance and the predictability of plasma discharges. A broad NBI deposition profile, e.g. by off-axis injection aiming near the plasma mid-radius, is often assumed to limit those undesired effects by reducing the radial gradient of the EP density, thus reducing the ``universal'' drive for instabilities. However, this work presents new evidence that off-axis NBI can also lead to undesired effects such as the destabilization of Alfvénic instabilities, as observed in NSTX-U plasmas. Experimental observations indicate that counter propagating toroidal AEs are destabilized as the radial EP density profile becomes hollow as a result of off-axis NBI. Time-dependent analysis with the TRANSP code, augmented by a reduced fast ion transport model (known as kick model), indicates that instabilities are driven by a combination of radial and energy gradients in the EP distribution. Understanding the mechanisms for wave-particle interaction, revealed by the phase space resolved analysis, is the basis to identify strategies to mitigate or suppress the observed instabilities. Work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences under Contract Number DE-AC02-09CH11466.
Directional nonlinear guided wave mixing: Case study of counter-propagating shear horizontal waves
Hasanian, Mostafa; Lissenden, Cliff J.
2018-04-01
While much nonlinear ultrasonics research has been conducted on higher harmonic generation, wave mixing provides the potential for sensitive measurements of incipient damage unencumbered by instrumentation nonlinearity. Studies of nonlinear ultrasonic wave mixing, both collinear and noncollinear, for bulk waves have shown the robust capability of wave mixing for early damage detection. One merit of bulk wave mixing lies in their non-dispersive nature, but guided waves enable inspection of otherwise inaccessible material and a variety of mixing options. Co-directional guided wave mixing was studied previously, but arbitrary direction guided wave mixing has not been addressed until recently. Wave vector analysis is applied to study variable mixing angles to find wave mode triplets (two primary waves and a secondary wave) resulting in the phase matching condition. As a case study, counter-propagating Shear Horizontal (SH) guided wave mixing is analyzed. SH wave interactions generate a secondary Lamb wave mode that is readily receivable. Reception of the secondary Lamb wave mode is compared for an angle beam transducer, an air coupled transducer, and a laser Doppler vibrometer (LDV). Results from the angle beam and air coupled transducers are quite consistent, while the LDV measurement is plagued by variability issues.
Counter-propagating dual-trap optical tweezers based on linear momentum conservation
International Nuclear Information System (INIS)
Ribezzi-Crivellari, M.; Huguet, J. M.; Ritort, F.
2013-01-01
We present a dual-trap optical tweezers setup which directly measures forces using linear momentum conservation. The setup uses a counter-propagating geometry, which allows momentum measurement on each beam separately. The experimental advantages of this setup include low drift due to all-optical manipulation, and a robust calibration (independent of the features of the trapped object or buffer medium) due to the force measurement method. Although this design does not attain the high-resolution of some co-propagating setups, we show that it can be used to perform different single molecule measurements: fluctuation-based molecular stiffness characterization at different forces and hopping experiments on molecular hairpins. Remarkably, in our setup it is possible to manipulate very short tethers (such as molecular hairpins with short handles) down to the limit where beads are almost in contact. The setup is used to illustrate a novel method for measuring the stiffness of optical traps and tethers on the basis of equilibrium force fluctuations, i.e., without the need of measuring the force vs molecular extension curve. This method is of general interest for dual trap optical tweezers setups and can be extended to setups which do not directly measure forces.
Counter-propagating dual-trap optical tweezers based on linear momentum conservation
Energy Technology Data Exchange (ETDEWEB)
Ribezzi-Crivellari, M.; Huguet, J. M. [Small Biosystems Lab, Dept. de Fisica Fonamental, Universitat de Barcelona, Avda. Diagonal 647, 08028 Barcelona (Spain); Ritort, F. [Small Biosystems Lab, Dept. de Fisica Fonamental, Universitat de Barcelona, Avda. Diagonal 647, 08028 Barcelona (Spain); Ciber-BBN de Bioingenieria, Biomateriales y Nanomedicina, Instituto de Salud Carlos III, Madrid (Spain)
2013-04-15
We present a dual-trap optical tweezers setup which directly measures forces using linear momentum conservation. The setup uses a counter-propagating geometry, which allows momentum measurement on each beam separately. The experimental advantages of this setup include low drift due to all-optical manipulation, and a robust calibration (independent of the features of the trapped object or buffer medium) due to the force measurement method. Although this design does not attain the high-resolution of some co-propagating setups, we show that it can be used to perform different single molecule measurements: fluctuation-based molecular stiffness characterization at different forces and hopping experiments on molecular hairpins. Remarkably, in our setup it is possible to manipulate very short tethers (such as molecular hairpins with short handles) down to the limit where beads are almost in contact. The setup is used to illustrate a novel method for measuring the stiffness of optical traps and tethers on the basis of equilibrium force fluctuations, i.e., without the need of measuring the force vs molecular extension curve. This method is of general interest for dual trap optical tweezers setups and can be extended to setups which do not directly measure forces.
Qiu, Wei; Liu, Jianjun; Wang, Yuda; Yang, Yujing; Gao, Yuan; Lv, Pin; Jiang, Qiuli
2018-04-01
In this paper, a general theory of coherent population oscillation effect in an Er3+ -doped fiber under the dual-frequency pumping laser with counter-propagation and co-propagation at room temperature is presented. Using the numerical simulation, in case of dual frequency light waves (1480 nm and 980 nm) with co-propagation and counter-propagation, we analyze the effect of the pump optical power ratio (M) on the group speed of light. The group velocity of light can be varied with the change of M. We research the time delay and fractional delay in an Er3+-doped fiber under the dual-frequency pumping laser with counter-propagation and co-propagation. Compared to the methods of the single pumping, the larger time delay can be got by using the technique of dual-frequency laser pumped fiber with co-propagation and counter-propagation.
Maes, C; Asbóth, J K; Ritsch, H
2007-05-14
We study the dynamics of a fast gaseous beam in a high Q ring cavity counter propagating a strong pump laser with large detuning from any particle optical resonance. As spontaneous emission is strongly suppressed the particles can be treated as polarizable point masses forming a dynamic moving mirror. Above a threshold intensity the particles exhibit spatial periodic ordering enhancing collective coherent backscattering which decelerates the beam. Based on a linear stability analysis in their accelerated rest frame we derive analytic bounds for the intensity threshold of this selforganization as a function of particle number, average velocity, kinetic temperature, pump detuning and resonator linewidth. The analytical results agree well with time dependent simulations of the N-particle motion including field damping and spontaneous emission noise. Our results give conditions which may be easily evaluated for stopping and cooling a fast molecular beam.
Ikuta, Rikizo; Nozaki, Shota; Yamamoto, Takashi; Koashi, Masato; Imoto, Nobuyuki
2017-07-06
Embedding a quantum state in a decoherence-free subspace (DFS) formed by multiple photons is one of the promising methods for robust entanglement distribution of photonic states over collective noisy channels. In practice, however, such a scheme suffers from a low efficiency proportional to transmittance of the channel to the power of the number of photons forming the DFS. The use of a counter-propagating coherent pulse can improve the efficiency to scale linearly in the channel transmission, but it achieves only protection against phase noises. Recently, it was theoretically proposed [Phys. Rev. A 87, 052325(2013)] that the protection against bit-flip noises can also be achieved if the channel has a reciprocal property. Here we experimentally demonstrate the proposed scheme to distribute polarization-entangled photon pairs against a general collective noise including the bit flip noise and the phase noise. We observed an efficient sharing rate scaling while keeping a high quality of the distributed entangled state. Furthermore, we show that the method is applicable not only to the entanglement distribution but also to the transmission of arbitrary polarization states of a single photon.
DEFF Research Database (Denmark)
Kristensen, M. V.; Lindballe, T.; Kylling, A.
2010-01-01
An experimental characterization of the 3D forces, acting on a trapped polystyrene bead in a counter-propagating beam geometry, is reported. Using a single optical trap with a large working distance (in the BioPhotonics Workstation), we simultaneously measure the transverse and longitudinal...... trapping force constants. Two different methods were used: The Drag force method and the Equipartition method. We show that the counterpropagating beams traps are simple harmonic for small displacements. The force constants reveal a transverse asymmetry as - = 9.7 pN/µm and + = 11.3 pN/µm (at a total laser...... power of 2x35 mW) for displacements in opposite directions. The Equipartition method is limited by mechanical noise and is shown to be applicable only when the total laser power in a single 10 µm counter-propagating trap is below 2x20 mW....
Gamow, George
2003-01-01
A distinguished physicist and teacher, George Gamow also possessed a special gift for making the intricacies of science accessible to a wide audience. In Gravity, he takes an enlightening look at three of the towering figures of science who unlocked many of the mysteries behind the laws of physics: Galileo, the first to take a close look at the process of free and restricted fall; Newton, originator of the concept of gravity as a universal force; and Einstein, who proposed that gravity is no more than the curvature of the four-dimensional space-time continuum.Graced with the author's own draw
Indian Academy of Sciences (India)
We study the cosmological dynamics for R p exp( λ R ) gravity theory in the metric formalism, using dynamical systems approach. Considering higher-dimensional FRW geometries in case of an imperfect fluid which has two different scale factors in the normal and extra dimensions, we find the exact solutions, and study its ...
International Nuclear Information System (INIS)
Bokhan, P. A.; Gugin, P. P.; Lavrukhin, M. A.; Zakrevsky, Dm. E.
2015-01-01
The switching rate of gas-discharge devices “kivotrons” based on the open discharge with counter-propagating electron beams has been experimentally studied. Structures with 2-cm 2 overall cathode area were examined. The switching time was found to show a monotonic decrease with increasing the working-gas helium pressure and with increasing the voltage across the discharge gap at breakdown. The minimum switching time was found to be ∼240 ps at 17 kV voltage, and the maximum rate of electric-current rise limited by the discharge-circuit inductance was 3 × 10 12 A/s
Morosi, J; Berti, N; Akrout, A; Picozzi, A; Guasoni, M; Fatome, J
2018-01-22
In this manuscript, we experimentally and numerically investigate the chaotic dynamics of the state-of-polarization in a nonlinear optical fiber due to the cross-interaction between an incident signal and its intense backward replica generated at the fiber-end through an amplified reflective delayed loop. Thanks to the cross-polarization interaction between the two-delayed counter-propagating waves, the output polarization exhibits fast temporal chaotic dynamics, which enable a powerful scrambling process with moving speeds up to 600-krad/s. The performance of this all-optical scrambler was then evaluated on a 10-Gbit/s On/Off Keying telecom signal achieving an error-free transmission. We also describe how these temporal and chaotic polarization fluctuations can be exploited as an all-optical random number generator. To this aim, a billion-bit sequence was experimentally generated and successfully confronted to the dieharder benchmarking statistic tools. Our experimental analysis are supported by numerical simulations based on the resolution of counter-propagating coupled nonlinear propagation equations that confirm the observed behaviors.
Del Sorbo, D.; Seipt, D.; Thomas, A. G. R.; Ridgers, C. P.
2018-06-01
It has recently been suggested that two counter-propagating, circularly polarized, ultra-intense lasers can induce a strong electron spin polarization at the magnetic node of the electromagnetic field that they setup (Del Sorbo et al 2017 Phys. Rev. A 96 043407). We confirm these results by considering a more sophisticated description that integrates over realistic trajectories. The electron dynamics is weakly affected by the variation of power radiated due to the spin polarization. The degree of spin polarization differs by approximately 5% if considering electrons initially at rest or already in a circular orbit. The instability of trajectories at the magnetic node induces a spin precession associated with the electron migration that establishes an upper temporal limit to the polarization of the electron population of about one laser period.
Anchal, Abhishek; K, Pradeep Kumar; O'Duill, Sean; Anandarajah, Prince M.; Landais, Pascal
2018-04-01
We present a scheme of frequency-degenerate mid-span spectral inversion (MSSI) for nonlinearity compensation in fiber-optic transmission systems. The spectral inversion is obtained by using counter-propagating dual pump four-wave mixing in a semiconductor optical amplifier (SOA). Frequency-degeneracy between signal and conjugate is achieved by keeping two pump frequencies symmetrical about the signal frequency. We simulate the performance of MSSI for nonlinearity compensation by scrutinizing the improvement of the Q-factor of a 200 Gbps QPSK signal transmitted over a standard single mode fiber, as a function of launch power for different span lengths and number of spans. We demonstrate a 7.5 dB improvement in the input power dynamic range and an almost 83% increase in the transmission length for optimum MSSI parameters of -2 dBm pump power and 400 mA SOA current.
International Nuclear Information System (INIS)
Merritt, E. C.; Doss, F. W.; Loomis, E. N.; Flippo, K. A.; Kline, J. L.
2015-01-01
Counter-propagating shear experiments conducted at the OMEGA Laser Facility have been evaluating the effect of target initial conditions, specifically the characteristics of a tracer foil located at the shear boundary, on Kelvin-Helmholtz instability evolution and experiment transition toward nonlinearity and turbulence in the high-energy-density (HED) regime. Experiments are focused on both identifying and uncoupling the dependence of the model initial turbulent length scale in variable-density turbulence models of k-ϵ type on competing physical instability seed lengths as well as developing a path toward fully developed turbulent HED experiments. We present results from a series of experiments controllably and independently varying two initial types of scale lengths in the experiment: the thickness and surface roughness (surface perturbation scale spectrum) of a tracer layer at the shear interface. We show that decreasing the layer thickness and increasing the surface roughness both have the ability to increase the relative mixing in the system, and thus theoretically decrease the time required to begin transitioning to turbulence in the system. We also show that we can connect a change in observed mix width growth due to increased foil surface roughness to an analytically predicted change in model initial turbulent scale lengths
Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping
2016-01-01
To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose robust kernel covariance operator (robust kernel CO) and robust kernel crosscovariance operator (robust kern...
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger
2011-01-01
In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...
Energy Technology Data Exchange (ETDEWEB)
Duff, I.
1994-12-31
This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: `Current status of user lever sparse BLAS`; Current status of the sparse BLAS toolkit`; and `Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit`.
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying the TL1 kernel a promising nonlinear kernel for classification tasks.
Gärtner, Thomas
2009-01-01
This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by
Locally linear approximation for Kernel methods : the Railway Kernel
Muñoz, Alberto; González, Javier
2008-01-01
In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general purpose kernel, like the RBF kernel, as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...
Motai, Yuichi
2015-01-01
Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger
2009-01-01
and find a remarkable level of agreement. We identify some features of the high-frequency data, which are challenging for realized kernels. They are when there are local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated......Realized kernels use high-frequency data to estimate daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock...
Adaptive metric kernel regression
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
2000-01-01
Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate...... regression by minimising a cross-validation estimate of the generalisation error. This allows to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
Adaptive Metric Kernel Regression
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
1998-01-01
Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
Kernel methods for deep learning
Cho, Youngmin
2012-01-01
We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger
We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...
DEFF Research Database (Denmark)
Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads
2011-01-01
In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...
Spafford, Eugene H.; Mckendry, Martin S.
1986-01-01
An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.
Stochastic Gravity: Theory and Applications
Directory of Open Access Journals (Sweden)
Hu Bei Lok
2008-05-01
Full Text Available Whereas semiclassical gravity is based on the semiclassical Einstein equation with sources given by the expectation value of the stress-energy tensor of quantum fields, stochastic semiclassical gravity is based on the Einstein–Langevin equation, which has, in addition, sources due to the noise kernel. The noise kernel is the vacuum expectation value of the (operator-valued stress-energy bitensor, which describes the fluctuations of quantum-matter fields in curved spacetimes. A new improved criterion for the validity of semiclassical gravity may also be formulated from the viewpoint of this theory. In the first part of this review we describe the fundamentals of this new theory via two approaches: the axiomatic and the functional. The axiomatic approach is useful to see the structure of the theory from the framework of semiclassical gravity, showing the link from the mean value of the stress-energy tensor to the correlation functions. The functional approach uses the Feynman–Vernon influence functional and the Schwinger–Keldysh closed-time-path effective action methods. In the second part, we describe three applications of stochastic gravity. First, we consider metric perturbations in a Minkowski spacetime, compute the two-point correlation functions of these perturbations and prove that Minkowski spacetime is a stable solution of semiclassical gravity. Second, we discuss structure formation from the stochastic-gravity viewpoint, which can go beyond the standard treatment by incorporating the full quantum effect of the inflaton fluctuations. Third, using the Einstein–Langevin equation, we discuss the backreaction of Hawking radiation and the behavior of metric fluctuations for both the quasi-equilibrium condition of a black-hole in a box and the fully nonequilibrium condition of an evaporating black hole spacetime. Finally, we briefly discuss the theoretical structure of stochastic gravity in relation to quantum gravity and point out
Viscosity kernel of molecular fluids
DEFF Research Database (Denmark)
Puscasu, Ruslan; Todd, Billy; Daivis, Peter
2010-01-01
, temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have by contrast less impact on the overall normalized shape. Functional...... forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means...
Variable Kernel Density Estimation
Terrell, George R.; Scott, David W.
1992-01-01
We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
Steerability of Hermite Kernel
Czech Academy of Sciences Publication Activity Database
Yang, Bo; Flusser, Jan; Suk, Tomáš
2013-01-01
Roč. 27, č. 4 (2013), 1354006-1-1354006-25 ISSN 0218-0014 R&D Projects: GA ČR GAP103/11/1552 Institutional support: RVO:67985556 Keywords : Hermite polynomials * Hermite kernel * steerability * adaptive filtering Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.558, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/yang-0394387. pdf
Kernel Machine SNP-set Testing under Multiple Candidate Kernels
Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.
2013-01-01
Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
Smolka, Gert
1994-01-01
Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...
7 CFR 981.408 - Inedible kernel.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...
7 CFR 981.8 - Inedible kernel.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger
2011-01-01
We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement error of certain types and can also handle non-synchronous trading. It is the first estimator...... which has these three properties which are all essential for empirical work in this area. We derive the large sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results to previous work which has used...
Clustering via Kernel Decomposition
DEFF Research Database (Denmark)
Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan
2006-01-01
Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain...... posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods as is demonstrated on a number of diverse data sets....
de Rham, Claudia
2014-01-01
We review recent progress in massive gravity. We start by showing how different theories of massive gravity emerge from a higher-dimensional theory of general relativity, leading to the Dvali–Gabadadze–Porrati model (DGP), cascading gravity, and ghost-free massive gravity. We then explore their theoretical and phenomenological consistency, proving the absence of Boulware–Deser ghosts and reviewing the Vainshtein mechanism and the cosmological solutions in these models. Finally, we present alt...
Global Polynomial Kernel Hazard Estimation
DEFF Research Database (Denmark)
Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch
2015-01-01
This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...
Atom interferometric gravity gradiometer: Disturbance compensation and mobile gradiometry
Mahadeswaraswamy, Chetan
First ever mobile gravity gradient measurement based on Atom Interferometric sensors has been demonstrated. Mobile gravity gradiometers play a significant role in high accuracy inertial navigation systems in order to distinguish inertial acceleration and acceleration due to gravity. The gravity gradiometer consists of two atom interferometric accelerometers. In each of the accelerometer an ensemble of laser cooled Cesium atoms is dropped and using counter propagating Raman pulses (pi/2-pi-pi/2) the ensemble is split into two states for carrying out atom interferometry. The interferometer phase is proportional to the specific force experienced by the atoms which is a combination of inertial acceleration and acceleration due to gravity. The difference in phase between the two atom interferometric sensors is proportional to gravity gradient if the platform does not undergo any rotational motion. However, any rotational motion of the platform induces spurious gravity gradient measurements. This apparent gravity gradient due to platform rotation is considerably different for an atom interferometric sensor compared to a conventional force rebalance type sensor. The atoms are in free fall and are not influenced by the motion of the case except at the instants of Raman pulses. A model for determining apparent gravity gradient due to rotation of platform was developed and experimentally verified for different frequencies. This transfer function measurement also lead to the development of a new technique for aligning the Raman laser beams with the atom clusters to within 20 mu rad. This gravity gradiometer is situated in a truck for the purpose of undertaking mobile surveys. A disturbance compensation system was designed and built in order to compensate for the rotational disturbances experienced on the floor of a truck. An electric drive system was also designed specifically to be able to move the truck in a uniform motion at very low speeds of about 1cm/s. A 250 x10-9 s-2
Bruemmer, David J [Idaho Falls, ID
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors, that incorporate robot attributes and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.
Mixture Density Mercer Kernels: A Method to Learn Kernels
National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...
A kernel version of spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2009-01-01
. Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel...... version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
kernel oil by lipolytic organisms
African Journals Online (AJOL)
USER
2010-08-02
Aug 2, 2010 ... Rancidity of extracted cashew oil was observed with cashew kernel stored at 70, 80 and 90% .... method of American Oil Chemist Society AOCS (1978) using glacial ..... changes occur and volatile products are formed that are.
Multivariate and semiparametric kernel regression
Härdle, Wolfgang; Müller, Marlene
1997-01-01
The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole E.
The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model......, and relates these to results of von Karmann and Horwath. But first it is shown that the gamma kernel is interpretable as a Green’s function....
Mashhoon, Bahram
2017-01-01
Relativity theory is based on a postulate of locality, which means that the past history of the observer is not directly taken into account. This book argues that the past history should be taken into account. In this way, nonlocality---in the sense of history dependence---is introduced into relativity theory. The deep connection between inertia and gravitation suggests that gravity could be nonlocal, and in nonlocal gravity the fading gravitational memory of past events must then be taken into account. Along this line of thought, a classical nonlocal generalization of Einstein's theory of gravitation has recently been developed. A significant consequence of this theory is that the nonlocal aspect of gravity appears to simulate dark matter. According to nonlocal gravity theory, what astronomers attribute to dark matter should instead be due to the nonlocality of gravitation. Nonlocality dominates on the scale of galaxies and beyond. Memory fades with time; therefore, the nonlocal aspect of gravity becomes wea...
Nutrition quality of extraction mannan residue from palm kernel cake on brolier chicken
Tafsin, M.; Hanafi, N. D.; Kejora, E.; Yusraini, E.
2018-02-01
This study aims to find out the nutrient residue of palm kernel cake from mannan extraction on broiler chicken by evaluating physical quality (specific gravity, bulk density and compacted bulk density), chemical quality (proximate analysis and Van Soest Test) and biological test (metabolizable energy). Treatment composed of T0 : palm kernel cake extracted aquadest (control), T1 : palm kernel cake extracted acetic acid (CH3COOH) 1%, T2 : palm kernel cake extracted aquadest + mannanase enzyme 100 u/l and T3 : palm kernel cake extracted acetic acid (CH3COOH) 1% + enzyme mannanase 100 u/l. The results showed that mannan extraction had significant effect (P<0.05) in improving the quality of physical and numerically increase the value of crude protein and decrease the value of NDF (Neutral Detergent Fiber). Treatments had highly significant influence (P<0.01) on the metabolizable energy value of palm kernel cake residue in broiler chickens. It can be concluded that extraction with aquadest + enzyme mannanase 100 u/l yields the best nutrient quality of palm kernel cake residue for broiler chicken.
Influence Function and Robust Variant of Kernel Canonical Correlation Analysis
Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping
2017-01-01
Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...
Kernel versions of some orthogonal transformations
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced...... by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel...... function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
Model Selection in Kernel Ridge Regression
DEFF Research Database (Denmark)
Exterkate, Peter
Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
Massive gravity from bimetric gravity
International Nuclear Information System (INIS)
Baccetti, Valentina; Martín-Moruno, Prado; Visser, Matt
2013-01-01
We discuss the subtle relationship between massive gravity and bimetric gravity, focusing particularly on the manner in which massive gravity may be viewed as a suitable limit of bimetric gravity. The limiting procedure is more delicate than currently appreciated. Specifically, this limiting procedure should not unnecessarily constrain the background metric, which must be externally specified by the theory of massive gravity itself. The fact that in bimetric theories one always has two sets of metric equations of motion continues to have an effect even in the massive gravity limit, leading to additional constraints besides the one set of equations of motion naively expected. Thus, since solutions of bimetric gravity in the limit of vanishing kinetic term are also solutions of massive gravity, but the contrary statement is not necessarily true, there is no complete continuity in the parameter space of the theory. In particular, we study the massive cosmological solutions which are continuous in the parameter space, showing that many interesting cosmologies belong to this class. (paper)
Integral equations with contrasting kernels
Directory of Open Access Journals (Sweden)
Theodore Burton
2008-01-01
Full Text Available In this paper we study integral equations of the form $x(t=a(t-\\int^t_0 C(t,sx(sds$ with sharply contrasting kernels typified by $C^*(t,s=\\ln (e+(t-s$ and $D^*(t,s=[1+(t-s]^{-1}$. The kernel assigns a weight to $x(s$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\\in L^2[0,\\infty$, then solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t$ as the sum of a bounded, but badly behaved function, and a large well behaved function.
Kernel learning algorithms for face recognition
Li, Jun-Bao; Pan, Jeng-Shyang
2013-01-01
Kernel Learning Algorithms for Face Recognition covers the framework of kernel based face recognition. This book discusses the advanced kernel learning algorithms and its application on face recognition. This book also focuses on the theoretical deviation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, and also the feasibility of the kernel based face recognition method. This book provides researchers in pattern recognition and machine learning area with advanced face recognition methods and its new
Model selection for Gaussian kernel PCA denoising
DEFF Research Database (Denmark)
Jørgensen, Kasper Winther; Hansen, Lars Kai
2012-01-01
We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA, we here augment the procedure to also...... tune the Gaussian kernel scale of radial basis function based kernel PCA.We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...
RTOS kernel in portable electrocardiograph
Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.
2011-12-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.
RTOS kernel in portable electrocardiograph
International Nuclear Information System (INIS)
Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A
2011-01-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.
Stochastic Gravity: Theory and Applications
Directory of Open Access Journals (Sweden)
Hu Bei Lok
2004-01-01
Full Text Available Whereas semiclassical gravity is based on the semiclassical Einstein equation with sources given by the expectation value of the stress-energy tensor of quantum fields, stochastic semiclassical gravity is based on the Einstein-Langevin equation, which has in addition sources due to the noise kernel. The noise kernel is the vacuum expectation value of the (operator-valued stress-energy bi-tensor which describes the fluctuations of quantum matter fields in curved spacetimes. In the first part, we describe the fundamentals of this new theory via two approaches: the axiomatic and the functional. The axiomatic approach is useful to see the structure of the theory from the framework of semiclassical gravity, showing the link from the mean value of the stress-energy tensor to their correlation functions. The functional approach uses the Feynman-Vernon influence functional and the Schwinger-Keldysh closed-time-path effective action methods which are convenient for computations. It also brings out the open systems concepts and the statistical and stochastic contents of the theory such as dissipation, fluctuations, noise, and decoherence. We then focus on the properties of the stress-energy bi-tensor. We obtain a general expression for the noise kernel of a quantum field defined at two distinct points in an arbitrary curved spacetime as products of covariant derivatives of the quantum field's Green function. In the second part, we describe three applications of stochastic gravity theory. First, we consider metric perturbations in a Minkowski spacetime. We offer an analytical solution of the Einstein-Langevin equation and compute the two-point correlation functions for the linearized Einstein tensor and for the metric perturbations. Second, we discuss structure formation from the stochastic gravity viewpoint, which can go beyond the standard treatment by incorporating the full quantum effect of the inflaton fluctuations. Third, we discuss the backreaction
Lujan, Richard E.
2001-01-01
A mechanical gravity brake that prevents hoisted loads within a shaft from free-falling when a loss of hoisting force occurs. A loss of hoist lifting force may occur in a number of situations, for example if a hoist cable were to break, the brakes were to fail on a winch, or the hoist mechanism itself were to fail. Under normal hoisting conditions, the gravity brake of the invention is subject to an upward lifting force from the hoist and a downward pulling force from a suspended load. If the lifting force should suddenly cease, the loss of differential forces on the gravity brake in free-fall is translated to extend a set of brakes against the walls of the shaft to stop the free fall descent of the gravity brake and attached load.
Directory of Open Access Journals (Sweden)
Barceló Carlos
2005-12-01
Full Text Available Analogue models of (and for gravity have a long and distinguished history dating back to the earliest years of general relativity. In this review article we will discuss the history, aims, results, and future prospects for the various analogue models. We start the discussion by presenting a particularly simple example of an analogue model, before exploring the rich history and complex tapestry of models discussed in the literature. The last decade in particular has seen a remarkable and sustained development of analogue gravity ideas, leading to some hundreds of published articles, a workshop, two books, and this review article. Future prospects for the analogue gravity programme also look promising, both on the experimental front (where technology is rapidly advancing and on the theoretical front (where variants of analogue models can be used as a springboard for radical attacks on the problem of quantum gravity.
DEFF Research Database (Denmark)
Walder, Christian; Henao, Ricardo; Mørup, Morten
We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within class variances similar to Fisher discriminant analysis. The second, LSKPCA is a hybrid of least...... squares regression and kernel PCA. The final LR-KPCA is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets....
Alvarez, Enrique
2004-01-01
Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter; it does not appear that gravitons are compatible with Swartzchild's spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity; the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter; and because of the inter-connectedness of space, cause the...
Model selection in kernel ridge regression
DEFF Research Database (Denmark)
Exterkate, Peter
2013-01-01
Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts....... The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties......, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...
Multiple Kernel Learning with Data Augmentation
2016-11-22
JMLR: Workshop and Conference Proceedings 63:49–64, 2016 ACML 2016 Multiple Kernel Learning with Data Augmentation Khanh Nguyen nkhanh@deakin.edu.au...University, Australia Editors: Robert J. Durrant and Kee-Eung Kim Abstract The motivations of multiple kernel learning (MKL) approach are to increase... kernel expres- siveness capacity and to avoid the expensive grid search over a wide spectrum of kernels . A large amount of work has been proposed to
A kernel version of multivariate alteration detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2013-01-01
Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.......Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations....
Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.
2012-01-01
The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an
Complex use of cottonseed kernels
Energy Technology Data Exchange (ETDEWEB)
Glushenkova, A I
1977-01-01
A review with 41 references is made on the manufacture of oil, protein, and other products from cottonseed, the effects of gossypol on protein yield and quality and technology of gossypol removal. A process eliminating thermal treatment of the kernels and permitting the production of oil, proteins, phytin, gossypol, sugar, sterols, phosphatides, tocopherols, and residual shells and baggase is described.
Kernel regression with functional response
Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe
2011-01-01
We consider kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost complete convergence are stated as function of the small ball probability of the predictor and as function of the entropy of the set on which uniformity is obtained.
GRIM : Leveraging GPUs for Kernel integrity monitoring
Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris
2016-01-01
Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious
Paramecium: An Extensible Object-Based Kernel
van Doorn, L.; Homburg, P.; Tanenbaum, A.S.
1995-01-01
In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection
Local Observed-Score Kernel Equating
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
Veto-Consensus Multiple Kernel Learning
Zhou, Y.; Hu, N.; Spanos, C.J.
2016-01-01
We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The
Directory of Open Access Journals (Sweden)
Senyue Zhang
2016-01-01
Full Text Available According to the characteristics that the kernel function of extreme learning machine (ELM and its performance have a strong correlation, a novel extreme learning machine based on a generalized triangle Hermitian kernel function was proposed in this paper. First, the generalized triangle Hermitian kernel function was constructed by using the product of triangular kernel and generalized Hermite Dirichlet kernel, and the proposed kernel function was proved as a valid kernel function of extreme learning machine. Then, the learning methodology of the extreme learning machine based on the proposed kernel function was presented. The biggest advantage of the proposed kernel is its kernel parameter values only chosen in the natural numbers, which thus can greatly shorten the computational time of parameter optimization and retain more of its sample data structure information. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experiment results demonstrated that the robustness and generalization performance of the proposed method are outperformed compared to other extreme learning machines with different kernels. Furthermore, the learning speed of proposed method is faster than support vector machine (SVM methods.
Directory of Open Access Journals (Sweden)
Carlos Barceló
2011-05-01
Full Text Available Analogue gravity is a research programme which investigates analogues of general relativistic gravitational fields within other physical systems, typically but not exclusively condensed matter systems, with the aim of gaining new insights into their corresponding problems. Analogue models of (and for gravity have a long and distinguished history dating back to the earliest years of general relativity. In this review article we will discuss the history, aims, results, and future prospects for the various analogue models. We start the discussion by presenting a particularly simple example of an analogue model, before exploring the rich history and complex tapestry of models discussed in the literature. The last decade in particular has seen a remarkable and sustained development of analogue gravity ideas, leading to some hundreds of published articles, a workshop, two books, and this review article. Future prospects for the analogue gravity programme also look promising, both on the experimental front (where technology is rapidly advancing and on the theoretical front (where variants of analogue models can be used as a springboard for radical attacks on the problem of quantum gravity.
Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan
2018-05-01
With an interest to enhance the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and simple sugar profile was estimated by using partial least square regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from that of control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.
Wigner functions defined with Laplace transform kernels.
Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George
2011-10-24
We propose a new Wigner-type phase-space function using Laplace transform kernels--Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polariton. © 2011 Optical Society of America
International Nuclear Information System (INIS)
Giribet, G E
2005-01-01
Claus Kiefer presents his book, Quantum Gravity, with his hope that '[the] book will convince readers of [the] outstanding problem [of unification and quantum gravity] and encourage them to work on its solution'. With this aim, the author presents a clear exposition of the fundamental concepts of gravity and the steps towards the understanding of its quantum aspects. The main part of the text is dedicated to the analysis of standard topics in the formulation of general relativity. An analysis of the Hamiltonian formulation of general relativity and the canonical quantization of gravity is performed in detail. Chapters four, five and eight provide a pedagogical introduction to the basic concepts of gravitational physics. In particular, aspects such as the quantization of constrained systems, the role played by the quadratic constraint, the ADM decomposition, the Wheeler-de Witt equation and the problem of time are treated in an expert and concise way. Moreover, other specific topics, such as the minisuperspace approach and the feasibility of defining extrinsic times for certain models, are discussed as well. The ninth chapter of the book is dedicated to the quantum gravitational aspects of string theory. Here, a minimalistic but clear introduction to string theory is presented, and this is actually done with emphasis on gravity. It is worth mentioning that no hard (nor explicit) computations are presented, even though the exposition covers the main features of the topic. For instance, black hole statistical physics (within the framework of string theory) is developed in a pedagogical and concise way by means of heuristical arguments. As the author asserts in the epilogue, the hope of the book is to give 'some impressions from progress' made in the study of quantum gravity since its beginning, i.e., since the end of 1920s. In my opinion, Kiefer's book does actually achieve this goal and gives an extensive review of the subject. (book review)
Credit scoring analysis using kernel discriminant
Widiharih, T.; Mukid, M. A.; Mustafid
2018-05-01
Credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigate the performance of kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non- parametric method which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of Fisher discriminant. We use several kernel such as normal, epanechnikov, biweight, and triweight. The models accuracy was compared each other using data from a financial institution in Indonesia. The results show that kernel discriminant can be an alternative method that can be used to determine who is eligible for a credit loan. In the data we use, it shows that a normal kernel is relevant to be selected for credit scoring using kernel discriminant model. Sensitivity and specificity reach to 0.5556 and 0.5488 respectively.
Testing Infrastructure for Operating System Kernel Development
DEFF Research Database (Denmark)
Walter, Maxwell; Karlsson, Sven
2014-01-01
Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi......-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel...... and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for for the development of our own multi-core research kernel....
Kernel parameter dependence in spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2010-01-01
kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional...... feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence...... of the kernel width. The 2,097 samples each covering on average 5 km2 are analyzed chemically for the content of 41 elements....
Pipinos, Savas
2010-01-01
This article describes one classroom activity in which the author simulates the Newtonian gravity, and employs the Euclidean Geometry with the use of new technologies (NT). The prerequisites for this activity were some knowledge of the formulae for a particle free fall in Physics and most certainly, a good understanding of the notion of similarity…
F.C. Gruau; J.T. Tromp (John)
1999-01-01
textabstractWe consider the problem of establishing gravity in cellular automata. In particular, when cellular automata states can be partitioned into empty, particle, and wall types, with the latter enclosing rectangular areas, we desire rules that will make the particles fall down and pile up on
Validation of Born Traveltime Kernels
Baig, A. M.; Dahlen, F. A.; Hung, S.
2001-12-01
Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first order dependence of heterogeneity on the cross-correlated traveltimes (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of a more general interest and we will also show our measurements of the velocity shift and the variance of traveltime compare to various theoretical predictions in a given regime.
Separation of oil palm shell and kernel by using kaolinite media
Directory of Open Access Journals (Sweden)
Sukpong Sirinupong
2003-05-01
Full Text Available The objective of this research is to investigate the possibility of using kaolinite from Ranong province as media in the oil palm shell and kernel separation process by means of heavy media separation. The effect of specific gravity of the slurry, type and amount of dispersant and type of clays on suspension of media and efficiency of separation were studied. It was found that the specific gravity of oil palm shell and kernel are 1.40 and 1.20 respectively. While the average specific gravity of kaolinite grade MRD-B85, RANONG-325 and commercial clay from Univanich Group. PCL., are 2.54, 2.65 and 2.46 respectively. It was apparent that the viscosity of clay slurry increased with the specific gravity of the slurry. For MRD-B85 and RANONG- 325 clays which have the average particle sizes of 10 and 12 microns, the pH of their slurries of about 5.84 and 6.33 respectively were obtained and at these conditions stability of the slurry rarely occurred and they could not be used for separation. However, these clays can also be utilized as media when dispersant such asCalgon or sodium silicate is applied to their slurries. It was found that the efficiency of separation depends on specific gravity and viscosity of the slurry, type and particle size of kaolinite and dosage of dispersant. The optimum separating conditions for MRD-B85 clay were the dosage of Calgon of 0.15% (or 1.5 kg/t of clay at the specific gravity of the slurry of 1.20-1.24 (27-32% Solids in which a pH of 6.14 and viscosity of 104 cP to very low value (could not be measured were obtained. Thus, kernel yielded 97.57-100% and shell contamination of 1.48-6.32% was achieved. While sodium silicate was applied to the slurry about 0.15% at the specific gravity of 1.22, pH of 6.74 and viscosity of 238 cP were obtained and kernel could be recovered 100% with shell contamination of 8.36%. When 0.15% Calgon or 0.25% sodium silicate was introduced to the RANONG-325 clay slurry at the specific gravity
RKRD: Runtime Kernel Rootkit Detection
Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.
In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infra-structure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the per-formance impact of our solution is negligible.
Kernel Bayesian ART and ARTMAP.
Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan
2018-02-01
Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model called ARTMAP is a powerful tool for classification. Among several improvements, such as Fuzzy or Gaussian based models, the state of art model is Bayesian based one, while solving the drawbacks of others. However, it is known that the Bayesian approach for the high dimensional and a large number of data requires high computational cost, and the covariance matrix in likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating Kernel Bayes' Rule (KBR) and Correntropy Induced Metric (CIM) to Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM are able to avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides the efficient and stable computational capability to KBA and KBAM. Furthermore, Correntropy-based similarity measurement allows improving the noise reduction ability even in the high dimensional space. The simulation experiments show that KBA performs an outstanding self-organizing capability than BA, and KBAM provides the superior classification ability than BAM, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
Feynman propagator for spin foam quantum gravity.
Oriti, Daniele
2005-03-25
We link the notion causality with the orientation of the spin foam 2-complex. We show that all current spin foam models are orientation independent. Using the technology of evolution kernels for quantum fields on Lie groups, we construct a generalized version of spin foam models, introducing an extra proper time variable. We prove that different ranges of integration for this variable lead to different classes of spin foam models: the usual ones, interpreted as the quantum gravity analogue of the Hadamard function of quantum field theory (QFT) or as inner products between quantum gravity states; and a new class of causal models, the quantum gravity analogue of the Feynman propagator in QFT, nontrivial function of the orientation data, and implying a notion of "timeless ordering".
International Nuclear Information System (INIS)
Isham, C.
1989-01-01
Gravitational effects are seen as arising from a curvature in spacetime. This must be reconciled with gravity's apparently passive role in quantum theory to achieve a satisfactory quantum theory of gravity. The development of grand unified theories has spurred the search, with forces being of equal strength at a unification energy of 10 15 - 10 18 GeV, with the ''Plank length'', Lp ≅ 10 -35 m. Fundamental principles of general relativity and quantum mechanics are outlined. Gravitons are shown to have spin-0, as mediators of gravitation force in the classical sense or spin-2 which are related to the quantisation of general relativity. Applying the ideas of supersymmetry to gravitation implies partners for the graviton, especially the massless spin 3/2 fermion called a gravitino. The concept of supersymmetric strings is introduced and discussed. (U.K.)
International Nuclear Information System (INIS)
Markov, M.A.; West, P.C.
1984-01-01
This book discusses the state of the art of quantum gravity, quantum effects in cosmology, quantum black-hole physics, recent developments in supergravity, and quantum gauge theories. Topics considered include the problems of general relativity, pregeometry, complete cosmological theories, quantum fluctuations in cosmology and galaxy formation, a new inflationary universe scenario, grand unified phase transitions and the early Universe, the generalized second law of thermodynamics, vacuum polarization near black holes, the relativity of vacuum, black hole evaporations and their cosmological consequences, currents in supersymmetric theories, the Kaluza-Klein theories, gauge algebra and quantization, and twistor theory. This volume constitutes the proceedings of the Second Seminar on Quantum Gravity held in Moscow in 1981
Theory of reproducing kernels and applications
Saitoh, Saburou
2016-01-01
This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...
A Transportable Gravity Gradiometer Based on Atom Interferometry
Yu, Nan; Thompson, Robert J.; Kellogg, James R.; Aveline, David C.; Maleki, Lute; Kohel, James M.
2010-01-01
A transportable atom interferometer-based gravity gradiometer has been developed at JPL to carry out measurements of Earth's gravity field at ever finer spatial resolutions, and to facilitate high-resolution monitoring of temporal variations in the gravity field from ground- and flight-based platforms. Existing satellite-based gravity missions such as CHAMP and GRACE measure the gravity field via precise monitoring of the motion of the satellites; i.e. the satellites themselves function as test masses. JPL's quantum gravity gradiometer employs a quantum phase measurement technique, similar to that employed in atomic clocks, made possible by recent advances in laser cooling and manipulation of atoms. This measurement technique is based on atomwave interferometry, and individual laser-cooled atoms are used as drag-free test masses. The quantum gravity gradiometer employs two identical atom interferometers as precision accelerometers to measure the difference in gravitational acceleration between two points (Figure 1). By using the same lasers for the manipulation of atoms in both interferometers, the accelerometers have a common reference frame and non-inertial accelerations are effectively rejected as common mode noise in the differential measurement of the gravity gradient. As a result, the dual atom interferometer-based gravity gradiometer allows gravity measurements on a moving platform, while achieving the same long-term stability of the best atomic clocks. In the laboratory-based prototype (Figure 2), the cesium atoms used in each atom interferometer are initially collected and cooled in two separate magneto-optic traps (MOTs). Each MOT, consisting of three orthogonal pairs of counter-propagating laser beams centered on a quadrupole magnetic field, collects up to 10(exp 9) atoms. These atoms are then launched vertically as in an atom fountain by switching off the magnetic field and introducing a slight frequency shift between pairs of lasers to create a moving
Convergence of barycentric coordinates to barycentric kernels
Kosinka, Jiří
2016-02-12
We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.
Convergence of barycentric coordinates to barycentric kernels
Kosinka, Jiří
2016-01-01
We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.
Kernel principal component analysis for change detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Morton, J.C.
2008-01-01
region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA...... with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially....
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning
Is nonrelativistic gravity possible?
International Nuclear Information System (INIS)
Kocharyan, A. A.
2009-01-01
We study nonrelativistic gravity using the Hamiltonian formalism. For the dynamics of general relativity (relativistic gravity) the formalism is well known and called the Arnowitt-Deser-Misner (ADM) formalism. We show that if the lapse function is constrained correctly, then nonrelativistic gravity is described by a consistent Hamiltonian system. Surprisingly, nonrelativistic gravity can have solutions identical to relativistic gravity ones. In particular, (anti-)de Sitter black holes of Einstein gravity and IR limit of Horava gravity are locally identical.
Process for producing metal oxide kernels and kernels so obtained
International Nuclear Information System (INIS)
Lelievre, Bernard; Feugier, Andre.
1974-01-01
The process desbribed is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high temperature nuclear reactors. This process consists in adding to an aqueous solution of at least one metallic salt, particularly actinide nitrates, at least one chemical compound capable of releasing ammonia, in dispersing drop by drop the solution thus obtained into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gel reaction is formed of a mixture composed of two organic liquids, one acting as solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably an amine is used as product capable of extracting the anions. Additionally, an alcohol that causes a part dehydration of the drops can be employed as solvent, thus helping to increase the resistance of the particles [fr
Hilbertian kernels and spline functions
Atteia, M
1992-01-01
In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline functions theory. The origin of the book was an effort to show that spline theory parallels Hilbertian Kernel theory, not only for splines derived from minimization of a quadratic functional but more generally for splines considered as piecewise functions type. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.
International Nuclear Information System (INIS)
Schupp, P.
2007-01-01
Heuristic arguments suggest that the classical picture of smooth commutative spacetime should be replaced by some kind of quantum / noncommutative geometry at length scales and energies where quantum as well as gravitational effects are important. Motivated by this idea much research has been devoted to the study of quantum field theory on noncommutative spacetimes. More recently the focus has started to shift back to gravity in this context. We give an introductory overview to the formulation of general relativity in a noncommutative spacetime background and discuss the possibility of exact solutions. (author)
Dense Medium Machine Processing Method for Palm Kernel/ Shell ...
African Journals Online (AJOL)
ADOWIE PERE
Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.
Mitigation of artifacts in rtm with migration kernel decomposition
Zhan, Ge; Schuster, Gerard T.
2012-01-01
The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently
Ranking Support Vector Machine with Kernel Approximation
Directory of Open Access Journals (Sweden)
Kai Chen
2017-01-01
Full Text Available Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM is one of the state-of-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels can give higher accuracy than linear RankSVM (RankSVM with a linear kernel for complex nonlinear ranking problem. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
Ranking Support Vector Machine with Kernel Approximation.
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problem. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
Sentiment classification with interpolated information diffusion kernels
Raaijmakers, S.
2007-01-01
Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of
Evolution kernel for the Dirac field
International Nuclear Information System (INIS)
Baaquie, B.E.
1982-06-01
The evolution kernel for the free Dirac field is calculated using the Wilson lattice fermions. We discuss the difficulties due to which this calculation has not been previously performed in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)
Panel data specifications in nonparametric kernel regression
DEFF Research Database (Denmark)
Czekaj, Tomasz Gerard; Henningsen, Arne
parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...
Improving the Bandwidth Selection in Kernel Equating
Andersson, Björn; von Davier, Alina A.
2014-01-01
We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
Kernel Korner : The Linux keyboard driver
Brouwer, A.E.
1995-01-01
Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the
International Nuclear Information System (INIS)
Hooft, G.
2012-01-01
The dynamical degree of freedom for the gravitational force is the metric tensor, having 10 locally independent degrees of freedom (of which 4 can be used to fix the coordinate choice). In conformal gravity, we split this field into an overall scalar factor and a nine-component remainder. All unrenormalizable infinities are in this remainder, while the scalar component can be handled like any other scalar field such as the Higgs field. In this formalism, conformal symmetry is spontaneously broken. An imperative demand on any healthy quantum gravity theory is that black holes should be described as quantum systems with micro-states as dictated by the Hawking-Bekenstein theory. This requires conformal symmetry that may be broken spontaneously but not explicitly, and this means that all conformal anomalies must cancel out. Cancellation of conformal anomalies yields constraints on the matter sector as described by some universal field theory. Thus black hole physics may eventually be of help in the construction of unified field theories. (author)
Metabolic network prediction through pairwise rational kernels.
Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian
2014-09-26
Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted by incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pair of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amount of sequence information such as protein essentiality, natural language processing and machine translations. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernel (PRK)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVM to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy
National Oceanic and Atmospheric Administration, Department of Commerce — This data base (14,559 records) was received in January 1986. Principal gravity parameters include elevation and observed gravity. The observed gravity values are...
National Oceanic and Atmospheric Administration, Department of Commerce — The NGS Absolute Gravity data (78 stations) was received in July 1993. Principal gravity parameters include Gravity Value, Uncertainty, and Vertical Gradient. The...
Bayesian Kernel Mixtures for Counts.
Canale, Antonio; Dunson, David B
2011-12-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
Exact RG flow equations and quantum gravity
de Alwis, S. P.
2018-03-01
We discuss the different forms of the functional RG equation and their relation to each other. In particular we suggest a generalized background field version that is close in spirit to the Polchinski equation as an alternative to the Wetterich equation to study Weinberg's asymptotic safety program for defining quantum gravity, and argue that the former is better suited for this purpose. Using the heat kernel expansion and proper time regularization we find evidence in support of this program in agreement with previous work.
Newtonian gravity in loop quantum gravity
Smolin, Lee
2010-01-01
We apply a recent argument of Verlinde to loop quantum gravity, to conclude that Newton's law of gravity emerges in an appropriate limit and setting. This is possible because the relationship between area and entropy is realized in loop quantum gravity when boundaries are imposed on a quantum spacetime.
Putting Priors in Mixture Density Mercer Kernels
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel function directly from data, rather than using predefined kernels. These data adaptive kernels can en- code prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains template for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic- algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer-Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low- redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
Anisotropic hydrodynamics with a scalar collisional kernel
Almaalol, Dekrayat; Strickland, Michael
2018-04-01
Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogenous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading order 2 ↔2 scattering kernel in scalar λ ϕ4 . We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation-times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.
Evaluation of palm kernel fibers (PKFs for production of asbestos-free automotive brake pads
Directory of Open Access Journals (Sweden)
K.K. Ikpambese
2016-01-01
Full Text Available In this study, asbestos-free automotive brake pads produced from palm kernel fibers with epoxy-resin binder was evaluated. Resins varied in formulations and properties such as friction coefficient, wear rate, hardness test, porosity, noise level, temperature, specific gravity, stopping time, moisture effects, surface roughness, oil and water absorptions rates, and microstructure examination were investigated. Other basic engineering properties of mechanical overload, thermal deformation fading behaviour shear strength, cracking resistance, over-heat recovery, and effect on rotor disc, caliper pressure, pad grip effect and pad dusting effect were also investigated. The results obtained indicated that the wear rate, coefficient of friction, noise level, temperature, and stopping time of the produced brake pads increased as the speed increases. The results also show that porosity, hardness, moisture content, specific gravity, surface roughness, and oil and water absorption rates remained constant with increase in speed. The result of microstructure examination revealed that worm surfaces were characterized by abrasion wear where the asperities were ploughed thereby exposing the white region of palm kernel fibers, thus increasing the smoothness of the friction materials. Sample S6 with composition of 40% epoxy-resin, 10% palm wastes, 6% Al2O3, 29% graphite, and 15% calcium carbonate gave better properties. The result indicated that palm kernel fibers can be effectively used as a replacement for asbestos in brake pad production.
Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm
African Journals Online (AJOL)
In this paper, we shall use higher-order hybrid Gaussian kernel in a meshsize boosting algorithm in kernel density estimation. Bias reduction is guaranteed in this scheme like other existing schemes but uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...
NLO corrections to the Kernel of the BKP-equations
Energy Technology Data Exchange (ETDEWEB)
Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)
2012-10-02
We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3{yields}3 kernel, computed in the tree approximation.
Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...
African Journals Online (AJOL)
This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...
African Journals Online (AJOL)
This paper proposes the use of adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
Kernel maximum autocorrelation factor and minimum noise fraction transformations
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2010-01-01
in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...
7 CFR 51.2296 - Three-fourths half kernel.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...
7 CFR 981.401 - Adjusted kernel weight.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...
7 CFR 51.1403 - Kernel color classification.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...
The Linux kernel as flexible product-line architecture
M. de Jonge (Merijn)
2002-01-01
textabstractThe Linux kernel source tree is huge ($>$ 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users then, can select what
Digital signal processing with kernel methods
Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo
2018-01-01
A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...
Parsimonious Wavelet Kernel Extreme Learning Machine
Directory of Open Access Journals (Sweden)
Wang Qin
2015-11-01
Full Text Available In this study, a parsimonious scheme for wavelet kernel extreme learning machine (named PWKELM was introduced by combining wavelet theory and a parsimonious algorithm into kernel extreme learning machine (KELM. In the wavelet analysis, bases that were localized in time and frequency to represent various signals effectively were used. Wavelet kernel extreme learning machine (WELM maximized its capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm also incorporated significant wavelet kernel functions via iteration in virtue of Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results achieved from the synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real time performance.
Ensemble Approach to Building Mercer Kernels
National Aeronautics and Space Administration — This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive...
Byrne, Michael
1999-01-01
Einstein said that gravity is an acceleration like any other acceleration. But gravity causes relativistic effects at non-relativistic speeds; so gravity could have relativistic origins. And since the strong force is thought to cause most of mass, and mass is proportional to gravity; the strong force is therefore also proportional to gravity. The strong force could thus cause relativistic increases of mass through the creation of virtual gluons; along with a comparable contraction of space ar...
Control Transfer in Operating System Kernels
1994-05-13
microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the...review how I modified the Mach 3.0 kernel to use continuations. Because of Mach’s message-passing microkernel structure, interprocess communication was...critical control transfer paths, deeply- nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating
Uranium kernel formation via internal gelation
International Nuclear Information System (INIS)
Hunt, R.D.; Collins, J.L.
2004-01-01
In the 1970s and 1980s, U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tri-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation as well as small changes to the feed composition increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)
Quantum tomography, phase-space observables and generalized Markov kernels
International Nuclear Information System (INIS)
Pellonpaeae, Juha-Pekka
2009-01-01
We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.
Sitompul, Monica Angelina
2015-01-01
Have been conducted Determination of Iodin Value by method titration to some Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO). The result of analysis obtained the Iodin Value in Hydrogenated Palm Kernel Oil (A) = 0,16 gr I2/100gr, Hydrogenated Palm Kernel Oil (B) = 0,20 gr I2/100gr, Hydrogenated Palm Kernel Oil (C) = 0,24 gr I2/100gr. And in Refined Bleached Deodorized Palm Kernel Oil (A) = 17,51 gr I2/100gr, Refined Bleached Deodorized Palm Kernel ...
Chiral gravity, log gravity, and extremal CFT
International Nuclear Information System (INIS)
Maloney, Alexander; Song Wei; Strominger, Andrew
2010-01-01
We show that the linearization of all exact solutions of classical chiral gravity around the AdS 3 vacuum have positive energy. Nonchiral and negative-energy solutions of the linearized equations are infrared divergent at second order, and so are removed from the spectrum. In other words, chirality is confined and the equations of motion have linearization instabilities. We prove that the only stationary, axially symmetric solutions of chiral gravity are BTZ black holes, which have positive energy. It is further shown that classical log gravity--the theory with logarithmically relaxed boundary conditions--has finite asymptotic symmetry generators but is not chiral and hence may be dual at the quantum level to a logarithmic conformal field theories (CFT). Moreover we show that log gravity contains chiral gravity within it as a decoupled charge superselection sector. We formally evaluate the Euclidean sum over geometries of chiral gravity and show that it gives precisely the holomorphic extremal CFT partition function. The modular invariance and integrality of the expansion coefficients of this partition function are consistent with the existence of an exact quantum theory of chiral gravity. We argue that the problem of quantizing chiral gravity is the holographic dual of the problem of constructing an extremal CFT, while quantizing log gravity is dual to the problem of constructing a logarithmic extremal CFT.
Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM
Directory of Open Access Journals (Sweden)
Chenchao Zhao
2018-01-01
Full Text Available Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
Jourde, K.; Gibert, D.; Marteau, J.
2015-08-01
This paper examines how the resolution of small-scale geological density models is improved through the fusion of information provided by gravity measurements and density muon radiographies. Muon radiography aims at determining the density of geological bodies by measuring their screening effect on the natural flux of cosmic muons. Muon radiography essentially works like a medical X-ray scan and integrates density information along elongated narrow conical volumes. Gravity measurements are linked to density by a 3-D integration encompassing the whole studied domain. We establish the mathematical expressions of these integration formulas - called acquisition kernels - and derive the resolving kernels that are spatial filters relating the true unknown density structure to the density distribution actually recovered from the available data. The resolving kernel approach allows one to quantitatively describe the improvement of the resolution of the density models achieved by merging gravity data and muon radiographies. The method developed in this paper may be used to optimally design the geometry of the field measurements to be performed in order to obtain a given spatial resolution pattern of the density model to be constructed. The resolving kernels derived in the joined muon-gravimetry case indicate that gravity data are almost useless for constraining the density structure in regions sampled by more than two muon tomography acquisitions. Interestingly, the resolution in deeper regions not sampled by muon tomography is significantly improved by joining the two techniques. The method is illustrated with examples for the La Soufrière volcano of Guadeloupe.
Aflatoxin contamination of developing corn kernels.
Amer, M A
2005-01-01
Preharvest of corn and its contamination with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. Stage of growth and location of kernels on corn ears were found to be one of the important factors in the process of kernel infection with A. flavus & A. parasiticus. The results showed positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein and total nitrogen contents. Total and reducing soluble sugar was increase in corn kernels as response to infection. Sucrose and protein content were reduced in case of both pathogens. Shoot system length, seeding fresh weigh and seedling dry weigh was also affected. Both pathogens induced reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected. Their leaves became yellow then, turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed pronounced decrease. On the other hand, total phenolic compounds were increased. Histopathological studies indicated that A. flavus & A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores was occurred and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia had formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus & A. parasiticus and aflatoxin production.
Analog forecasting with dynamics-adapted kernels
Zhao, Zhizhen; Giannakis, Dimitrios
2016-09-01
Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
International Nuclear Information System (INIS)
Schoutens, K.; van Nieuwenhuizen, P.; State Univ. of New York, Stony Brook, NY
1991-11-01
We briefly review some results in the theory of quantum W 3 gravity in the chiral gauge. We compare them with similar results in the analogous but simpler cases of d = 2 induced gauge theories and d = 2 induced gravity
... medlineplus.gov/ency/article/003587.htm Urine specific gravity test To use the sharing features on this page, please enable JavaScript. Urine specific gravity is a laboratory test that shows the concentration ...
Cadiz, California Gravity Data
National Oceanic and Atmospheric Administration, Department of Commerce — The gravity station data (32 records) were gathered by Mr. Seth I. Gutman for AridTech Inc., Denver, Colorado using a Worden Prospector gravity meter. This data base...
National Oceanic and Atmospheric Administration, Department of Commerce — The Central Andes gravity data (6,151 records) were compiled by Professor Gotze and the MIGRA Group. This data base was received in April, 1997. Principal gravity...
National Oceanic and Atmospheric Administration, Department of Commerce — The Decade of North American Geology (DNAG) gravity grid values, spaced at 6 km, were used to produce the Gravity Anomaly Map of North America (1987; scale...
International Nuclear Information System (INIS)
Pinheiro, R.
1979-01-01
The properties and production of gravitational radiation are described. The prospects for their detection are considered including the Weber apparatus and gravity-wave telescopes. Possibilities of gravity-wave astronomy are noted
Northern Oklahoma Gravity Data
National Oceanic and Atmospheric Administration, Department of Commerce — The gravity station data (710 records) were compiled by Professor Ahern. This data base was received in June 1992. Principal gravity parameters include latitude,...
National Oceanic and Atmospheric Administration, Department of Commerce — The gravity station data (24,284 records) were compiled by the U. S. Geological Survey. This data base was received on February 23, 1993. Principal gravity...
OS X and iOS Kernel Programming
Halvorsen, Ole Henry
2011-01-01
OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i
The Classification of Diabetes Mellitus Using Kernel k-means
Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.
2018-01-01
Diabetes Mellitus is a metabolic disorder which is characterized by chronicle hypertensive glucose. Automatics detection of diabetes mellitus is still challenging. This study detected diabetes mellitus by using kernel k-Means algorithm. Kernel k-means is an algorithm which was developed from k-means algorithm. Kernel k-means used kernel learning that is able to handle non linear separable data; where it differs with a common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with SOM algorithms. The experiment result shows that kernel k-means has good performance and a way much better than SOM.
Object classification and detection with context kernel descriptors
DEFF Research Database (Denmark)
Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping
2014-01-01
Context information is important in object representation. By embedding context cue of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial...... consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...
International Nuclear Information System (INIS)
Vega, H.J. de
1990-01-01
One of the main challenges in theoretical physics today is the unification of all interactions including gravity. At present, string theories appear as the most promising candidates to achieve such a unification. However, gravity has not completely been incorporated in string theory, many technical and conceptual problems remain and a full quantum theory of gravity is still non-existent. Our aim is to properly understand strings in the context of quantum gravity. Attempts towards this are reviewed. (author)
International Nuclear Information System (INIS)
La, H.
1992-01-01
A new geometric formulation of Liouville gravity based on the area preserving diffeo-morphism is given and a possible alternative to reinterpret Liouville gravity is suggested, namely, a scalar field coupled to two-dimensional gravity with a curvature constraint
Bergshoeff, E.; Pope, C.N.; Stelle, K.S.
1990-01-01
We discuss the notion of higher-spin covariance in w∞ gravity. We show how a recently proposed covariant w∞ gravity action can be obtained from non-chiral w∞ gravity by making field redefinitions that introduce new gauge-field components with corresponding new gauge transformations.
Induced quantum conformal gravity
International Nuclear Information System (INIS)
Novozhilov, Y.V.; Vassilevich, D.V.
1988-11-01
Quantum gravity is considered as induced by matter degrees of freedom and related to the symmetry breakdown in the low energy region of a non-Abelian gauge theory of fundamental fields. An effective action for quantum conformal gravity is derived where both the gravitational constant and conformal kinetic term are positive. Relation with induced classical gravity is established. (author). 15 refs
Amelino-Camelia, Giovanni
2003-01-01
Comment: 9 pages, LaTex. These notes were prepared while working on an invited contribution to the November 2003 issue of Physics World, which focused on quantum gravity. They intend to give a non-technical introduction (accessible to readers from outside quantum gravity) to "Quantum Gravity Phenomenology"
MacKeown, P. K.
1984-01-01
Clarifies two concepts of gravity--those of a fictitious force and those of how space and time may have geometry. Reviews the position of Newton's theory of gravity in the context of special relativity and considers why gravity (as distinct from electromagnetics) lends itself to Einstein's revolutionary interpretation. (JN)
Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu
2017-12-15
Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on nonlinear kernel trick, which can be novelly used to treat high-dimensional and complex biological data before undergoing classification processes such as protein subcellular localization. Kernel parameters make a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, to select the scale parameter is still a challenging problem. Thus, this paper introduces the KDA method and proposes a new method for Gaussian kernel parameter selection depending on the fact that the differences between reconstruction errors of edge normal samples and those of interior normal samples should be maximized for certain suitable kernel parameters. Experiments with various standard data sets of protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on the efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.
Kernel abortion in maize. II. Distribution of 14C among kernel carboydrates
International Nuclear Information System (INIS)
Hanft, J.M.; Jones, R.J.
1986-01-01
This study was designed to compare the uptake and distribution of 14 C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 309 and 35 0 C were transferred to [ 14 C]sucrose media 10 days after pollination. Kernels cultured at 35 0 C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on atlageled media. After 8 days in culture on [ 14 C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35 0 C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35 0 C compared to kernels cultured at 30 0 C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35 0 C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30 0 C (89%). Kernels cultured at 35 0 C had a correspondingly higher proportion of 14 C in endosperm fructose, glucose, and sucrose
Fluidization calculation on nuclear fuel kernel coating
International Nuclear Information System (INIS)
Sukarsono; Wardaya; Indra-Suryawan
1996-01-01
The fluidization of nuclear fuel kernel coating was calculated. The bottom of the reactor was in the from of cone on top of the cone there was a cylinder, the diameter of the cylinder for fluidization was 2 cm and at the upper part of the cylinder was 3 cm. Fluidization took place in the cone and the first cylinder. The maximum and the minimum velocity of the gas of varied kernel diameter, the porosity and bed height of varied stream gas velocity were calculated. The calculation was done by basic program
Reduced multiple empirical kernel learning machine.
Wang, Zhe; Lu, MingZhe; Gao, Daqi
2015-02-01
Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL would get a high time and space complexity in contrast to single kernel learning, which is not expected in real-world applications. Meanwhile, it is known that the kernel mapping ways of MKL generally have two forms including implicit kernel mapping and empirical kernel mapping (EKM), where the latter is less attracted. In this paper, we focus on the MKL with the EKM, and propose a reduced multiple empirical kernel learning machine named RMEKLM for short. To the best of our knowledge, it is the first to reduce both time and space complexity of the MKL with EKM. Different from the existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, which is validated that doing so does not lose much information of the original feature space. Then RMEKLM adopts the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of the geometry structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings a simpler computation and meanwhile needs a less storage space, especially in the processing of testing. Finally, the experimental results show that RMEKLM owns a much efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3
Comparative Analysis of Kernel Methods for Statistical Shape Learning
National Research Council Canada - National Science Library
Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen
2006-01-01
.... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...
Variable kernel density estimation in high-dimensional feature spaces
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2017-02-01
Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
Influence of differently processed mango seed kernel meal on ...
African Journals Online (AJOL)
Influence of differently processed mango seed kernel meal on performance response of west African ... and TD( consisted spear grass and parboiled mango seed kernel meal with concentrate diet in a ratio of 35:30:35). ... HOW TO USE AJOL.
On methods to increase the security of the Linux kernel
International Nuclear Information System (INIS)
Matvejchikov, I.V.
2014-01-01
Methods to increase the security of the Linux kernel for the implementation of imposed protection tools have been examined. The methods of incorporation into various subsystems of the kernel on the x86 architecture have been described [ru
Linear and kernel methods for multi- and hypervariate change detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Canty, Morton J.
2010-01-01
. Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual...... formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution......, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...
Kernel methods in orthogonalization of multi- and hypervariate data
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2009-01-01
A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis...... via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution also known as the kernel trick these inner products between the mappings...... are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...
International Nuclear Information System (INIS)
Dvali, Gia; Kolanovic, Marko; Nitti, Francesco; Gabadadze, Gregory
2002-01-01
We propose a framework in which the quantum gravity scale can be as low as 10 -3 eV. The key assumption is that the standard model ultraviolet cutoff is much higher than the quantum gravity scale. This ensures that we observe conventional weak gravity. We construct an explicit brane-world model in which the brane-localized standard model is coupled to strong 5D gravity of infinite-volume flat extra space. Because of the high ultraviolet scale, the standard model fields generate a large graviton kinetic term on the brane. This kinetic term 'shields' the standard model from the strong bulk gravity. As a result, an observer on the brane sees weak 4D gravity up to astronomically large distances beyond which gravity becomes five dimensional. Modeling quantum gravity above its scale by the closed string spectrum we show that the shielding phenomenon protects the standard model from an apparent phenomenological catastrophe due to the exponentially large number of light string states. The collider experiments, astrophysics, cosmology and gravity measurements independently point to the same lower bound on the quantum gravity scale, 10 -3 eV. For this value the model has experimental signatures both for colliders and for submillimeter gravity measurements. Black holes reveal certain interesting properties in this framework
Mitigation of artifacts in rtm with migration kernel decomposition
Zhan, Ge
2012-01-01
The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of migration kernel. The decomposition leads to an improved understanding of migration artifacts and, therefore, presents us with opportunities for improving the quality of RTM images.
Sparse Event Modeling with Hierarchical Bayesian Kernel Methods
2016-01-05
SECURITY CLASSIFICATION OF: The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on...several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function
Relationship between attenuation coefficients and dose-spread kernels
International Nuclear Information System (INIS)
Boyer, A.L.
1988-01-01
Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods
Fabrication of Uranium Oxycarbide Kernels for HTR Fuel
International Nuclear Information System (INIS)
Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric
2010-01-01
Babcock and Wilcox (B and W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-(micro)m, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated-particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B and W produced 425-(micro)m, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B and W also produced 500-(micro)m, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B and W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) has increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.
Consistent Estimation of Pricing Kernels from Noisy Price Data
Vladislav Kargin
2003-01-01
If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.
Quantum logic in dagger kernel categories
Heunen, C.; Jacobs, B.P.F.
2009-01-01
This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial
Quantum logic in dagger kernel categories
Heunen, C.; Jacobs, B.P.F.; Coecke, B.; Panangaden, P.; Selinger, P.
2011-01-01
This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial
Symbol recognition with kernel density matching.
Zhang, Wan; Wenyin, Liu; Zhang, Kun
2006-12-01
We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
Flexible Scheduling in Multimedia Kernels: An Overview
Jansen, P.G.; Scholten, Johan; Laan, Rene; Chow, W.S.
1999-01-01
Current Hard Real-Time (HRT) kernels have their timely behaviour guaranteed on the cost of a rather restrictive use of the available resources. This makes current HRT scheduling techniques inadequate for use in a multimedia environment where we can make a considerable profit by a better and more
Reproducing kernel Hilbert spaces of Gaussian priors
Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.
2008-01-01
We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described
A synthesis of empirical plant dispersal kernels
Czech Academy of Sciences Publication Activity Database
Bullock, J. M.; González, L. M.; Tamme, R.; Götzenberger, Lars; White, S. M.; Pärtel, M.; Hooftman, D. A. P.
2017-01-01
Roč. 105, č. 1 (2017), s. 6-19 ISSN 0022-0477 Institutional support: RVO:67985939 Keywords : dispersal kernel * dispersal mode * probability density function Subject RIV: EH - Ecology, Behaviour OBOR OECD: Ecology Impact factor: 5.813, year: 2016
Analytic continuation of weighted Bergman kernels
Czech Academy of Sciences Publication Activity Database
Engliš, Miroslav
2010-01-01
Roč. 94, č. 6 (2010), s. 622-650 ISSN 0021-7824 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * analytic continuation * Toeplitz operator Subject RIV: BA - General Mathematics Impact factor: 1.450, year: 2010 http://www.sciencedirect.com/science/article/pii/S0021782410000942
On convergence of kernel learning estimators
Norkin, V.I.; Keyzer, M.A.
2009-01-01
The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKHS). The objective (risk) functional depends on functions from this RKHS and takes the form of a mathematical expectation (integral) of a nonnegative integrand (loss function) over a probability
Analytic properties of the Virasoro modular kernel
Energy Technology Data Exchange (ETDEWEB)
Nemkov, Nikita [Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Institute for Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); National University of Science and Technology MISIS, The Laboratory of Superconducting metamaterials, Moscow (Russian Federation)
2017-06-15
On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block. (orig.)
Kernel based subspace projection of hyperspectral images
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten
In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
Scattering kernels and cross sections working group
International Nuclear Information System (INIS)
Russell, G.; MacFarlane, B.; Brun, T.
1998-01-01
Topics addressed by this working group are: (1) immediate needs of the cold-moderator community and how to fill them; (2) synthetic scattering kernels; (3) very simple synthetic scattering functions; (4) measurements of interest; and (5) general issues. Brief summaries are given for each of these topics
Enhanced gluten properties in soft kernel durum wheat
Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...
Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...
African Journals Online (AJOL)
Estimated error of ± 0.18 and ± 0.2 are envisaged while applying the models for predicting palm kernel and sesame oil colours respectively. Keywords: Palm kernel, Sesame, Palm kernel, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38 ...
Stable Kernel Representations as Nonlinear Left Coprime Factorizations
Paice, A.D.B.; Schaft, A.J. van der
1994-01-01
A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel
7 CFR 981.60 - Determination of kernel weight.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...
21 CFR 176.350 - Tamarind seed kernel powder.
2010-04-01
... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...
End-use quality of soft kernel durum wheat
Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...
Heat kernel analysis for Bessel operators on symmetric cones
DEFF Research Database (Denmark)
Möllers, Jan
2014-01-01
. The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...
A Fast and Simple Graph Kernel for RDF
de Vries, G.K.D.; de Rooij, S.
2013-01-01
In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster
7 CFR 981.61 - Redetermination of kernel weight.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...
Single pass kernel k-means clustering method
Indian Academy of Sciences (India)
paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clus- ..... able at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.
Einstein gravity emerging from quantum weyl gravity
International Nuclear Information System (INIS)
Zee, A.
1983-01-01
We advocate a conformal invariant world described by the sum of the Weyl, Dirac, and Yang-Mills action. Quantum fluctuations bring back Einstein gravity so that the long-distance phenomenology is as observed. Formulas for the induced Newton's constant and Eddington's constant are derived in quantized Weyl gravity. We show that the analogue of the trace anomaly for the Weyl action is structurally similar to that for the Yang-Mills action
Evaluation of the Lubricating Properties of Palm Kernel Oil
Directory of Open Access Journals (Sweden)
John J MUSA
2009-07-01
Full Text Available The search for renewable energy resources continues to attract attention in recent times as fossil fuels such as petroleum, coal and natural gas, which are been used to meet the energy needs of man are associated with negative environmental impacts such as global warming. Biodiesel offered reduced exhaust emissions, improved biodegradability, reduced toxicity and higher carotene rating which can improve performance and clean up emissions. Standard methods were used to determine the physical and chemical properties of the oil, which includes the Density, Viscosity, flash/fire point, carbon residue, volatility and Specific Gravity were determined by chemical experimental analysis. The flash/fire points of the Heavy duty oil (SAE 40 and Light duty oil (SAE 30 is 260/300(°C and 243/290(°C respectively while the pour points of the samples are 22°C for palm kernel oil while 9°C and 21°C for SAE 40and SAE 30 respectively.
Scuba: scalable kernel-based gene prioritization.
Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio
2018-01-25
The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge, however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large amount of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba integrates also a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba .
Kernel based orthogonalization for change detection in hyperspectral images
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via...... analysis all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and unlike kernel PCA, kernel MNF...
A laser optical method for detecting corn kernel defects
Energy Technology Data Exchange (ETDEWEB)
Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.
1984-01-01
An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy; while surface-split kernels were detected with about 80% accuracy. (author)
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attentions in machine learning. The previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of the regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws
Directory of Open Access Journals (Sweden)
Mohammed D. ABDULMALIK
2008-06-01
Full Text Available Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects.This paper describes some of the kernel security enhancements for 64-bit edition of Windows Vista. We also point out some weakness areas (flaws that can be attacked by malicious leading to compromising the kernel.
Difference between standard and quasi-conformal BFKL kernels
International Nuclear Information System (INIS)
Fadin, V.S.; Fiore, R.; Papa, A.
2012-01-01
As it was recently shown, the colour singlet BFKL kernel, taken in Möbius representation in the space of impact parameters, can be written in quasi-conformal shape, which is unbelievably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculated the difference between standard and quasi-conformal BFKL kernels in momentum space and discovered that it is rather simple. Therefore we come to the conclusion that the simplicity of the quasi-conformal kernel is caused mainly by using the impact parameter space.
International Nuclear Information System (INIS)
Brown, J.D.
1988-01-01
This book addresses the subject of gravity theories in two and three spacetime dimensions. The prevailing philosophy is that lower dimensional models of gravity provide a useful arena for developing new ideas and insights, which are applicable to four dimensional gravity. The first chapter consists of a comprehensive introduction to both two and three dimensional gravity, including a discussion of their basic structures. In the second chapter, the asymptotic structure of three dimensional Einstein gravity with a negative cosmological constant is analyzed. The third chapter contains a treatment of the effects of matter sources in classical two dimensional gravity. The fourth chapter gives a complete analysis of particle pair creation by electric and gravitational fields in two dimensions, and the resulting effect on the cosmological constant
Gravity interpretation via EULDPH
International Nuclear Information System (INIS)
Ebrahimzadeh Ardestani, V.
2003-01-01
Euler's homogeneity equation for determining the coordinates of the source body especially to estimate the depth (EULDPH) is discussed at this paper. This method is applied to synthetic and high-resolution real data such as gradiometric or microgravity data. Low-quality gravity data especially in the areas with a complex geology structure has rarely been used. The Bouguer gravity anomalies are computed from absolute gravity data after the required corrections. Bouguer anomaly is transferred to residual gravity anomaly. The gravity gradients are estimated from residual anomaly values. Bouguer anomaly is the gravity gradients, using EULDPH. The coordinates of the perturbing body will be determined. Two field examples one in the east of Tehran (Mard Abad) where we would like to determine the location of the anomaly (hydrocarbon) and another in the south-east of Iran close to the border with Afghanistan (Nosrat Abad) where we are exploring chromite are presented
International Nuclear Information System (INIS)
Mielke, Eckehard W.
2006-01-01
Anomalies in Yang-Mills type gauge theories of gravity are reviewed. Particular attention is paid to the relation between the Dirac spin, the axial current j5 and the non-covariant gauge spin C. Using diagrammatic techniques, we show that only generalizations of the U(1)- Pontrjagin four-form F and F = dC arise in the chiral anomaly, even when coupled to gravity. Implications for Ashtekar's canonical approach to quantum gravity are discussed
Directory of Open Access Journals (Sweden)
Animesh Mukherjee
1991-01-01
Full Text Available Based upon Biot's [1965] theory of initial stresses of hydrostatic nature produced by the effect of gravity, a study is made of surface waves in higher order visco-elastic media under the influence of gravity. The equation for the wave velocity of Stonely waves in the presence of viscous and gravitational effects is obtained. This is followed by particular cases of surface waves including Rayleigh waves and Love waves in the presence of viscous and gravity effects. In all cases the wave-velocity equations are found to be in perfect agreement with the corresponding classical results when the effects of gravity and viscosity are neglected.
International Nuclear Information System (INIS)
Burkhard, N.R.
1979-01-01
The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
Classical Weyl transverse gravity
Energy Technology Data Exchange (ETDEWEB)
Oda, Ichiro [University of the Ryukyus, Department of Physics, Faculty of Science, Nishihara, Okinawa (Japan)
2017-05-15
We study various classical aspects of the Weyl transverse (WTDiff) gravity in a general space-time dimension. First of all, we clarify a classical equivalence among three kinds of gravitational theories, those are, the conformally invariant scalar tensor gravity, Einstein's general relativity and the WTDiff gravity via the gauge-fixing procedure. Secondly, we show that in the WTDiff gravity the cosmological constant is a mere integration constant as in unimodular gravity, but it does not receive any radiative corrections unlike the unimodular gravity. A key point in this proof is to construct a covariantly conserved energy-momentum tensor, which is achieved on the basis of this equivalence relation. Thirdly, we demonstrate that the Noether current for the Weyl transformation is identically vanishing, thereby implying that the Weyl symmetry existing in both the conformally invariant scalar tensor gravity and the WTDiff gravity is a ''fake'' symmetry. We find it possible to extend this proof to all matter fields, i.e. the Weyl-invariant scalar, vector and spinor fields. Fourthly, it is explicitly shown that in the WTDiff gravity the Schwarzschild black hole metric and a charged black hole one are classical solutions to the equations of motion only when they are expressed in the Cartesian coordinate system. Finally, we consider the Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmology and provide some exact solutions. (orig.)
Anomalous dimension in three-dimensional semiclassical gravity
International Nuclear Information System (INIS)
Alesci, Emanuele; Arzano, Michele
2012-01-01
The description of the phase space of relativistic particles coupled to three-dimensional Einstein gravity requires momenta which are coordinates on a group manifold rather than on ordinary Minkowski space. The corresponding field theory turns out to be a non-commutative field theory on configuration space and a group field theory on momentum space. Using basic non-commutative Fourier transform tools we introduce the notion of non-commutative heat-kernel associated with the Laplacian on the non-commutative configuration space. We show that the spectral dimension associated to the non-commutative heat kernel varies with the scale reaching a non-integer value smaller than three for Planckian diffusion scales.
Analytic scattering kernels for neutron thermalization studies
International Nuclear Information System (INIS)
Sears, V.F.
1990-01-01
Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H 2 /D 2 mixtures. The total scattering cross sections are also calculated and compared with available experimental results
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
Kernel-based tests for joint independence
DEFF Research Database (Denmark)
Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard
2018-01-01
if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test......We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed...... the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only...
Wilson Dslash Kernel From Lattice QCD Optimization
Energy Technology Data Exchange (ETDEWEB)
Joo, Balint [Jefferson Lab, Newport News, VA; Smelyanskiy, Mikhail [Parallel Computing Lab, Intel Corporation, California, USA; Kalamkar, Dhiraj D. [Parallel Computing Lab, Intel Corporation, India; Vaidyanathan, Karthikeyan [Parallel Computing Lab, Intel Corporation, India
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we will detail our work in optimizing the Wilson-Dslash kernels for Intel Xeon Phi, however, as we will show the technique gives excellent performance on regular Xeon Architecture as well.
A Kernel for Protein Secondary Structure Prediction
Guermeur , Yann; Lifchitz , Alain; Vert , Régis
2004-01-01
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=10338&mode=toc; International audience; Multi-class support vector machines have already proved efficient in protein secondary structure prediction as ensemble methods, to combine the outputs of sets of classifiers based on different principles. In this chapter, their implementation as basic prediction methods, processing the primary structure or the profile of multiple alignments, is investigated. A kernel devoted to the task is in...
Scalar contribution to the BFKL kernel
International Nuclear Information System (INIS)
Gerasimov, R. E.; Fadin, V. S.
2010-01-01
The contribution of scalar particles to the kernel of the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation is calculated. A great cancellation between the virtual and real parts of this contribution, analogous to the cancellation in the quark contribution in QCD, is observed. The reason of this cancellation is discovered. This reason has a common nature for particles with any spin. Understanding of this reason permits to obtain the total contribution without the complicated calculations, which are necessary for finding separate pieces.
Weighted Bergman Kernels for Logarithmic Weights
Czech Academy of Sciences Publication Activity Database
Engliš, Miroslav
2010-01-01
Roč. 6, č. 3 (2010), s. 781-813 ISSN 1558-8599 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * Toeplitz operator * logarithmic weight * pseudodifferential operator Subject RIV: BA - General Mathematics Impact factor: 0.462, year: 2010 http://www.intlpress.com/site/pub/pages/journals/items/pamq/content/vols/0006/0003/a008/
Heat kernels and zeta functions on fractals
International Nuclear Information System (INIS)
Dunne, Gerald V
2012-01-01
On fractals, spectral functions such as heat kernels and zeta functions exhibit novel features, very different from their behaviour on regular smooth manifolds, and these can have important physical consequences for both classical and quantum physics in systems having fractal properties. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’. (paper)
Exploiting graph kernels for high performance biomedical relation extraction.
Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri
2018-01-30
Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task specific heuristic or rules. In comparison, the state of the art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule based system employing task specific post processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with APG kernel that attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM
Interior Alaska Bouguer Gravity Anomaly
National Oceanic and Atmospheric Administration, Department of Commerce — A 1 kilometer Complete Bouguer Anomaly gravity grid of interior Alaska. Only those grid cells within 10 kilometers of a gravity data point have gravity values....
Alaska/Yukon Geoid Improvement by a Data-Driven Stokes's Kernel Modification Approach
Li, Xiaopeng; Roman, Daniel R.
2015-04-01
Geoid modeling over Alaska of USA and Yukon Canada being a trans-national issue faces a great challenge primarily due to the inhomogeneous surface gravity data (Saleh et al, 2013) and the dynamic geology (Freymueller et al, 2008) as well as its complex geological rheology. Previous study (Roman and Li 2014) used updated satellite models (Bruinsma et al 2013) and newly acquired aerogravity data from the GRAV-D project (Smith 2007) to capture the gravity field changes in the targeting areas primarily in the middle-to-long wavelength. In CONUS, the geoid model was largely improved. However, the precision of the resulted geoid model in Alaska was still in the decimeter level, 19cm at the 32 tide bench marks and 24cm on the 202 GPS/Leveling bench marks that gives a total of 23.8cm at all of these calibrated surface control points, where the datum bias was removed. Conventional kernel modification methods in this area (Li and Wang 2011) had limited effects on improving the precision of the geoid models. To compensate the geoid miss fits, a new Stokes's kernel modification method based on a data-driven technique is presented in this study. First, the method was tested on simulated data sets (Fig. 1), where the geoid errors have been reduced by 2 orders of magnitude (Fig 2). For the real data sets, some iteration steps are required to overcome the rank deficiency problem caused by the limited control data that are irregularly distributed in the target area. For instance, after 3 iterations, the standard deviation dropped about 2.7cm (Fig 3). Modification at other critical degrees can further minimize the geoid model miss fits caused either by the gravity error or the remaining datum error in the control points.
Consistency of orthodox gravity
Energy Technology Data Exchange (ETDEWEB)
Bellucci, S. [INFN, Frascati (Italy). Laboratori Nazionali di Frascati; Shiekh, A. [International Centre for Theoretical Physics, Trieste (Italy)
1997-01-01
A recent proposal for quantizing gravity is investigated for self consistency. The existence of a fixed-point all-order solution is found, corresponding to a consistent quantum gravity. A criterion to unify couplings is suggested, by invoking an application of their argument to more complex systems.
Generalized pure Lovelock gravity
Concha, Patrick; Rodríguez, Evelyn
2017-11-01
We present a generalization of the n-dimensional (pure) Lovelock Gravity theory based on an enlarged Lorentz symmetry. In particular, we propose an alternative way to introduce a cosmological term. Interestingly, we show that the usual pure Lovelock gravity is recovered in a matter-free configuration. The five and six-dimensional cases are explicitly studied.
Generalized pure Lovelock gravity
Directory of Open Access Journals (Sweden)
Patrick Concha
2017-11-01
Full Text Available We present a generalization of the n-dimensional (pure Lovelock Gravity theory based on an enlarged Lorentz symmetry. In particular, we propose an alternative way to introduce a cosmological term. Interestingly, we show that the usual pure Lovelock gravity is recovered in a matter-free configuration. The five and six-dimensional cases are explicitly studied.
Identification of Fusarium damaged wheat kernels using image analysis
Directory of Open Access Journals (Sweden)
Ondřej Jirsa
2011-01-01
Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and due to a subjective approach, it can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide much faster and more accurate evaluation of the visually scabby kernels proportion. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to basic descriptors of the RGB colour model (red, green, blue, very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance. The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85 %. The shape descriptors themselves were not specific enough to distinguish individual kernels.
Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.
Kwak, Nojun
2016-05-20
Recently, the nonlinear projection trick (NPT) was introduced enabling direct computation of coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so called kernel trick. However, NPT is inherently difficult to be implemented incrementally because an ever increasing kernel matrix should be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental methods to implement a kernel version of the incremental methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as, kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
Stochastic gravity: a primer with applications
International Nuclear Information System (INIS)
Hu, B L; Verdaguer, E
2003-01-01
Stochastic semiclassical gravity of the 1990s is a theory naturally evolved from semiclassical gravity of the 1970s and 1980s. It improves on the semiclassical Einstein equation with source given by the expectation value of the stress-energy tensor of quantum matter fields in curved spacetime by incorporating an additional source due to their fluctuations. In stochastic semiclassical gravity the main object of interest is the noise kernel, the vacuum expectation value of the (operator-valued) stress-energy bi-tensor, and the centrepiece is the (semiclassical) Einstein-Langevin equation. We describe this new theory via two approaches: the axiomatic and the functional. The axiomatic approach is useful to see the structure of the theory from the framework of semiclassical gravity, showing the link from the mean value of the energy-momentum tensor to their correlation functions. The functional approach uses the Feynman-Vernon influence functional and the Schwinger-Keldysh closed-time-path effective action methods which are convenient for computations. It also brings out the open system concepts and the statistical and stochastic contents of the theory such as dissipation, fluctuations, noise and decoherence. We then describe the applications of stochastic gravity to the backreaction problems in cosmology and black-hole physics. In the first problem, we study the backreaction of conformally coupled quantum fields in a weakly inhomogeneous cosmology. In the second problem, we study the backreaction of a thermal field in the gravitational background of a quasi-static black hole (enclosed in a box) and its fluctuations. These examples serve to illustrate closely the ideas and techniques presented in the first part. This topical review is intended as a first introduction providing readers with some basic ideas and working knowledge. Thus, we place more emphasis here on pedagogy than completeness. (Further discussions of ideas, issues and ongoing research topics can be found
Stochastic gravity: a primer with applications
Energy Technology Data Exchange (ETDEWEB)
Hu, B L [Department of Physics, University of Maryland, College Park, MD 20742-4111 (United States); Verdaguer, E [Departament de Fisica Fonamental and CER en Astrofisica Fisica de Particules i Cosmologia, Universitat de Barcelona, Av. Diagonal 647, 08028 Barcelona (Spain)
2003-03-21
Stochastic semiclassical gravity of the 1990s is a theory naturally evolved from semiclassical gravity of the 1970s and 1980s. It improves on the semiclassical Einstein equation with source given by the expectation value of the stress-energy tensor of quantum matter fields in curved spacetime by incorporating an additional source due to their fluctuations. In stochastic semiclassical gravity the main object of interest is the noise kernel, the vacuum expectation value of the (operator-valued) stress-energy bi-tensor, and the centrepiece is the (semiclassical) Einstein-Langevin equation. We describe this new theory via two approaches: the axiomatic and the functional. The axiomatic approach is useful to see the structure of the theory from the framework of semiclassical gravity, showing the link from the mean value of the energy-momentum tensor to their correlation functions. The functional approach uses the Feynman-Vernon influence functional and the Schwinger-Keldysh closed-time-path effective action methods which are convenient for computations. It also brings out the open system concepts and the statistical and stochastic contents of the theory such as dissipation, fluctuations, noise and decoherence. We then describe the applications of stochastic gravity to the backreaction problems in cosmology and black-hole physics. In the first problem, we study the backreaction of conformally coupled quantum fields in a weakly inhomogeneous cosmology. In the second problem, we study the backreaction of a thermal field in the gravitational background of a quasi-static black hole (enclosed in a box) and its fluctuations. These examples serve to illustrate closely the ideas and techniques presented in the first part. This topical review is intended as a first introduction providing readers with some basic ideas and working knowledge. Thus, we place more emphasis here on pedagogy than completeness. (Further discussions of ideas, issues and ongoing research topics can be found
Kernel based subspace projection of near infrared hyperspectral images of maize kernels
DEFF Research Database (Denmark)
Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben
2009-01-01
In this paper we present an exploratory analysis of hyper- spectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyper spectral camera using a broadband NIR illumi- nation. In order to explore the hyperspectral data we compare a series of subspace projection methods ......- tor transform outperform the linear methods as well as kernel principal components in producing interesting projections of the data.......In this paper we present an exploratory analysis of hyper- spectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyper spectral camera using a broadband NIR illumi- nation. In order to explore the hyperspectral data we compare a series of subspace projection methods...... including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability on the data. Therefore we propose to use so...
Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila
2018-05-07
Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method, it is always symmetric, is positive, always provides 1.0 for self-similarity and it can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly plausible for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several magnitudes faster than SW or LAK in our tests. LZW-Kernel is implemented as a standalone C code and is a free open-source program distributed under GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
Kernel based eigenvalue-decomposition methods for analysing ham
DEFF Research Database (Denmark)
Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming
2010-01-01
methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformation. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF is not described in the literature yet. The traditional methods only...... have two factors that are useful for segmentation and none of them can be used to segment the two types of meat. The kernel based methods have a lot of useful factors and they are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most...... useful factor of PCA and kernel based PCA respectively in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat and in general that factor is much more distinct, compared to the traditional factor. After the orthogonal transformation a simple thresholding...
Classification of maize kernels using NIR hyperspectral imaging
DEFF Research Database (Denmark)
Williams, Paul; Kucheryavskiy, Sergey V.
2016-01-01
NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual...... and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale....
Ideal gas scattering kernel for energy dependent cross-sections
International Nuclear Information System (INIS)
Rothenstein, W.; Dagan, R.
1998-01-01
A third, and final, paper on the calculation of the joint kernel for neutron scattering by an ideal gas in thermal agitation is presented, when the scattering cross-section is energy dependent. The kernel is a function of the neutron energy after scattering, and of the cosine of the scattering angle, as in the case of the ideal gas kernel for a constant bound atom scattering cross-section. The final expression is suitable for numerical calculations
Directory of Open Access Journals (Sweden)
TIAN Jialei
2015-11-01
Full Text Available By using the ground as the boundary, Molodensky problem usually gets the solution in form of series. Higher order terms reflect the correction between a smooth surface and the ground boundary. Application difficulties arise from not only computational complexity and stability maintenance, but also data-intensiveness. Therefore, in this paper, starting from the application of external gravity disturbance, Green formula is used on digital terrain surface. In the case of ignoring the influence of horizontal component of the integral, the expression formula of external disturbance potential determined by boundary value consisted of ground gravity anomalies and height anomaly difference are obtained, whose kernel function is reciprocal of distance and Poisson core respectively. With this method, there is no need of continuation of ground data. And kernel function is concise, and suitable for the stochastic computation of external disturbing gravity field.
International Nuclear Information System (INIS)
Jevicki, A.; Ninomiya, M.
1985-01-01
We are concerned with applications of the simplicial discretization method (Regge calculus) to two-dimensional quantum gravity with emphasis on the physically relevant string model. Beginning with the discretization of gravity and matter we exhibit a discrete version of the conformal trace anomaly. Proceeding to the string problem we show how the direct approach of (finite difference) discretization based on Nambu action corresponds to unsatisfactory treatment of gravitational degrees. Based on the Regge approach we then propose a discretization corresponding to the Polyakov string. In this context we are led to a natural geometric version of the associated Liouville model and two-dimensional gravity. (orig.)
CERN. Geneva
2007-01-01
Of the four fundamental forces, gravity has been studied the longest, yet gravitational physics is one of the most rapidly developing areas of science today. This talk will give a broad brush survey of the past achievements and future prospects of general relativistic gravitational physics. Gravity is a two frontier science being important on both the very largest and smallest length scales considered in contemporary physics. Recent advances and future prospects will be surveyed in precision tests of general relativity, gravitational waves, black holes, cosmology and quantum gravity. The aim will be an overview of a subject that is becoming increasingly integrated with experiment and other branches of physics.
Directory of Open Access Journals (Sweden)
J. Ambjørn
1995-07-01
Full Text Available The 2-point function is the natural object in quantum gravity for extracting critical behavior: The exponential falloff of the 2-point function with geodesic distance determines the fractal dimension dH of space-time. The integral of the 2-point function determines the entropy exponent γ, i.e. the fractal structure related to baby universes, while the short distance behavior of the 2-point function connects γ and dH by a quantum gravity version of Fisher's scaling relation. We verify this behavior in the case of 2d gravity by explicit calculation.
Embedded real-time operating system micro kernel design
Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng
2005-12-01
Embedded systems usually require a real-time character. Base on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts, including a critical section process, task scheduling, interruption handle, semaphore and message mailbox communication, clock managent and memory managent. Distributed CPU and other resources are among tasks rationally according to the importance and urgency. The design proposed here provides the position, definition, function and principle of micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.
An SVM model with hybrid kernels for hydrological time series
Wang, C.; Wang, H.; Zhao, X.; Xie, Q.
2017-12-01
Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
Influence of wheat kernel physical properties on the pulverizing process.
Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula
2014-10-01
The physical properties of wheat kernel were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars about similar level of protein content (11.2-12.8 % w.b.) and obtained from organic farming system were used for analysis. The kernel (moisture content 10 % w.b.) was pulverized by using the laboratory hammer mill equipped with round holes 1.0 mm screen. The specific grinding energy ranged from 120 kJkg(-1) to 159 kJkg(-1). On the basis of data obtained many of significant correlations (p kernel physical properties and pulverizing process of wheat kernel, especially wheat kernel hardness index (obtained on the basis of Single Kernel Characterization System) and vitreousness significantly and positively correlated with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined on the basis of uniaxial compression test only the rapture force was correlated with the impact grinding results. The results showed also positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of wheat physical properties the multiple linear regression was proposed for predicting the average particle size of pulverized kernel.
Dose point kernels for beta-emitting radioisotopes
International Nuclear Information System (INIS)
Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.
1986-01-01
Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. The treatment neglects fluctuations in energy deposition, an effect which has been later incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed. This provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been accomplished through an extension of the Bethe-Bacher approximation, and tested against the exact expression. Simplified expression for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for 32 P with experimental data indicates good agreement with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables
Hadamard Kernel SVM with applications for breast cancer outcome predictions.
Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong
2017-12-21
Breast cancer is one of the leading causes of deaths for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM for its discriminative power in dealing with small sample pattern recognition problems has attracted a lot attention. But how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (Hadamard Kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. Hadamard Kernel outperform the classical kernels and correlation kernel in terms of Area under the ROC Curve (AUC) values where a number of real-world data sets are adopted to test the performance of different methods. Hadamard Kernel SVM is effective for breast cancer predictions, either in terms of prognosis or diagnosis. It may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.
Parameter optimization in the regularized kernel minimum noise fraction transformation
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2012-01-01
Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by we here give a simple method for finding optimal parameters in a regularized version of kernel MNF...... analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given....
Analysis of Advanced Fuel Kernel Technology
International Nuclear Information System (INIS)
Oh, Seung Chul; Jeong, Kyung Chai; Kim, Yeon Ku; Kim, Young Min; Kim, Woong Ki; Lee, Young Woo; Cho, Moon Sung
2010-03-01
The reference fuel for prismatic reactor concepts is based on use of an LEU UCO TRISO fissile particle. This fuel form was selected in the early 1980s for large high-temperature gas-cooled reactor (HTGR) concepts using LEU, and the selection was reconfirmed for modular designs in the mid-1980s. Limited existing irradiation data on LEU UCO TRISO fuel indicate the need for a substantial improvement in performance with regard to in-pile gaseous fission product release. Existing accident testing data on LEU UCO TRISO fuel are extremely limited, but it is generally expected that performance would be similar to that of LEU UO 2 TRISO fuel if performance under irradiation were successfully improved. Initial HTGR fuel technology was based on carbide fuel forms. In the early 1980s, as HTGR technology was transitioning from high-enriched uranium (HEU) fuel to LEU fuel. An initial effort focused on LEU prismatic design for large HTGRs resulted in the selection of UCO kernels for the fissile particles and thorium oxide (ThO 2 ) for the fertile particles. The primary reason for selection of the UCO kernel over UO 2 was reduced CO pressure, allowing higher burnup for equivalent coating thicknesses and reduced potential for kernel migration, an important failure mechanism in earlier fuels. A subsequent assessment in the mid-1980s considering modular HTGR concepts again reached agreement on UCO for the fissile particle for a prismatic design. In the early 1990s, plant cost-reduction studies led to a decision to change the fertile material from thorium to natural uranium, primarily because of a lower long-term decay heat level for the natural uranium fissile particles. Ongoing economic optimization in combination with anticipated capabilities of the UCO particles resulted in peak fissile particle burnup projection of 26% FIMA in steam cycle and gas turbine concepts
Learning Rotation for Kernel Correlation Filter
Hamdi, Abdullah
2017-08-11
Kernel Correlation Filters have shown a very promising scheme for visual tracking in terms of speed and accuracy on several benchmarks. However it suffers from problems that affect its performance like occlusion, rotation and scale change. This paper tries to tackle the problem of rotation by reformulating the optimization problem for learning the correlation filter. This modification (RKCF) includes learning rotation filter that utilizes circulant structure of HOG feature to guesstimate rotation from one frame to another and enhance the detection of KCF. Hence it gains boost in overall accuracy in many of OBT50 detest videos with minimal additional computation.
Research of Performance Linux Kernel File Systems
Directory of Open Access Journals (Sweden)
Andrey Vladimirovich Ostroukh
2015-10-01
Full Text Available The article describes the most common Linux Kernel File Systems. The research was carried out on a personal computer, the characteristics of which are written in the article. The study was performed on a typical workstation running GNU/Linux with below characteristics. On a personal computer for measuring the file performance, has been installed the necessary software. Based on the results, conclusions and proposed recommendations for use of file systems. Identified and recommended by the best ways to store data.
Fixed kernel regression for voltammogram feature extraction
International Nuclear Information System (INIS)
Acevedo Rodriguez, F J; López-Sastre, R J; Gil-Jiménez, P; Maldonado Bascón, S; Ruiz-Reyes, N
2009-01-01
Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals
Reciprocity relation for multichannel coupling kernels
International Nuclear Information System (INIS)
Cotanch, S.R.; Satchler, G.R.
1981-01-01
Assuming time-reversal invariance of the many-body Hamiltonian, it is proven that the kernels in a general coupled-channels formulation are symmetric, to within a specified spin-dependent phase, under the interchange of channel labels and coordinates. The theorem is valid for both Hermitian and suitably chosen non-Hermitian Hamiltonians which contain complex effective interactions. While of direct practical consequence for nuclear rearrangement reactions, the reciprocity relation is also appropriate for other areas of physics which involve coupled-channels analysis
Wheat kernel dimensions: how do they contribute to kernel weight at ...
Indian Academy of Sciences (India)
2011-12-02
Dec 2, 2011 ... yield components, is greatly influenced by kernel dimensions. (KD), such as ..... six linkage gaps, and it covered 3010.70 cM of the whole genome with an ...... Ersoz E. et al. 2009 The Genetic architecture of maize flowering.
DEFF Research Database (Denmark)
Arenas-Garcia, J.; Petersen, K.; Camps-Valls, G.
2013-01-01
correlation analysis (CCA), and orthonormalized PLS (OPLS), as well as their nonlinear extensions derived by means of the theory of reproducing kernel Hilbert spaces (RKHSs). We also review their connections to other methods for classification and statistical dependence estimation and introduce some recent...
National Oceanic and Atmospheric Administration, Department of Commerce — The gravity station data (71 records) were gathered by various governmental organizations (and academia) using a variety of methods. This data base was received in...
Bergshoeff, Eric A.; Hohm, Olaf; Townsend, Paul K.
2012-01-01
We present a brief review of New Massive Gravity, which is a unitary theory of massive gravitons in three dimensions obtained by considering a particular combination of the Einstein-Hilbert and curvature squared terms.
National Oceanic and Atmospheric Administration, Department of Commerce — The gravity station data (65,164 records) were gathered by various governmental organizations (and academia) using a variety of methods. The data base was received...
National Oceanic and Atmospheric Administration, Department of Commerce — The gravity station data (55,907 records) were gathered by various governmental organizations (and academia) using a variety of methods. This data base was received...
International Nuclear Information System (INIS)
Hertog, Thomas; Hollands, Stefan
2005-01-01
We study the stability of designer gravity theories, in which one considers gravity coupled to a tachyonic scalar with anti-de Sitter (AdS) boundary conditions defined by a smooth function W. We construct Hamiltonian generators of the asymptotic symmetries using the covariant phase space method of Wald et al and find that they differ from the spinor charges except when W = 0. The positivity of the spinor charge is used to establish a lower bound on the conserved energy of any solution that satisfies boundary conditions for which W has a global minimum. A large class of designer gravity theories therefore have a stable ground state, which the AdS/CFT correspondence indicates should be the lowest energy soliton. We make progress towards proving this by showing that minimum energy solutions are static. The generalization of our results to designer gravity theories in higher dimensions involving several tachyonic scalars is discussed
Carroll versus Galilei gravity
Energy Technology Data Exchange (ETDEWEB)
Bergshoeff, Eric [Centre for Theoretical Physics, University of Groningen,Nijenborgh 4, 9747 AG Groningen (Netherlands); Gomis, Joaquim [Departament de Física Cuàntica i Astrofísica and Institut de Ciències del Cosmos,Universitat de Barcelona,Martí i Franquès 1, E-08028 Barcelona (Spain); Rollier, Blaise [Centre for Theoretical Physics, University of Groningen,Nijenborgh 4, 9747 AG Groningen (Netherlands); Rosseel, Jan [Faculty of Physics, University of Vienna,Boltzmanngasse 5, A-1090 Vienna (Austria); Veldhuis, Tonnis ter [Centre for Theoretical Physics, University of Groningen,Nijenborgh 4, 9747 AG Groningen (Netherlands)
2017-03-30
We consider two distinct limits of General Relativity that in contrast to the standard non-relativistic limit can be taken at the level of the Einstein-Hilbert action instead of the equations of motion. One is a non-relativistic limit and leads to a so-called Galilei gravity theory, the other is an ultra-relativistic limit yielding a so-called Carroll gravity theory. We present both gravity theories in a first-order formalism and show that in both cases the equations of motion (i) lead to constraints on the geometry and (ii) are not sufficient to solve for all of the components of the connection fields in terms of the other fields. Using a second-order formalism we show that these independent components serve as Lagrange multipliers for the geometric constraints we found earlier. We point out a few noteworthy differences between Carroll and Galilei gravity and give some examples of matter couplings.
International Nuclear Information System (INIS)
Williams, Ruth M
2006-01-01
A review is given of a number of approaches to discrete quantum gravity, with a restriction to those likely to be relevant in four dimensions. This paper is dedicated to Rafael Sorkin on the occasion of his sixtieth birthday
Garland, G D; Wilson, J T
2013-01-01
The Earth's Shape and Gravity focuses on the progress of the use of geophysical methods in investigating the interior of the earth and its shape. The publication first offers information on gravity, geophysics, geodesy, and geology and gravity measurements. Discussions focus on gravity measurements and reductions, potential and equipotential surfaces, absolute and relative measurements, and gravity networks. The text then elaborates on the shape of the sea-level surface and reduction of gravity observations. The text takes a look at gravity anomalies and structures in the earth's crust; interp
Streaming gravity mode instability
International Nuclear Information System (INIS)
Wang Shui.
1989-05-01
In this paper, we study the stability of a current sheet with a sheared flow in a gravitational field which is perpendicular to the magnetic field and plasma flow. This mixing mode caused by a combined role of the sheared flow and gravity is named the streaming gravity mode instability. The conditions of this mode instability are discussed for an ideal four-layer model in the incompressible limit. (author). 5 refs
International Nuclear Information System (INIS)
Accioly, A.J.
1987-01-01
A possible classical route conducting towards a general relativity theory with higher-derivatives starting, in a sense, from first principles, is analysed. A completely causal vacuum solution with the symmetries of the Goedel universe is obtained in the framework of this higher-derivative gravity. This very peculiar and rare result is the first known vcuum solution of the fourth-order gravity theory that is not a solution of the corresponding Einstein's equations.(Author) [pt
Nelson, George
2004-01-01
Gravity is the name given to the phenomenon that any two masses, like you and the Earth, attract each other. One pulls on the Earth and the Earth pulls on one the same amount. And one does not have to be touching. Gravity acts over vast distances, like the 150 million kilometers (93 million miles) between the Earth and the Sun or the billions of…
Automated borehole gravity meter system
International Nuclear Information System (INIS)
Lautzenhiser, Th.V.; Wirtz, J.D.
1984-01-01
An automated borehole gravity meter system for measuring gravity within a wellbore. The gravity meter includes leveling devices for leveling the borehole gravity meter, displacement devices for applying forces to a gravity sensing device within the gravity meter to bring the gravity sensing device to a predetermined or null position. Electronic sensing and control devices are provided for (i) activating the displacement devices, (ii) sensing the forces applied to the gravity sensing device, (iii) electronically converting the values of the forces into a representation of the gravity at the location in the wellbore, and (iv) outputting such representation. The system further includes electronic control devices with the capability of correcting the representation of gravity for tidal effects, as well as, calculating and outputting the formation bulk density and/or porosity
Gravity Before Einstein and Schwinger Before Gravity
Trimble, Virginia L.
2012-05-01
Julian Schwinger was a child prodigy, and Albert Einstein distinctly not; Schwinger had something like 73 graduate students, and Einstein very few. But both thought gravity was important. They were not, of course, the first, nor is the disagreement on how one should think about gravity that is being highlighted here the first such dispute. The talk will explore, first, several of the earlier dichotomies: was gravity capable of action at a distance (Newton), or was a transmitting ether required (many others). Did it act on everything or only on solids (an odd idea of the Herschels that fed into their ideas of solar structure and sunspots)? Did gravitational information require time for its transmission? Is the exponent of r precisely 2, or 2 plus a smidgeon (a suggestion by Simon Newcomb among others)? And so forth. Second, I will try to say something about Scwinger's lesser known early work and how it might have prefigured his "source theory," beginning with "On the Interaction of Several Electrons (the unpublished, 1934 "zeroth paper," whose title somewhat reminds one of "On the Dynamics of an Asteroid," through his days at Berkeley with Oppenheimer, Gerjuoy, and others, to his application of ideas from nuclear physics to radar and of radar engineering techniques to problems in nuclear physics. And folks who think good jobs are difficult to come by now might want to contemplate the couple of years Schwinger spent teaching elementary physics at Purdue before moving on to the MIT Rad Lab for war work.
Kernel learning at the first level of inference.
Cawley, Gavin C; Talbot, Nicola L C
2014-05-01
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Capozziello, Salvatore; De Laurentis, Mariafelicia
2011-01-01
Extended Theories of Gravity can be considered as a new paradigm to cure shortcomings of General Relativity at infrared and ultraviolet scales. They are an approach that, by preserving the undoubtedly positive results of Einstein’s theory, is aimed to address conceptual and experimental problems recently emerged in astrophysics, cosmology and High Energy Physics. In particular, the goal is to encompass, in a self-consistent scheme, problems like inflation, dark energy, dark matter, large scale structure and, first of all, to give at least an effective description of Quantum Gravity. We review the basic principles that any gravitational theory has to follow. The geometrical interpretation is discussed in a broad perspective in order to highlight the basic assumptions of General Relativity and its possible extensions in the general framework of gauge theories. Principles of such modifications are presented, focusing on specific classes of theories like f(R)-gravity and scalar–tensor gravity in the metric and Palatini approaches. The special role of torsion is also discussed. The conceptual features of these theories are fully explored and attention is paid to the issues of dynamical and conformal equivalence between them considering also the initial value problem. A number of viability criteria are presented considering the post-Newtonian and the post-Minkowskian limits. In particular, we discuss the problems of neutrino oscillations and gravitational waves in extended gravity. Finally, future perspectives of extended gravity are considered with possibility to go beyond a trial and error approach.
The Kernel Estimation in Biosystems Engineering
Directory of Open Access Journals (Sweden)
Esperanza Ayuga Téllez
2008-04-01
Full Text Available In many fields of biosystems engineering, it is common to find works in which statistical information is analysed that violates the basic hypotheses necessary for the conventional forecasting methods. For those situations, it is necessary to find alternative methods that allow the statistical analysis considering those infringements. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are rather used than "a priori" assumption of the global target function shape (e.g., linear or quadratic. In this paper a few basic rules of decision are enunciated, for the application of the non-parametric estimation method. These statistical rules set up the first step to build an interface usermethod for the consistent application of kernel estimation for not expert users. To reach this aim, univariate and multivariate estimation methods and density function were analysed, as well as regression estimators. In some cases the models to be applied in different situations, based on simulations, were defined. Different biosystems engineering applications of the kernel estimation are also analysed in this review.
Consistent Valuation across Curves Using Pricing Kernels
Directory of Open Access Journals (Sweden)
Andrea Macrina
2018-03-01
Full Text Available The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets. As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised and single-curve models extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.
Aligning Biomolecular Networks Using Modular Graph Kernels
Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant
Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offer a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
Pareto-path multitask multiple kernel learning.
Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C
2015-01-01
A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.
Formal truncations of connected kernel equations
International Nuclear Information System (INIS)
Dixon, R.M.
1977-01-01
The Connected Kernel Equations (CKE) of Alt, Grassberger and Sandhas (AGS); Kouri, Levin and Tobocman (KLT); and Bencze, Redish and Sloan (BRS) are compared against reaction theory criteria after formal channel space and/or operator truncations have been introduced. The Channel Coupling Class concept is used to study the structure of these CKE's. The related wave function formalism of Sandhas, of L'Huillier, Redish and Tandy and of Kouri, Krueger and Levin are also presented. New N-body connected kernel equations which are generalizations of the Lovelace three-body equations are derived. A method for systematically constructing fewer body models from the N-body BRS and generalized Lovelace (GL) equations is developed. The formally truncated AGS, BRS, KLT and GL equations are analyzed by employing the criteria of reciprocity and two-cluster unitarity. Reciprocity considerations suggest that formal truncations of BRS, KLT and GL equations can lead to reciprocity-violating results. This study suggests that atomic problems should employ three-cluster connected truncations and that the two-cluster connected truncations should be a useful starting point for nuclear systems
Scientific Computing Kernels on the Cell Processor
Energy Technology Data Exchange (ETDEWEB)
Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine
2007-04-04
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
Delimiting areas of endemism through kernel interpolation.
Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J
2015-01-01
We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distribution of species through a kernel interpolation of centroids of species distribution and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil GIE demonstrated to be effective in identifying areas of endemism in multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
Delimiting areas of endemism through kernel interpolation.
Directory of Open Access Journals (Sweden)
Ubirajara Oliveira
Full Text Available We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE, based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distribution of species through a kernel interpolation of centroids of species distribution and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil GIE demonstrated to be effective in identifying areas of endemism in multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
Extracting Feature Model Changes from the Linux Kernel Using FMDiff
Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.
2014-01-01
The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically
Replacement Value of Palm Kernel Meal for Maize on Carcass ...
African Journals Online (AJOL)
This study was conducted to evaluate the effect of replacing maize with palm kernel meal on nutrient composition, fatty acid profile and sensory qualities of the meat of turkeys fed the dietary treatments. Six dietary treatments were formulated using palm kernel meal to replace maize at 0, 20, 40, 60, 80 and 100 percent.
Effect of Palm Kernel Cake Replacement and Enzyme ...
African Journals Online (AJOL)
A feeding trial which lasted for twelve weeks was conducted to study the performance of finisher pigs fed five different levels of palm kernel cake replacement for maize (0%, 40%, 40%, 60%, 60%) in a maize-palm kernel cake based ration with or without enzyme supplementation. It was a completely randomized design ...
Capturing option anomalies with a variance-dependent pricing kernel
Christoffersen, P.; Heston, S.; Jacobs, K.
2013-01-01
We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is
Nonlinear Forecasting With Many Predictors Using Kernel Ridge Regression
DEFF Research Database (Denmark)
Exterkate, Peter; Groenen, Patrick J.F.; Heij, Christiaan
This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predi...
Commutators of Integral Operators with Variable Kernels on Hardy ...
Indian Academy of Sciences (India)
Home; Journals; Proceedings – Mathematical Sciences; Volume 115; Issue 4. Commutators of Integral Operators with Variable Kernels on Hardy Spaces. Pu Zhang Kai Zhao. Volume 115 Issue 4 November 2005 pp 399-410 ... Keywords. Singular and fractional integrals; variable kernel; commutator; Hardy space.
Discrete non-parametric kernel estimation for global sensitivity analysis
International Nuclear Information System (INIS)
Senga Kiessé, Tristan; Ventura, Anne
2016-01-01
This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within sensitivity analysis framework, for both discrete and continuous input variables. Now the discrete kernel estimation is known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymtotic convergence rate. Some simulations on a test function analysis and a real case study from agricultural have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.
Kernel Function Tuning for Single-Layer Neural Networks
Czech Academy of Sciences Publication Activity Database
Vidnerová, Petra; Neruda, Roman
-, accepted 28.11. 2017 (2018) ISSN 2278-0149 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords : single-layer neural networks * kernel methods * kernel function * optimisation Subject RIV: IN - Informatics, Computer Science http://www.ijmerr.com/
Geodesic exponential kernels: When Curvature and Linearity Conflict
DEFF Research Database (Denmark)
Feragen, Aase; Lauze, François; Hauberg, Søren
2015-01-01
manifold, the geodesic Gaussian kernel is only positive definite if the Riemannian manifold is Euclidean. This implies that any attempt to design geodesic Gaussian kernels on curved Riemannian manifolds is futile. However, we show that for spaces with conditionally negative definite distances the geodesic...
Denoising by semi-supervised kernel PCA preimaging
DEFF Research Database (Denmark)
Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai
2014-01-01
Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-imag...
Design and construction of palm kernel cracking and separation ...
African Journals Online (AJOL)
Design and construction of palm kernel cracking and separation machines. ... Username, Password, Remember me, or Register. DOWNLOAD FULL TEXT Open Access DOWNLOAD FULL TEXT Subscription or Fee Access. Design and construction of palm kernel cracking and separation machines. JO Nordiana, K ...
Kernel Methods for Machine Learning with Life Science Applications
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie
Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick straightforward extensions of classical linear algorithms are enabled as long as the data only appear a...
Genetic relationship between plant growth, shoot and kernel sizes in ...
African Journals Online (AJOL)
Maize (Zea mays L.) ear vascular tissue transports nutrients that contribute to grain yield. To assess kernel heritabilities that govern ear development and plant growth, field studies were conducted to determine the combining abilities of parents that differed for kernel-size, grain-filling rates and shoot-size. Thirty two hybrids ...
A relationship between Gel'fand-Levitan and Marchenko kernels
International Nuclear Information System (INIS)
Kirst, T.; Von Geramb, H.V.; Amos, K.A.
1989-01-01
An integral equation which relates the output kernels of the Gel'fand-Levitan and Marchenko inverse scattering equations is specified. Structural details of this integral equation are studied when the S-matrix is a rational function, and the output kernels are separable in terms of Bessel, Hankel and Jost solutions. 4 refs
Boundary singularity of Poisson and harmonic Bergman kernels
Czech Academy of Sciences Publication Activity Database
Engliš, Miroslav
2015-01-01
Roč. 429, č. 1 (2015), s. 233-272 ISSN 0022-247X R&D Projects: GA AV ČR IAA100190802 Institutional support: RVO:67985840 Keywords : harmonic Bergman kernel * Poisson kernel * pseudodifferential boundary operators Subject RIV: BA - General Mathematics Impact factor: 1.014, year: 2015 http://www.sciencedirect.com/science/article/pii/S0022247X15003170
Oven-drying reduces ruminal starch degradation in maize kernels
Ali, M.; Cone, J.W.; Hendriks, W.H.; Struik, P.C.
2014-01-01
The degradation of starch largely determines the feeding value of maize (Zea mays L.) for dairy cows. Normally, maize kernels are dried and ground before chemical analysis and determining degradation characteristics, whereas cows eat and digest fresh material. Drying the moist maize kernels
Real time kernel performance monitoring with SystemTap
CERN. Geneva
2018-01-01
SystemTap is a dynamic method of monitoring and tracing the operation of a running Linux kernel. In this talk I will present a few practical use cases where SystemTap allowed me to turn otherwise complex userland monitoring tasks in simple kernel probes.
Resolvent kernel for the Kohn Laplacian on Heisenberg groups
Directory of Open Access Journals (Sweden)
Neur Eddine Askour
2002-07-01
Full Text Available We present a formula that relates the Kohn Laplacian on Heisenberg groups and the magnetic Laplacian. Then we obtain the resolvent kernel for the Kohn Laplacian and find its spectral density. We conclude by obtaining the Green kernel for fractional powers of the Kohn Laplacian.
Reproducing Kernels and Coherent States on Julia Sets
Energy Technology Data Exchange (ETDEWEB)
Thirulogasanthar, K., E-mail: santhar@cs.concordia.ca; Krzyzak, A. [Concordia University, Department of Computer Science and Software Engineering (Canada)], E-mail: krzyzak@cs.concordia.ca; Honnouvo, G. [Concordia University, Department of Mathematics and Statistics (Canada)], E-mail: g_honnouvo@yahoo.fr
2007-11-15
We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems.
Reproducing Kernels and Coherent States on Julia Sets
International Nuclear Information System (INIS)
Thirulogasanthar, K.; Krzyzak, A.; Honnouvo, G.
2007-01-01
We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems
A multi-scale kernel bundle for LDDMM
DEFF Research Database (Denmark)
Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard
2011-01-01
The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations...
Comparison of Kernel Equating and Item Response Theory Equating Methods
Meng, Yu
2012-01-01
The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…
An analysis of 1-D smoothed particle hydrodynamics kernels
International Nuclear Information System (INIS)
Fulk, D.A.; Quinn, D.W.
1996-01-01
In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measure of merit demonstrating the general usefulness of the measure of merit and the individual kernels. In general, it was decided that bell-shaped kernels perform better than other shapes. 12 refs., 16 figs., 7 tabs
Optimal Bandwidth Selection in Observed-Score Kernel Equating
Häggström, Jenny; Wiberg, Marie
2014-01-01
The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…
Computing an element in the lexicographic kernel of a game
Faigle, U.; Kern, Walter; Kuipers, Jeroen
The lexicographic kernel of a game lexicographically maximizes the surplusses $s_{ij}$ (rather than the excesses as would the nucleolus). We show that an element in the lexicographic kernel can be computed efficiently, provided we can efficiently compute the surplusses $s_{ij}(x)$ corresponding to a
Computing an element in the lexicographic kernel of a game
Faigle, U.; Kern, Walter; Kuipers, J.
2002-01-01
The lexicographic kernel of a game lexicographically maximizes the surplusses $s_{ij}$ (rather than the excesses as would the nucleolus). We show that an element in the lexicographic kernel can be computed efficiently, provided we can efficiently compute the surplusses $s_{ij}(x)$ corresponding to a
3-D waveform tomography sensitivity kernels for anisotropic media
Djebbi, Ramzi
2014-01-01
The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to efficiently handle the expensive computational cost for 3-D anisotropic models. Ray tracing provides also the ray direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed a maximum sensitivity for diving waves which results in a relevant choice of those parameters in wave equation tomography. The δ parameter kernel showed zero sensitivity; therefore it can serve as a secondary parameter to fit the amplitude in the acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration velocity analysis based kernels are introduced to fix the depth ambiguity with reflections and compute sensitivity maps in the deeper parts of the model.
Anatomically-aided PET reconstruction using the kernel method.
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi
2016-09-21
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
Open Problem: Kernel methods on manifolds and metric spaces
DEFF Research Database (Denmark)
Feragen, Aasa; Hauberg, Søren
2016-01-01
Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong...... linear properties. This negative result hints that radial kernel are perhaps not suitable over geodesic metric spaces after all. Here, however, we present evidence that large intervals of bandwidths exist where geodesic exponential kernels have high probability of being positive definite over finite...... datasets, while still having significant predictive power. From this we formulate conjectures on the probability of a positive definite kernel matrix for a finite random sample, depending on the geometry of the data space and the spread of the sample....
Compactly Supported Basis Functions as Support Vector Kernels for Classification.
Wittek, Peter; Tan, Chew Lim
2011-10-01
Wavelet kernels have been introduced for both support vector regression and classification. Most of these wavelet kernels do not use the inner product of the embedding space, but use wavelets in a similar fashion to radial basis function kernels. Wavelet analysis is typically carried out on data with a temporal or spatial relation between consecutive data points. We argue that it is possible to order the features of a general data set so that consecutive features are statistically related to each other, thus enabling us to interpret the vector representation of an object as a series of equally or randomly spaced observations of a hypothetical continuous signal. By approximating the signal with compactly supported basis functions and employing the inner product of the embedding L2 space, we gain a new family of wavelet kernels. Empirical results show a clear advantage in favor of these kernels.
Directory of Open Access Journals (Sweden)
Abiodun Aladetuyi
2014-12-01
Full Text Available Palm kernel oil (PKO was recovered from spent bleaching earth with a yield of 16 %, using n-hexane while the fresh oil was extracted from palm kernel with n-hexane and a yield of 40.23% was obtained. These oils were trans-esterified with methanol under the same reaction conditions: 100 oC, 2 h reaction time, and oil-methanol ratio of 5:1 (w/v. The cocoa pod ash (CPA was compared with potassium hydroxide (KOH as catalyst. The percentage yields of biodiesel obtained from PKO catalysed by CPA and KOH were 94 and 90%, respectively. While the yields achieved using the recovered oil catalysed by CPA and KOH were measured at 86 and 81.20 %. The physico-chemical properties of the biodiesel produced showed that the flash point, viscosity, density, ash content, percentage carbon content, specific gravity and the acid value fell within American Society for Testing and Materials (ASTM specifications for biodiesel. The findings of this study suggest that agricultural residues such as CPA used in this study could be explored as alternatives for KOH catalyst for biodiesel production.
Improved modeling of clinical data with kernel methods.
Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart
2012-02-01
Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, being a mix of variable types with each variable its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was maximum 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function--which takes into account the type and range of each variable--has shown to be a better alternative for linear and non-linear classification problems
A method for manufacturing kernels of metallic oxides and the thus obtained kernels
International Nuclear Information System (INIS)
Lelievre Bernard; Feugier, Andre.
1973-01-01
A method is described for manufacturing fissile or fertile metal oxide kernels, consisting in adding at least a chemical compound capable of releasing ammonia to an aqueous solution of actinide nitrates dispersing the thus obtained solution dropwise in a hot organic phase so as to gelify the drops and transform them into solid particles, washing drying and treating said particles so as to transform them into oxide kernels. Such a method is characterized in that the organic phase used in the gel-forming reactions comprises a mixture of two organic liquids, one of which acts as a solvent, whereas the other is a product capable of extracting the metal-salt anions from the drops while the gel forming reaction is taking place. This can be applied to the so-called high temperature nuclear reactors [fr
Learning molecular energies using localized graph kernels
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-01
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
Directory of Open Access Journals (Sweden)
Cahill R. T.
2015-10-01
Full Text Available A new quantum gravity experiment is reported with the data confirming the generali- sation of the Schrödinger equation to include the interaction of the wave function with dynamical space. Dynamical space turbulence, via this interaction process, raises and lowers the energy of the electron wave function, which is detected by observing conse- quent variations in the electron quantum barrier tunnelling rate in reverse-biased Zener diodes. This process has previously been reported and enabled the measurement of the speed of the dynamical space flow, which is consistent with numerous other detection experiments. The interaction process is dependent on the angle between the dynamical space flow velocity and the direction of the electron flow in the diode, and this depen- dence is experimentally demonstrated. This interaction process explains gravity as an emergent quantum process, so unifying quantum phenomena and gravity. Gravitational waves are easily detected.
Ortín, Tomás
2015-01-01
Self-contained and comprehensive, this definitive new edition of Gravity and Strings is a unique resource for graduate students and researchers in theoretical physics. From basic differential geometry through to the construction and study of black-hole and black-brane solutions in quantum gravity - via all the intermediate stages - this book provides a complete overview of the intersection of gravity, supergravity, and superstrings. Now fully revised, this second edition covers an extensive array of topics, including new material on non-linear electric-magnetic duality, the electric-tensor formalism, matter-coupled supergravity, supersymmetric solutions, the geometries of scalar manifolds appearing in 4- and 5-dimensional supergravities, and much more. Covering reviews of important solutions and numerous solution-generating techniques, and accompanied by an exhaustive index and bibliography, this is an exceptional reference work.
International Nuclear Information System (INIS)
Goetz, G.
1988-01-01
It is shown that the plane-wave solutions for the equations governing the motion of a self-gravitating isothermal fluid in Newtonian hydrodynamics are generated by a sine-Gordon equation which is solvable by an 'inverse scattering' transformation. A transformation procedure is outlined by means of which one can construct solutions of the gravity system out of a pair of solutions of the sine-Gordon equation, which are interrelated via an auto-Baecklund transformation. In general the solutions to the gravity system are obtained in a parametric representation in terms of characteristic coordinates. All solutions of the gravity system generated by the one-and two-soliton solutions of the sine-Gordon equation can be constructed explicitly. These might provide models for the evolution of flat structures as they are predicted to arise in the process of galaxy formation. (author)
International Nuclear Information System (INIS)
Rumpf, H.
1987-01-01
We begin with a naive application of the Parisi-Wu scheme to linearized gravity. This will lead into trouble as one peculiarity of the full theory, the indefiniteness of the Euclidean action, shows up already at this level. After discussing some proposals to overcome this problem, Minkowski space stochastic quantization will be introduced. This will still not result in an acceptable quantum theory of linearized gravity, as the Feynman propagator turns out to be non-causal. This defect will be remedied only after a careful analysis of general covariance in stochastic quantization has been performed. The analysis requires the notion of a metric on the manifold of metrics, and a natural candidate for this is singled out. With this a consistent stochastic quantization of Einstein gravity becomes possible. It is even possible, at least perturbatively, to return to the Euclidean regime. 25 refs. (Author)
Linder, Eric V.
2018-03-01
A subclass of the Horndeski modified gravity theory we call No Slip Gravity has particularly interesting properties: 1) a speed of gravitational wave propagation equal to the speed of light, 2) equality between the effective gravitational coupling strengths to matter and light, Gmatter and Glight, hence no slip between the metric potentials, yet difference from Newton's constant, and 3) suppressed growth to give better agreement with galaxy clustering observations. We explore the characteristics and implications of this theory, and project observational constraints. We also give a simple expression for the ratio of the gravitational wave standard siren distance to the photon standard candle distance, in this theory and others, and enable a direct comparison of modified gravity in structure growth and in gravitational waves, an important crosscheck.
Gerhardt, Claus
2018-01-01
A unified quantum theory incorporating the four fundamental forces of nature is one of the major open problems in physics. The Standard Model combines electro-magnetism, the strong force and the weak force, but ignores gravity. The quantization of gravity is therefore a necessary first step to achieve a unified quantum theory. In this monograph a canonical quantization of gravity has been achieved by quantizing a geometric evolution equation resulting in a gravitational wave equation in a globally hyperbolic spacetime. Applying the technique of separation of variables we obtain eigenvalue problems for temporal and spatial self-adjoint operators where the temporal operator has a pure point spectrum with eigenvalues $\\lambda_i$ and related eigenfunctions, while, for the spatial operator, it is possible to find corresponding eigendistributions for each of the eigenvalues $\\lambda_i$, if the Cauchy hypersurface is asymptotically Euclidean or if the quantized spacetime is a black hole with a negative cosmological ...
Airborne Gravity: NGS' Gravity Data for EN08 (2013)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for New York, Vermont, New Hampshire, Massachusettes, Maine, and Canada collected in 2013 over 1 survey. This data set is part of the Gravity...
Airborne Gravity: NGS' Gravity Data for TS01 (2014)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Puerto Rico and the Virgin Islands collected in 2009 over 1 survey. This data set is part of the Gravity for the Re-definition of the...
Airborne Gravity: NGS' Gravity Data for AN08 (2016)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2016 over one survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum...
Airborne Gravity: NGS' Gravity Data for CN02 (2013 & 2014)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Nebraska collected in 2013 & 2014 over 3 surveys. This data set is part of the Gravity for the Re-definition of the American Vertical...
Airborne Gravity: NGS' Gravity Data for EN01 (2011)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for New York, Canada, and Lake Ontario collected in 2011 over 1 survey. This data set is part of the Gravity for the Re-definition of the...
Airborne Gravity: NGS' Gravity Data for AN03 (2010)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2010 and 2012 over 2 surveys. This data set is part of the Gravity for the Re-definition of the American Vertical Datum...
Airborne Gravity: NGS' Gravity Data for EN06 (2016)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Maine, Canada, and the Atlantic Ocean collected in 2012 over 2 surveys. This data set is part of the Gravity for the Re-definition of the...
Airborne Gravity: NGS' Gravity Data for ES01 (2013)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Florida, the Bahamas, and the Atlantic Ocean collected in 2013 over 1 survey. This data set is part of the Gravity for the Re-definition of...
Zhou, Xiao; Yang, Gongliu; Wang, Jing; Wen, Zeyang
2018-05-14
In recent decades, gravity compensation has become an important way to reduce the position error of an inertial navigation system (INS), especially for a high-precision INS, because of the extensive application of high precision inertial sensors (accelerometers and gyros). This paper first deducts the INS's solution error considering gravity disturbance and simulates the results. Meanwhile, this paper proposes a combined gravity compensation method using a simplified gravity model and gravity database. This new combined method consists of two steps all together. Step 1 subtracts the normal gravity using a simplified gravity model. Step 2 first obtains the gravity disturbance on the trajectory of the carrier with the help of ELM training based on the measured gravity data (provided by Institute of Geodesy and Geophysics; Chinese Academy of sciences), and then compensates it into the error equations of the INS, considering the gravity disturbance, to further improve the navigation accuracy. The effectiveness and feasibility of this new gravity compensation method for the INS are verified through vehicle tests in two different regions; one is in flat terrain with mild gravity variation and the other is in complex terrain with fierce gravity variation. During 2 h vehicle tests, the positioning accuracy of two tests can improve by 20% and 38% respectively, after the gravity is compensated by the proposed method.
Miniaturised Gravity Sensors for Remote Gravity Surveys.
Middlemiss, R. P.; Bramsiepe, S. G.; Hough, J.; Paul, D. J.; Rowan, S.; Samarelli, A.; Hammond, G.
2016-12-01
Gravimetry lets us see the world from a completely different perspective. The ability to measure tiny variations in gravitational acceleration (g), allows one to see not just the Earth's gravitational pull, but the influence of smaller objects. The more accurate the gravimeter, the smaller the objects one can see. Gravimetry has applications in many different fields: from tracking magma moving under volcanoes before eruptions; to locating hidden tunnels. The top commercial gravimeters weigh tens of kg and cost at least $100,000, limiting the situations in which they can be used. By contrast, smart phones use a MEMS (microelectromechanical system) accelerometer that can measure the orientation of the device. These are not nearly sensitive or stable enough to be used for the gravimetry but they are cheap, light-weight and mass-producible. At Glasgow University we have developed a MEMS device with both the stability and sensitivity for useful gravimetric measurements. This was demonstrated by a measurement of the Earth tides - the first time this has been achieved with a MEMS sensor. A gravimeter of this size opens up the possiblility for new gravity imaging modalities. Thousands of gravimeters could be networked over a survey site, storing data on an SD card or communicating wirelessly to a remote location. These devices could also be small enough to be carried by a UAVs: airborne gravity surveys could be carried out at low altitude by mulitple UAVs, or UAVs could be used to deliver ground based gravimeters to remote or inaccessible locations.
Pizzo, Nick
2017-11-01
A simple criterion for water particles to surf an underlying surface gravity wave is presented. It is found that particles travelling near the phase speed of the wave, in a geometrically confined region on the forward face of the crest, increase in speed. The criterion is derived using the equation of John (Commun. Pure Appl. Maths, vol. 6, 1953, pp. 497-503) for the motion of a zero-stress free surface under the action of gravity. As an example, a breaking water wave is theoretically and numerically examined. Implications for upper-ocean processes, for both shallow- and deep-water waves, are discussed.
International Nuclear Information System (INIS)
Romney, B.; Barrau, A.; Vidotto, F.; Le Meur, H.; Noui, K.
2011-01-01
The loop quantum gravity is the only theory that proposes a quantum description of space-time and therefore of gravitation. This theory predicts that space is not infinitely divisible but that is has a granular structure at the Planck scale (10 -35 m). Another feature of loop quantum gravity is that it gets rid of the Big-Bang singularity: our expanding universe may come from the bouncing of a previous contracting universe, in this theory the Big-Bang is replaced with a big bounce. The loop quantum theory predicts also the huge number of quantum states that accounts for the entropy of large black holes. (A.C.)
Terrestrial gravity data analysis for interim gravity model improvement
1987-01-01
This is the first status report for the Interim Gravity Model research effort that was started on June 30, 1986. The basic theme of this study is to develop appropriate models and adjustment procedures for estimating potential coefficients from terrestrial gravity data. The plan is to use the latest gravity data sets to produce coefficient estimates as well as to provide normal equations to NASA for use in the TOPEX/POSEIDON gravity field modeling program.
Stochastic subset selection for learning with kernel machines.
Rhinelander, Jason; Liu, Xiaoping P
2012-06-01
Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales super linearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.
Multiple kernel boosting framework based on information measure for classification
International Nuclear Information System (INIS)
Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun
2016-01-01
The performance of kernel-based method, such as support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted many attentions in recent years. MKL combines multiple sub-kernels to seek better results compared to single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper, the Kullback–Leibler kernel function is derived to develop SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies Adaboost to learning multiple kernel-based classifier. In the experiment for hyperspectral remote sensing image classification, we employ feature selected through Optional Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and hyperspectral remote sensing image data set. Experimental results show that our method has a stable behavior and a noticeable accuracy for different data set.
Per-Sample Multiple Kernel Approach for Visual Concept Learning
Directory of Open Access Journals (Sweden)
Ling-Yu Duan
2010-01-01
Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.
Per-Sample Multiple Kernel Approach for Visual Concept Learning
Directory of Open Access Journals (Sweden)
Tian Yonghong
2010-01-01
Full Text Available Abstract Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.
Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.
Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong
2014-01-01
Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
Deep Restricted Kernel Machines Using Conjugate Feature Duality.
Suykens, Johan A K
2017-08-01
The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections and deep learning extensions as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKM are obtained by coupling the RKMs. The method is illustrated for deep RKM, consisting of three levels with a least squares support vector machine regression level and two kernel PCA levels. In its primal form also deep feedforward neural networks can be trained within this framework.
Training Lp norm multiple kernel learning in the primal.
Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei
2013-10-01
Some multiple kernel learning (MKL) models are usually solved by utilizing the alternating optimization method where one alternately solves SVMs in the dual and updates kernel weights. Since the dual and primal optimization can achieve the same aim, it is valuable in exploring how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal where we resort to the alternating optimization method: one cycle for solving SVMs in the primal by using the preconditioned conjugate gradient method and other cycle for learning the kernel weights. It is interesting to note that the kernel weights in our method can obtain analytical solutions. Most importantly, the proposed method is well suited for the manifold regularization framework in the primal since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we also carry out theoretical analysis for multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity may obtain a type of kernel weights. The experiments on some datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method. Copyright © 2013 Elsevier Ltd. All rights reserved.
Gravity Data for South America
National Oceanic and Atmospheric Administration, Department of Commerce — The gravity station data (152,624 records) were compiled by the University of Texas at Dallas. This data base was received in June 1992. Principal gravity parameters...
Interior Alaska Gravity Station Data
National Oceanic and Atmospheric Administration, Department of Commerce — The gravity station data total 9416 records. This data base was received in March 1997. Principal gravity parameters include Free-air Anomalies which have been...
Gravity Station Data for Spain
National Oceanic and Atmospheric Administration, Department of Commerce — The gravity station data total 28493 records. This data base was received in April 1997. Principal gravity parameters include Free-air Anomalies which have been...
Gravity Station Data for Portugal
National Oceanic and Atmospheric Administration, Department of Commerce — The gravity station data total 3064 records. This data base was received in April 1997. Principal gravity parameters include Free-air Anomalies which have been...
Gradient-based adaptation of general gaussian kernels.
Glasmachers, Tobias; Igel, Christian
2005-10-01
Gradient-based optimizing of gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard margin support vector machines on toy data.
On weights which admit the reproducing kernel of Bergman type
Directory of Open Access Journals (Sweden)
Zbigniew Pasternak-Winiarski
1992-01-01
Full Text Available In this paper we consider (1 the weights of integration for which the reproducing kernel of the Bergman type can be defined, i.e., the admissible weights, and (2 the kernels defined by such weights. It is verified that the weighted Bergman kernel has the analogous properties as the classical one. We prove several sufficient conditions and necessary and sufficient conditions for a weight to be an admissible weight. We give also an example of a weight which is not of this class. As a positive example we consider the weight μ(z=(Imz2 defined on the unit disk in ℂ.
Visualization of nonlinear kernel models in neuroimaging by sensitivity maps
DEFF Research Database (Denmark)
Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard
There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM) are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus...... on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We...
Flour quality and kernel hardness connection in winter wheat
Directory of Open Access Journals (Sweden)
Szabó B. P.
2016-12-01
Full Text Available Kernel hardness is controlled by friabilin protein and it depends on the relation between protein matrix and starch granules. Friabilin is present in high concentration in soft grain varieties and in low concentration in hard grain varieties. The high gluten, hard wheat our generally contains about 12.0–13.0% crude protein under Mid-European conditions. The relationship between wheat protein content and kernel texture is usually positive and kernel texture influences the power consumption during milling. Hard-textured wheat grains require more grinding energy than soft-textured grains.
Deep kernel learning method for SAR image target recognition
Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao
2017-10-01
With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.
Explicit signal to noise ratio in reproducing kernel Hilbert spaces
DEFF Research Database (Denmark)
Gomez-Chova, Luis; Nielsen, Allan Aasbjerg; Camps-Valls, Gustavo
2011-01-01
This paper introduces a nonlinear feature extraction method based on kernels for remote sensing data analysis. The proposed approach is based on the minimum noise fraction (MNF) transform, which maximizes the signal variance while also minimizing the estimated noise variance. We here propose...... an alternative kernel MNF (KMNF) in which the noise is explicitly estimated in the reproducing kernel Hilbert space. This enables KMNF dealing with non-linear relations between the noise and the signal features jointly. Results show that the proposed KMNF provides the most noise-free features when confronted...
International Nuclear Information System (INIS)
Faria, F. F.
2014-01-01
We construct a massive theory of gravity that is invariant under conformal transformations. The massive action of the theory depends on the metric tensor and a scalar field, which are considered the only field variables. We find the vacuum field equations of the theory and analyze its weak-field approximation and Newtonian limit.
DEFF Research Database (Denmark)
Skielboe, Andreas
Gravity governs the evolution of the universe on the largest scales, and powers some of the most extreme objects at the centers of galaxies. Determining the masses and kinematics of galaxy clusters provides essential constraints on the large-scale structure of the universe, and act as direct probes...
Newburgh, Ronald
2010-01-01
It's both surprising and rewarding when an old, standard problem reveals a subtlety that expands its pedagogic value. I realized recently that the role of gravity in the range equation for a projectile is not so simple as first appears. This realization may be completely obvious to others but was quite new to me.
Discrete Lorentzian quantum gravity
Loll, R.
2000-01-01
Just as for non-abelian gauge theories at strong coupling, discrete lattice methods are a natural tool in the study of non-perturbative quantum gravity. They have to reflect the fact that the geometric degrees of freedom are dynamical, and that therefore also the lattice theory must be formulated
International Nuclear Information System (INIS)
Pullin, J.
2015-01-01
Loop quantum gravity is one of the approaches that are being studied to apply the rules of quantum mechanics to the gravitational field described by the theory of General Relativity . We present an introductory summary of the main ideas and recent results. (Author)
International Nuclear Information System (INIS)
Meszaros, A.
1984-05-01
In case the graviton has a very small non-zero mass, the existence of six additional massive gravitons with very big masses leads to a finite quantum gravity. There is an acausal behaviour on the scales that is determined by the masses of additional gravitons. (author)
Venus - Ishtar gravity anomaly
Sjogren, W. L.; Bills, B. G.; Mottinger, N. A.
1984-01-01
The gravity anomaly associated with Ishtar Terra on Venus is characterized, comparing line-of-sight acceleration profiles derived by differentiating Pioneer Venus Orbiter Doppler residual profiles with an Airy-compensated topographic model. The results are presented in graphs and maps, confirming the preliminary findings of Phillips et al. (1979). The isostatic compensation depth is found to be 150 + or - 30 km.
International Nuclear Information System (INIS)
Aros, Rodrigo; Contreras, Mauricio
2006-01-01
In this work the Poincare-Chern-Simons and anti-de Sitter-Chern-Simons gravities are studied. For both, a solution that can be cast as a black hole with manifest torsion is found. Those solutions resemble Schwarzschild and Schwarzschild-AdS solutions, respectively
International Nuclear Information System (INIS)
Williams, J.W.
1992-01-01
After a brief introduction to Regge calculus, some examples of its application is quantum gravity are described in this paper. In particular, the earliest such application, by Ponzano and Regge, is discussed in some detail and it is shown how this leads naturally to current work on invariants of three-manifolds
Directory of Open Access Journals (Sweden)
Rovelli Carlo
1998-01-01
Full Text Available The problem of finding the quantum theory of the gravitational field, and thus understanding what is quantum spacetime, is still open. One of the most active of the current approaches is loop quantum gravity. Loop quantum gravity is a mathematically well-defined, non-perturbative and background independent quantization of general relativity, with its conventional matter couplings. Research in loop quantum gravity today forms a vast area, ranging from mathematical foundations to physical applications. Among the most significant results obtained are: (i The computation of the physical spectra of geometrical quantities such as area and volume, which yields quantitative predictions on Planck-scale physics. (ii A derivation of the Bekenstein-Hawking black hole entropy formula. (iii An intriguing physical picture of the microstructure of quantum physical space, characterized by a polymer-like Planck scale discreteness. This discreteness emerges naturally from the quantum theory and provides a mathematically well-defined realization of Wheeler's intuition of a spacetime ``foam''. Long standing open problems within the approach (lack of a scalar product, over-completeness of the loop basis, implementation of reality conditions have been fully solved. The weak part of the approach is the treatment of the dynamics: at present there exist several proposals, which are intensely debated. Here, I provide a general overview of ideas, techniques, results and open problems of this candidate theory of quantum gravity, and a guide to the relevant literature.
Cid, Jaime A; von Davier, Alina A
2015-05-01
Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C
Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT) which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses power when compared to the kernel for a particular scenario but has much greater power than poor choices.
Quantum Gravity Effects in Cosmology
Directory of Open Access Journals (Sweden)
Gu Je-An
2018-01-01
Full Text Available Within the geometrodynamic approach to quantum cosmology, we studied the quantum gravity effects in cosmology. The Gibbons-Hawking temperature is corrected by quantum gravity due to spacetime fluctuations and the power spectrum as well as any probe field will experience the effective temperature, a quantum gravity effect.
Even-dimensional topological gravity from Chern-Simons gravity
International Nuclear Information System (INIS)
Merino, N.; Perez, A.; Salgado, P.
2009-01-01
It is shown that the topological action for gravity in 2n-dimensions can be obtained from the (2n+1)-dimensional Chern-Simons gravity genuinely invariant under the Poincare group. The 2n-dimensional topological gravity is described by the dynamics of the boundary of a (2n+1)-dimensional Chern-Simons gravity theory with suitable boundary conditions. The field φ a , which is necessary to construct this type of topological gravity in even dimensions, is identified with the coset field associated with the non-linear realizations of the Poincare group ISO(d-1,1).
Directory of Open Access Journals (Sweden)
A. V. Vikulin
2014-01-01
Full Text Available Gravity phenomena related to the Earth movements in the Solar System and through the Galaxy are reviewed. Such movements are manifested by geological processes on the Earth and correlate with geophysical fields of the Earth. It is concluded that geodynamic processes and the gravity phenomena (including those of cosmic nature are related. The state of the geomedium composed of blocks is determined by stresses with force moment and by slow rotational waves that are considered as a new type of movements [Vikulin, 2008, 2010]. It is shown that the geomedium has typical rheid properties [Carey, 1954], specifically an ability to flow while being in the solid state [Leonov, 2008]. Within the framework of the rotational model with a symmetric stress tensor, which is developed by the authors [Vikulin, Ivanchin, 1998; Vikulin et al., 2012a, 2013], such movement of the geomedium may explain the energy-saturated state of the geomedium and a possibility of its movements in the form of vortex geological structures [Lee, 1928]. The article discusses the gravity wave detection method based on the concept of interactions between gravity waves and crustal blocks [Braginsky et al., 1985]. It is concluded that gravity waves can be recorded by the proposed technique that detects slow rotational waves. It is shown that geo-gravitational movements can be described by both the concept of potential with account of gravitational energy of bodies [Kondratyev, 2003] and the nonlinear physical acoustics [Gurbatov et al., 2008]. Based on the combined description of geophysical and gravitational wave movements, the authors suggest a hypothesis about the nature of spin, i.e. own moment as a demonstration of the space-time ‘vortex’ properties.
Efficient Online Subspace Learning With an Indefinite Kernel for Visual Tracking and Recognition
Liwicki, Stephan; Zafeiriou, Stefanos; Tzimiropoulos, Georgios; Pantic, Maja
2012-01-01
We propose an exact framework for online learning with a family of indefinite (not positive) kernels. As we study the case of nonpositive kernels, we first show how to extend kernel principal component analysis (KPCA) from a reproducing kernel Hilbert space to Krein space. We then formulate an
International Nuclear Information System (INIS)
Drozdowicz, K.
1995-01-01
A comprehensive unified description of the application of Granada's Synthetic Model to the slow-neutron scattering by the molecular systems is continued. Detailed formulae for the zero-order energy transfer kernel are presented basing on the general formalism of the model. An explicit analytical formula for the total scattering cross section as a function of the incident neutron energy is also obtained. Expressions of the free gas model for the zero-order scattering kernel and for total scattering kernel are considered as a sub-case of the Synthetic Model. (author). 10 refs
On the covariant formalism of the effective field theory of gravity and leading order corrections
DEFF Research Database (Denmark)
Codello, Alessandro; Jain, Rajeev Kumar
2016-01-01
We construct the covariant effective field theory of gravity as an expansion in inverse powers of the Planck mass, identifying the leading and next-to-leading quantum corrections. We determine the form of the effective action for the cases of pure gravity with cosmological constant as well...... as gravity coupled to matter. By means of heat kernel methods we renormalize and compute the leading quantum corrections to quadratic order in a curvature expansion. The final effective action in our covariant formalism is generally non-local and can be readily used to understand the phenomenology...... on different spacetimes. In particular, we point out that on curved backgrounds the observable leading quantum gravitational effects are less suppressed than on Minkowski spacetime....
On the covariant formalism of the effective field theory of gravity and leading order corrections
International Nuclear Information System (INIS)
Codello, Alessandro; Jain, Rajeev Kumar
2016-01-01
We construct the covariant effective field theory of gravity as an expansion in inverse powers of the Planck mass, identifying the leading and next-to-leading quantum corrections. We determine the form of the effective action for the cases of pure gravity with cosmological constant as well as gravity coupled to matter. By means of heat kernel methods we renormalize and compute the leading quantum corrections to quadratic order in a curvature expansion. The final effective action in our covariant formalism is generally non-local and can be readily used to understand the phenomenology on different spacetimes. In particular, we point out that on curved backgrounds the observable leading quantum gravitational effects are less suppressed than on Minkowski spacetime. (paper)
A kernel adaptive algorithm for quaternion-valued inputs.
Paul, Thomas K; Ogunfunmi, Tokunbo
2015-10-01
The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable with quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data are illustrated with simulations.
Bioconversion of palm kernel meal for aquaculture: Experiences ...
African Journals Online (AJOL)
SERVER
2008-04-17
Apr 17, 2008 ... es as well as food supplies have existed traditionally with coastal regions of Liberia and ..... Contamination of palm kernel meal with Aspergillus ... Sciences, Universiti Sains Malaysia, Penang 11800, Malaysia. Aquacult. Res.
The effect of apricot kernel flour incorporation on the ...
African Journals Online (AJOL)
STORAGESEVER
2009-01-05
Jan 5, 2009 ... 2Department of Food Engineering, Erciyes University 38039, Kayseri, Turkey. Accepted 27 ... Key words: Noodle; apricot kernel, flour, cooking, sensory properties. ... their simple preparation requirement, desirable sensory.
3-D waveform tomography sensitivity kernels for anisotropic media
Djebbi, Ramzi; Alkhalifah, Tariq Ali
2014-01-01
The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate
Kernel-based noise filtering of neutron detector signals
International Nuclear Information System (INIS)
Park, Moon Ghu; Shin, Ho Cheol; Lee, Eun Ki
2007-01-01
This paper describes recently developed techniques for effective filtering of neutron detector signal noise. In this paper, three kinds of noise filters are proposed and their performance is demonstrated for the estimation of reactivity. The tested filters are based on the unilateral kernel filter, unilateral kernel filter with adaptive bandwidth and bilateral filter to show their effectiveness in edge preservation. Filtering performance is compared with conventional low-pass and wavelet filters. The bilateral filter shows a remarkable improvement compared with unilateral kernel and wavelet filters. The effectiveness and simplicity of the unilateral kernel filter with adaptive bandwidth is also demonstrated by applying it to the reactivity measurement performed during reactor start-up physics tests
Linear and kernel methods for multivariate change detection
DEFF Research Database (Denmark)
Canty, Morton J.; Nielsen, Allan Aasbjerg
2012-01-01
), as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric...... normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available...... that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed...
Metastable gravity on classical defects
International Nuclear Information System (INIS)
Ringeval, Christophe; Rombouts, Jan-Willem
2005-01-01
We discuss the realization of metastable gravity on classical defects in infinite-volume extra dimensions. In dilatonic Einstein gravity, it is found that the existence of metastable gravity on the defect core requires violation of the dominant energy condition for codimension N c =2 defects. This is illustrated with a detailed analysis of a six-dimensional hyperstring minimally coupled to dilaton gravity. We present the general conditions under which a codimension N c >2 defect admits metastable modes, and find that they differ from lower codimensional models in that, under certain conditions, they do not require violation of energy conditions to support quasilocalized gravity
Resummed memory kernels in generalized system-bath master equations
International Nuclear Information System (INIS)
Mavros, Michael G.; Van Voorhis, Troy
2014-01-01
Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics
On Improving Convergence Rates for Nonnegative Kernel Density Estimators
Terrell, George R.; Scott, David W.
1980-01-01
To improve the rate of decrease of integrated mean square error for nonparametric kernel density estimators beyond $0(n^{-\\frac{4}{5}}),$ we must relax the constraint that the density estimate be a bonafide density function, that is, be nonnegative and integrate to one. All current methods for kernel (and orthogonal series) estimators relax the nonnegativity constraint. In this paper we show how to achieve similar improvement by relaxing the integral constraint only. This is important in appl...
Improved Variable Window Kernel Estimates of Probability Densities
Hall, Peter; Hu, Tien Chung; Marron, J. S.
1995-01-01
Variable window width kernel density estimators, with the width varying proportionally to the square root of the density, have been thought to have superior asymptotic properties. The rate of convergence has been claimed to be as good as those typical for higher-order kernels, which makes the variable width estimators more attractive because no adjustment is needed to handle the negativity usually entailed by the latter. However, in a recent paper, Terrell and Scott show that these results ca...
Graphical analyses of connected-kernel scattering equations
International Nuclear Information System (INIS)
Picklesimer, A.
1982-10-01
Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The graphical method also leads to a new, simplified form for some members of the class and elucidates the general structural features of the entire class
MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX
International Nuclear Information System (INIS)
Brooks, E.D. III
1988-01-01
1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every cpu instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel
The Flux OSKit: A Substrate for Kernel and Language Research
1997-10-01
unclassified Standard Form 298 (Rev. 8-98) Prescribed by ANSI Std Z39-18 tions. Our own microkernel -based OS, Fluke [17], puts almost all of the OSKit to use...kernels distance the language from the hardware; even microkernels and other extensible kernels enforce some default policy which often conflicts with a...be particu- larly useful in these research projects. 6.1.1 The Fluke OS In 1996 we developed an entirely new microkernel - based system called Fluke
Salus: Kernel Support for Secure Process Compartments
Directory of Open Access Journals (Sweden)
Raoul Strackx
2015-01-01
Full Text Available Consumer devices are increasingly being used to perform security and privacy critical tasks. The software used to perform these tasks is often vulnerable to attacks, due to bugs in the application itself or in included software libraries. Recent work proposes the isolation of security-sensitive parts of applications into protected modules, each of which can be accessed only through a predefined public interface. But most parts of an application can be considered security-sensitive at some level, and an attacker who is able to gain inapplication level access may be able to abuse services from protected modules. We propose Salus, a Linux kernel modification that provides a novel approach for partitioning processes into isolated compartments sharing the same address space. Salus significantly reduces the impact of insecure interfaces and vulnerable compartments by enabling compartments (1 to restrict the system calls they are allowed to perform, (2 to authenticate their callers and callees and (3 to enforce that they can only be accessed via unforgeable references. We describe the design of Salus, report on a prototype implementation and evaluate it in terms of security and performance. We show that Salus provides a significant security improvement with a low performance overhead, without relying on any non-standard hardware support.
Local Kernel for Brains Classification in Schizophrenia
Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.
In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning by example method is introduced by combining local measurements with non linear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely Scale Invariance Feature Transform (SIFT). Then, matching is obtained by introducing the local kernel for which the samples are represented by unordered set of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed including a set of 54 patients with schizophrenia and 54 normal controls on which region of interest (ROI) have been manually traced by experts. Preliminary results on Dorso-lateral PreFrontal Cortex (DLPFC) region are promising since up to 75% of successful classification rate has been obtained with this technique and the performance has improved up to 85% when the subjects have been stratified by sex.
KERNEL MAD ALGORITHM FOR RELATIVE RADIOMETRIC NORMALIZATION
Directory of Open Access Journals (Sweden)
Y. Bai
2016-06-01
Full Text Available The multivariate alteration detection (MAD algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA. The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments on both the linear CCA and KCCA version of the MAD algorithm with the use of Landsat-8 data of Beijing, China, and Gaofen-1(GF-1 data derived from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization, this algorithm can well describe the nonlinear relationship between multi-temporal images. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
Statistical mechanics, gravity, and Euclidean theory
International Nuclear Information System (INIS)
Fursaev, Dmitri V.
2002-01-01
A review of computations of free energy for Gibbs states on stationary but not static gravitational and gauge backgrounds is given. On these backgrounds wave equations for free fields are reduced to eigenvalue problems which depend non-linearly on the spectral parameter. We present a method to deal with such problems. In particular, we demonstrate how some results of the spectral theory of second-order elliptic operators, such as heat kernel asymptotics, can be extended to a class of non-linear spectral problems. The method is used to trace down the relation between the canonical definition of the free energy based on summation over the modes and the covariant definition given in Euclidean quantum gravity. As an application, high-temperature asymptotics of the free energy and of the thermal part of the stress-energy tensor in the presence of rotation are derived. We also discuss statistical mechanics in the presence of Killing horizons where canonical and Euclidean theories are related in a non-trivial way
Quantum gravity from noncommutative spacetime
International Nuclear Information System (INIS)
Lee, Jungjai; Yang, Hyunseok
2014-01-01
We review a novel and authentic way to quantize gravity. This novel approach is based on the fact that Einstein gravity can be formulated in terms of a symplectic geometry rather than a Riemannian geometry in the context of emergent gravity. An essential step for emergent gravity is to realize the equivalence principle, the most important property in the theory of gravity (general relativity), from U(1) gauge theory on a symplectic or Poisson manifold. Through the realization of the equivalence principle, which is an intrinsic property in symplectic geometry known as the Darboux theorem or the Moser lemma, one can understand how diffeomorphism symmetry arises from noncommutative U(1) gauge theory; thus, gravity can emerge from the noncommutative electromagnetism, which is also an interacting theory. As a consequence, a background-independent quantum gravity in which the prior existence of any spacetime structure is not a priori assumed but is defined by using the fundamental ingredients in quantum gravity theory can be formulated. This scheme for quantum gravity can be used to resolve many notorious problems in theoretical physics, such as the cosmological constant problem, to understand the nature of dark energy, and to explain why gravity is so weak compared to other forces. In particular, it leads to a remarkable picture of what matter is. A matter field, such as leptons and quarks, simply arises as a stable localized geometry, which is a topological object in the defining algebra (noncommutative *-algebra) of quantum gravity.
Quantum gravity from noncommutative spacetime
Energy Technology Data Exchange (ETDEWEB)
Lee, Jungjai [Daejin University, Pocheon (Korea, Republic of); Yang, Hyunseok [Korea Institute for Advanced Study, Seoul (Korea, Republic of)
2014-12-15
We review a novel and authentic way to quantize gravity. This novel approach is based on the fact that Einstein gravity can be formulated in terms of a symplectic geometry rather than a Riemannian geometry in the context of emergent gravity. An essential step for emergent gravity is to realize the equivalence principle, the most important property in the theory of gravity (general relativity), from U(1) gauge theory on a symplectic or Poisson manifold. Through the realization of the equivalence principle, which is an intrinsic property in symplectic geometry known as the Darboux theorem or the Moser lemma, one can understand how diffeomorphism symmetry arises from noncommutative U(1) gauge theory; thus, gravity can emerge from the noncommutative electromagnetism, which is also an interacting theory. As a consequence, a background-independent quantum gravity in which the prior existence of any spacetime structure is not a priori assumed but is defined by using the fundamental ingredients in quantum gravity theory can be formulated. This scheme for quantum gravity can be used to resolve many notorious problems in theoretical physics, such as the cosmological constant problem, to understand the nature of dark energy, and to explain why gravity is so weak compared to other forces. In particular, it leads to a remarkable picture of what matter is. A matter field, such as leptons and quarks, simply arises as a stable localized geometry, which is a topological object in the defining algebra (noncommutative *-algebra) of quantum gravity.
DEFF Research Database (Denmark)
Forsberg, René; Sideris, M.G.; Shum, C.K.
2005-01-01
The gravity field of the earth is a natural element of the Global Geodetic Observing System (GGOS). Gravity field quantities are like spatial geodetic observations of potential very high accuracy, with measurements, currently at part-per-billion (ppb) accuracy, but gravity field quantities are also...... unique as they can be globally represented by harmonic functions (long-wavelength geopotential model primarily from satellite gravity field missions), or based on point sampling (airborne and in situ absolute and superconducting gravimetry). From a GGOS global perspective, one of the main challenges...... is to ensure the consistency of the global and regional geopotential and geoid models, and the temporal changes of the gravity field at large spatial scales. The International Gravity Field Service, an umbrella "level-2" IAG service (incorporating the International Gravity Bureau, International Geoid Service...
An Ensemble Approach to Building Mercer Kernels with Prior Information
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2005-01-01
This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly dimensional feature space. we describe a new method called Mixture Density Mercer Kernels to learn kernel function directly from data, rather than using pre-defined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
A new discrete dipole kernel for quantitative susceptibility mapping.
Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian
2018-09-01
Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation both with synthetic phantoms and in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed less over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might by particularly relevant to high-resolution QSM applications with ultra-high field MRI - a topic for future investigations. The proposed dipole kernel has a straightforward implementation to existing QSM routines. Copyright © 2018 Elsevier Inc. All rights reserved.
Exploration of Shorea robusta (Sal seeds, kernels and its oil
Directory of Open Access Journals (Sweden)
Shashi Kumar C.
2016-12-01
Full Text Available Physical, mechanical, and chemical properties of Shorea robusta seed with wing, seed without wing, and kernel were investigated in the present work. The physico-chemical composition of sal oil was also analyzed. The physico-mechanical properties and proximate composition of seed with wing, seed without wing, and kernel at three moisture contents of 9.50% (w.b, 9.54% (w.b, and 12.14% (w.b, respectively, were studied. The results show that the moisture content of the kernel was highest as compared to seed with wing and seed without wing. The sphericity of the kernel was closer to that of a sphere as compared to seed with wing and seed without wing. The hardness of the seed with wing (32.32, N/mm and seed without wing (42.49, N/mm was lower than the kernels (72.14, N/mm. The proximate composition such as moisture, protein, carbohydrates, oil, crude fiber, and ash content were also determined. The kernel (30.20%, w/w contains higher oil percentage as compared to seed with wing and seed without wing. The scientific data from this work are important for designing of equipment and processes for post-harvest value addition of sal seeds.
Omnibus risk assessment via accelerated failure time kernel machine modeling.
Sinnott, Jennifer A; Cai, Tianxi
2013-12-01
Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.
Ideal Gas Resonance Scattering Kernel Routine for the NJOY Code
International Nuclear Information System (INIS)
Rothenstein, W.
1999-01-01
In a recent publication an expression for the temperature-dependent double-differential ideal gas scattering kernel is derived for the case of scattering cross sections that are energy dependent. Some tabulations and graphical representations of the characteristics of these kernels are presented in Ref. 2. They demonstrate the increased probability that neutron scattering by a heavy nuclide near one of its pronounced resonances will bring the neutron energy nearer to the resonance peak. This enhances upscattering, when a neutron with energy just below that of the resonance peak collides with such a nuclide. A routine for using the new kernel has now been introduced into the NJOY code. Here, its principal features are described, followed by comparisons between scattering data obtained by the new kernel, and the standard ideal gas kernel, when such comparisons are meaningful (i.e., for constant values of the scattering cross section a 0 K). The new ideal gas kernel for variable σ s 0 (E) at 0 K leads to the correct Doppler-broadened σ s T (E) at temperature T
Proteome analysis of the almond kernel (Prunus dulcis).
Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu
2016-08-01
Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the importance of almond kernel proteins in the nutrition and function in human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, and most were proteins that were experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that proteins involved in primary biological processes including metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%), the main molecular function of almond kernel proteins are in catalytic activity (48.0%), binding (45.4%) and structural molecule activity (11.9%), and proteins are primarily distributed in cell (59.9%), organelle (44.9%), and membrane (22.8%). Almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry. © 2015 Society of Chemical Industry.
Evaluation of gravitational curvatures of a tesseroid in spherical integral kernels
Deng, Xiao-Le; Shen, Wen-Bin
2018-04-01
Proper understanding of how the Earth's mass distributions and redistributions influence the Earth's gravity field-related functionals is crucial for numerous applications in geodesy, geophysics and related geosciences. Calculations of the gravitational curvatures (GC) have been proposed in geodesy in recent years. In view of future satellite missions, the sixth-order developments of the gradients are becoming requisite. In this paper, a set of 3D integral GC formulas of a tesseroid mass body have been provided by spherical integral kernels in the spatial domain. Based on the Taylor series expansion approach, the numerical expressions of the 3D GC formulas are provided up to sixth order. Moreover, numerical experiments demonstrate the correctness of the 3D Taylor series approach for the GC formulas with order as high as sixth order. Analogous to other gravitational effects (e.g., gravitational potential, gravity vector, gravity gradient tensor), numerically it is found that there exist the very-near-area problem and polar singularity problem in the GC east-east-radial, north-north-radial and radial-radial-radial components in spatial domain, and compared to the other gravitational effects, the relative approximation errors of the GC components are larger due to not only the influence of the geocentric distance but also the influence of the latitude. This study shows that the magnitude of each term for the nonzero GC functionals by a grid resolution 15^' } } × 15^' }} at GOCE satellite height can reach of about 10^{-16} m^{-1} s2 for zero order, 10^{-24 } or 10^{-23} m^{-1} s2 for second order, 10^{-29} m^{-1} s2 for fourth order and 10^{-35} or 10^{-34} m^{-1} s2 for sixth order, respectively.
CERN. Geneva
2017-01-01
Extensions of Einstein’s theory of General Relativity are under investigation as a potential explanation of the accelerating expansion rate of the universe. I’ll present a cosmologist’s overview of attempts to test these ideas in an efficient and unbiased manner. I’ll start by introducing the bestiary of alternative gravity theories that have been put forwards. This proliferation of models motivates us to develop model-independent, agnostic tools for comparing the theory space to cosmological data. I’ll introduce the effective field theory for cosmological perturbations, a framework designed to unify modified gravity theories in terms of a manageable set of parameters. Having outlined the formalism, I’ll talk about the current constraints on this framework, and the improvements expected from the next generation of large galaxy clustering, weak lensing and intensity mapping experiments.
The relativistic gravity train
Seel, Max
2018-05-01
The gravity train that takes 42.2 min from any point A to any other point B that is connected by a straight-line tunnel through Earth has captured the imagination more than most other applications in calculus or introductory physics courses. Brachystochron and, most recently, nonlinear density solutions have been discussed. Here relativistic corrections are presented. It is discussed how the corrections affect the time to fall through Earth, the Sun, a white dwarf, a neutron star, and—the ultimate limit—the difference in time measured by a moving, a stationary and the fiducial observer at infinity if the density of the sphere approaches the density of a black hole. The relativistic gravity train can serve as a problem with approximate and exact analytic solutions and as numerical exercise in any introductory course on relativity.
International Nuclear Information System (INIS)
Brown, R.E.; Camp, J.B.; Darling, T.W.
1990-01-01
An experiment is being developed to measure the acceleration of the antiproton in the gravitational field of the earth. Antiprotons of a few MeV from the LEAR facility at CERN will be slowed, captured, cooled to a temperature of about 10 K, and subsequently launched a few at a time into a drift tube where the effect of gravity on their motion will be determined by a time-of-flight method. Development of the experiment is proceeding at Los Alamos using normal matter. The fabrication of a drift tube that will produce a region of space in which gravity is the dominant force on moving ions is of major difficulty. This involves a study of methods of minimizing the electric fields produced by spatially varying work functions on conducting surfaces. Progress in a number of areas is described, with stress on the drift-tube development
Gomberoff, Andres
2006-01-01
The 2002 Pan-American Advanced Studies Institute School on Quantum Gravity was held at the Centro de Estudios Cientificos (CECS),Valdivia, Chile, January 4-14, 2002. The school featured lectures by ten speakers, and was attended by nearly 70 students from over 14 countries. A primary goal was to foster interaction and communication between participants from different cultures, both in the layman’s sense of the term and in terms of approaches to quantum gravity. We hope that the links formed by students and the school will persist throughout their professional lives, continuing to promote interaction and the essential exchange of ideas that drives research forward. This volume contains improved and updated versions of the lectures given at the School. It has been prepared both as a reminder for the participants, and so that these pedagogical introductions can be made available to others who were unable to attend. We expect them to serve students of all ages well.
Energy Technology Data Exchange (ETDEWEB)
Lamon, Raphael
2010-06-29
Quantum gravity is an attempt to unify general relativity with quantum mechanics which are the two highly successful fundamental theories of theoretical physics. The main difficulty in this unification arises from the fact that, while general relativity describes gravity as a macroscopic geometrical theory, quantum mechanics explains microscopic phenomena. As a further complication, not only do both theories describe different scales but also their philosophical ramifications and the mathematics used to describe them differ in a dramatic way. Consequently, one possible starting point of an attempt at a unification is quantum mechanics, i.e. particle physics, and try to incorporate gravitation. This pathway has been chosen by particle physicists which led to string theory. On the other hand, loop quantum gravity (LQG) chooses the other possibility, i.e. it takes the geometrical aspects of gravity seriously and quantizes geometry. The first part of this thesis deals with a generalization of loop quantum cosmology (LQC) to toroidal topologies. LQC is a quantization of homogenous solutions of Einstein's field equations using tools from LQG. First the general concepts of closed topologies is introduced with special emphasis on Thurston's theorem and its consequences. It is shown that new degrees of freedom called Teichmueller parameters come into play and their dynamics can be described by a Hamiltonian. Several numerical solutions for a toroidal universe are presented and discussed. Following the guidelines of LQG this dynamics are rewritten using the Ashtekar variables and numerical solutions are shown. However, in order to find a suitable Hilbert space a canonical transformation must be performed. On the other hand this transformation makes the quantization of geometrical quantities less tractable such that two different ways are presented. It is shown that in both cases the spectrum of such geometrical operators depends on the initial value problem
Energy Technology Data Exchange (ETDEWEB)
Chatzistavrakidis, Athanasios [Van Swinderen Institute for Particle Physics and Gravity, University of Groningen,Nijenborgh 4, 9747 AG Groningen (Netherlands); Khoo, Fech Scen [Department of Physics and Earth Sciences, Jacobs University Bremen,Campus Ring 1, 28759 Bremen (Germany); Roest, Diederik [Van Swinderen Institute for Particle Physics and Gravity, University of Groningen,Nijenborgh 4, 9747 AG Groningen (Netherlands); Schupp, Peter [Department of Physics and Earth Sciences, Jacobs University Bremen,Campus Ring 1, 28759 Bremen (Germany)
2017-03-13
The particular structure of Galileon interactions allows for higher-derivative terms while retaining second order field equations for scalar fields and Abelian p-forms. In this work we introduce an index-free formulation of these interactions in terms of two sets of Grassmannian variables. We employ this to construct Galileon interactions for mixed-symmetry tensor fields and coupled systems thereof. We argue that these tensors are the natural generalization of scalars with Galileon symmetry, similar to p-forms and scalars with a shift-symmetry. The simplest case corresponds to linearised gravity with Lovelock invariants, relating the Galileon symmetry to diffeomorphisms. Finally, we examine the coupling of a mixed-symmetry tensor to gravity, and demonstrate in an explicit example that the inclusion of appropriate counterterms retains second order field equations.
International Nuclear Information System (INIS)
Lamon, Raphael
2010-01-01
Quantum gravity is an attempt to unify general relativity with quantum mechanics which are the two highly successful fundamental theories of theoretical physics. The main difficulty in this unification arises from the fact that, while general relativity describes gravity as a macroscopic geometrical theory, quantum mechanics explains microscopic phenomena. As a further complication, not only do both theories describe different scales but also their philosophical ramifications and the mathematics used to describe them differ in a dramatic way. Consequently, one possible starting point of an attempt at a unification is quantum mechanics, i.e. particle physics, and try to incorporate gravitation. This pathway has been chosen by particle physicists which led to string theory. On the other hand, loop quantum gravity (LQG) chooses the other possibility, i.e. it takes the geometrical aspects of gravity seriously and quantizes geometry. The first part of this thesis deals with a generalization of loop quantum cosmology (LQC) to toroidal topologies. LQC is a quantization of homogenous solutions of Einstein's field equations using tools from LQG. First the general concepts of closed topologies is introduced with special emphasis on Thurston's theorem and its consequences. It is shown that new degrees of freedom called Teichmueller parameters come into play and their dynamics can be described by a Hamiltonian. Several numerical solutions for a toroidal universe are presented and discussed. Following the guidelines of LQG this dynamics are rewritten using the Ashtekar variables and numerical solutions are shown. However, in order to find a suitable Hilbert space a canonical transformation must be performed. On the other hand this transformation makes the quantization of geometrical quantities less tractable such that two different ways are presented. It is shown that in both cases the spectrum of such geometrical operators depends on the initial value problem. Furthermore, we
International Nuclear Information System (INIS)
Hartle, J.B.
1985-01-01
Simplicial approximation and the ideas associated with the Regge calculus provide a concrete way of implementing a sum over histories formulation of quantum gravity. A simplicial geometry is made up of flat simplices joined together in a prescribed way together with an assignment of lengths to their edges. A sum over simplicial geometries is a sum over the different ways the simplices can be joined together with an integral over their edge lengths. The construction of the simplicial Euclidean action for this approach to quantum general relativity is illustrated. The recovery of the diffeomorphism group in the continuum limit is discussed. Some possible classes of simplicial complexes with which to define a sum over topologies are described. In two dimensional quantum gravity it is argued that a reasonable class is the class of pseudomanifolds
International Nuclear Information System (INIS)
Konopleva, N.P.
1996-01-01
The problems of application of nonperturbative quantization methods in the theories of the gauge fields and gravity are discussed. Unification of interactions is considered in the framework of the geometrical gauge fields theory. Vacuum conception in the unified theory of interactions and instantons role in the vacuum structure are analyzed. The role of vacuum solutions of Einstein equations in definition of the gauge field vacuum is demonstrated
Gravity, Time, and Lagrangians
Huggins, Elisha
2010-01-01
Feynman mentioned to us that he understood a topic in physics if he could explain it to a college freshman, a high school student, or a dinner guest. Here we will discuss two topics that took us a while to get to that level. One is the relationship between gravity and time. The other is the minus sign that appears in the Lagrangian. (Why would one…
Spontaneously generated gravity
International Nuclear Information System (INIS)
Zee, A.
1981-01-01
We show, following a recent suggestion of Adler, that gravity may arise as a consequence of dynamical symmetry breaking in a scale- and gauge-invariant world. Our calculation is not tied to any specific scheme of dynamical symmetry breaking. A representation for Newton's coupling constant in terms of flat-space quantities is derived. The sign of Newton's coupling constant appears to depend on infrared details of the symmetry-breaking mechanism
Rovelli, Carlo
2008-01-01
The problem of describing the quantum behavior of gravity, and thus understanding quantum spacetime , is still open. Loop quantum gravity is a well-developed approach to this problem. It is a mathematically well-defined background-independent quantization of general relativity, with its conventional matter couplings. Today research in loop quantum gravity forms a vast area, ranging from mathematical foundations to physical applications. Among the most significant results obtained so far are: (i) The computation of the spectra of geometrical quantities such as area and volume, which yield tentative quantitative predictions for Planck-scale physics. (ii) A physical picture of the microstructure of quantum spacetime, characterized by Planck-scale discreteness. Discreteness emerges as a standard quantum effect from the discrete spectra, and provides a mathematical realization of Wheeler's "spacetime foam" intuition. (iii) Control of spacetime singularities, such as those in the interior of black holes and the cosmological one. This, in particular, has opened up the possibility of a theoretical investigation into the very early universe and the spacetime regions beyond the Big Bang. (iv) A derivation of the Bekenstein-Hawking black-hole entropy. (v) Low-energy calculations, yielding n -point functions well defined in a background-independent context. The theory is at the roots of, or strictly related to, a number of formalisms that have been developed for describing background-independent quantum field theory, such as spin foams, group field theory, causal spin networks, and others. I give here a general overview of ideas, techniques, results and open problems of this candidate theory of quantum gravity, and a guide to the relevant literature.
Directory of Open Access Journals (Sweden)
Rovelli Carlo
2008-07-01
Full Text Available The problem of describing the quantum behavior of gravity, and thus understanding quantum spacetime, is still open. Loop quantum gravity is a well-developed approach to this problem. It is a mathematically well-defined background-independent quantization of general relativity, with its conventional matter couplings. Today research in loop quantum gravity forms a vast area, ranging from mathematical foundations to physical applications. Among the most significant results obtained so far are: (i The computation of the spectra of geometrical quantities such as area and volume, which yield tentative quantitative predictions for Planck-scale physics. (ii A physical picture of the microstructure of quantum spacetime, characterized by Planck-scale discreteness. Discreteness emerges as a standard quantum effect from the discrete spectra, and provides a mathematical realization of Wheeler’s “spacetime foam” intuition. (iii Control of spacetime singularities, such as those in the interior of black holes and the cosmological one. This, in particular, has opened up the possibility of a theoretical investigation into the very early universe and the spacetime regions beyond the Big Bang. (iv A derivation of the Bekenstein–Hawking black-hole entropy. (v Low-energy calculations, yielding n-point functions well defined in a background-independent context. The theory is at the roots of, or strictly related to, a number of formalisms that have been developed for describing background-independent quantum field theory, such as spin foams, group field theory, causal spin networks, and others. I give here a general overview of ideas, techniques, results and open problems of this candidate theory of quantum gravity, and a guide to the relevant literature.
Semiclassical unimodular gravity
International Nuclear Information System (INIS)
Fiol, Bartomeu; Garriga, Jaume
2010-01-01
Classically, unimodular gravity is known to be equivalent to General Relativity (GR), except for the fact that the effective cosmological constant Λ has the status of an integration constant. Here, we explore various formulations of unimodular gravity beyond the classical limit. We first consider the non-generally covariant action formulation in which the determinant of the metric is held fixed to unity. We argue that the corresponding quantum theory is also equivalent to General Relativity for localized perturbative processes which take place in generic backgrounds of infinite volume (such as asymptotically flat spacetimes). Next, using the same action, we calculate semiclassical non-perturbative quantities, which we expect will be dominated by Euclidean instanton solutions. We derive the entropy/area ratio for cosmological and black hole horizons, finding agreement with GR for solutions in backgrounds of infinite volume, but disagreement for backgrounds with finite volume. In deriving the above results, the path integral is taken over histories with fixed 4-volume. We point out that the results are different if we allow the 4-volume of the different histories to vary over a continuum range. In this ''generalized'' version of unimodular gravity, one recovers the full set of Einstein's equations in the classical limit, including the trace, so Λ is no longer an integration constant. Finally, we consider the generally covariant theory due to Henneaux and Teitelboim, which is classically equivalent to unimodular gravity. In this case, the standard semiclassical GR results are recovered provided that the boundary term in the Euclidean action is chosen appropriately
Granular Superconductors and Gravity
Noever, David; Koczor, Ron
1999-01-01
As a Bose condensate, superconductors provide novel conditions for revisiting previously proposed couplings between electromagnetism and gravity. Strong variations in Cooper pair density, large conductivity and low magnetic permeability define superconductive and degenerate condensates without the traditional density limits imposed by the Fermi energy (approx. 10(exp -6) g cu cm). Recent experiments have reported anomalous weight loss for a test mass suspended above a rotating Type II, YBCO superconductor, with a relatively high percentage change (0.05-2.1%) independent of the test mass' chemical composition and diamagnetic properties. A variation of 5 parts per 104 was reported above a stationary (non-rotating) superconductor. In experiments using a sensitive gravimeter, bulk YBCO superconductors were stably levitated in a DC magnetic field and exposed without levitation to low-field strength AC magnetic fields. Changes in observed gravity signals were measured to be less than 2 parts in 108 of the normal gravitational acceleration. Given the high sensitivity of the test, future work will examine variants on the basic magnetic behavior of granular superconductors, with particular focus on quantifying their proposed importance to gravity.
Sjogren, W. L.; Ananda, M.; Williams, B. G.; Birkeland, P. W.; Esposito, P. S.; Wimberly, R. N.; Ritke, S. J.
1981-01-01
Results of Pioneer Venus Orbiter observations concerning the gravity field of Venus are presented. The gravitational data was obtained from reductions of Doppler radio tracking data for the Orbiter, which is in a highly eccentric orbit with periapsis altitude varying from 145 to 180 km and nearly fixed periapsis latitude of 15 deg N. The global gravity field was obtained through the simultaneous estimation of the orbit state parameters and gravity coefficients from long-period variations in orbital element rates. The global field has been described with sixth degree and order spherical harmonic coefficients, which are capable of resolving the three major topographical features on Venus. Local anomalies have been mapped using line-of-sight accelerations derived from the Doppler residuals between 40 deg N and 10 deg S latitude at approximately 300 km spatial resolution. Gravitational data is observed to correspond to topographical data obtained by radar altimeter, with most of the gravitational anomalies about 20-30 milligals. Simulations evaluating the isostatic states of two topographic features indicate that at least partial isostasy prevails, with the possibility of complete compensation.
Moghadam, Maryam Khazaee; Asl, Alireza Kamali; Geramifar, Parham; Zaidi, Habib
2016-01-01
Purpose: The aim of this work is to evaluate the application of tissue-specific dose kernels instead of water dose kernels to improve the accuracy of patient-specific dosimetry by taking tissue heterogeneities into consideration. Materials and Methods: Tissue-specific dose point kernels (DPKs) and
DEFF Research Database (Denmark)
Petersen, Annette
of kernels promoted (10 and 60 kernels/day for the general population and cancer patients, respectively), exposures exceeded the ARfD 17–413 and 3–71 times in toddlers and adults, respectively. The estimated maximum quantity of apricot kernels (or raw apricot material) that can be consumed without exceeding...
Polar gravity fields from GOCE and airborne gravity
DEFF Research Database (Denmark)
Forsberg, René; Olesen, Arne Vestergaard; Yidiz, Hasan
2011-01-01
Airborne gravity, together with high-quality surface data and ocean satellite altimetric gravity, may supplement GOCE to make consistent, accurate high resolution global gravity field models. In the polar regions, the special challenge of the GOCE polar gap make the error characteristics...... of combination models especially sensitive to the correct merging of satellite and surface data. We outline comparisons of GOCE to recent airborne gravity surveys in both the Arctic and the Antarctic. The comparison is done to new 8-month GOCE solutions, as well as to a collocation prediction from GOCE gradients...... in Antarctica. It is shown how the enhanced gravity field solutions improve the determination of ocean dynamic topography in both the Arctic and in across the Drake Passage. For the interior of Antarctica, major airborne gravity programs are currently being carried out, and there is an urgent need...
Gravity signatures of terrane accretion
Franco, Heather; Abbott, Dallas
1999-01-01
In modern collisional environments, accreted terranes are bracketed by forearc gravity lows, a gravitational feature which results from the abandonment of the original trench and the initiation of a new trench seaward of the accreted terrane. The size and shape of the gravity low depends on the type of accreted feature and the strength of the formerly subducting plate. Along the Central American trench, the accretion of Gorgona Island caused a seaward trench jump of 48 to 66 km. The relict trench axes show up as gravity lows behind the trench with minimum values of -78 mgal (N of Gorgona) and -49 mgal (S of Gorgona) respectively. These forearc gravity lows have little or no topographic expression. The active trench immediately seaward of these forearc gravity lows has minimum gravity values of -59 mgal (N of Gorgona) and -58 mgal (S of Gorgona), respectively. In the north, the active trench has a less pronounced gravity low than the sediment covered forearc. In the Mariana arc, two Cretaceous seamounts have been accreted to the Eocene arc. The northern seamount is most likely a large block, the southern seamount may be a thrust slice. These more recent accretion events have produced modest forearc topographic and gravity lows in comparison with the topographic and gravity lows within the active trench. However, the minimum values of the Mariana forearc gravity lows are modest only by comparison to the Mariana Trench (-216 mgal); their absolute values are more negative than at Gorgona Island (-145 to -146 mgal). We speculate that the forearc gravity lows and seaward trench jumps near Gorgona Island were produced by the accretion of a hotspot island from a strong plate. The Mariana gravity lows and seaward trench jumps (or thrust slices) were the result of breaking a relatively weak plate close to the seamount edifice. These gravity lows resulting from accretion events should be preserved in older accreted terranes.
Local coding based matching kernel method for image classification.
Directory of Open Access Journals (Sweden)
Yan Song
Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
Protein fold recognition using geometric kernel data fusion.
Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves
2014-07-01
Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks are publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
Generalized synthetic kernel approximation for elastic moderation of fast neutrons
International Nuclear Information System (INIS)
Yamamoto, Koji; Sekiya, Tamotsu; Yamamura, Yasunori.
1975-01-01
A method of synthetic kernel approximation is examined in some detail with a view to simplifying the treatment of the elastic moderation of fast neutrons. A sequence of unified kernel (fsub(N)) is introduced, which is then divided into two subsequences (Wsub(n)) and (Gsub(n)) according to whether N is odd (Wsub(n)=fsub(2n-1), n=1,2, ...) or even (Gsub(n)=fsub(2n), n=0,1, ...). The W 1 and G 1 kernels correspond to the usual Wigner and GG kernels, respectively, and the Wsub(n) and Gsub(n) kernels for n>=2 represent generalizations thereof. It is shown that the Wsub(n) kernel solution with a relatively small n (>=2) is superior on the whole to the Gsub(n) kernel solution for the same index n, while both converge to the exact values with increasing n. To evaluate the collision density numerically and rapidly, a simple recurrence formula is derived. In the asymptotic region (except near resonances), this recurrence formula allows calculation with a relatively coarse mesh width whenever hsub(a)<=0.05 at least. For calculations in the transient lethargy region, a mesh width of order epsilon/10 is small enough to evaluate the approximate collision density psisub(N) with an accuracy comparable to that obtained analytically. It is shown that, with the present method, an order of approximation of about n=7 should yield a practically correct solution diviating not more than 1% in collision density. (auth.)
Unsupervised multiple kernel learning for heterogeneous data integration.
Mariette, Jérôme; Villa-Vialaneix, Nathalie
2018-03-15
Recent high-throughput sequencing advances have expanded the breadth of available omics datasets and the integrated analysis of multiple datasets obtained on the same samples has allowed to gain important insights in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology since produced datasets are often of heterogeneous types, with the need of developing generic methods to take their different specificities into account. We propose a multiple kernel framework that allows to integrate multiple datasets of various types into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets, collected during the TARA Oceans expedition, was explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA in regards with the original data. Second, the multi-omics breast cancer datasets, provided by The Cancer Genome Atlas, is analysed using a kernel Self-Organizing Maps with both single and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method to improve the representation of the studied biological system. Proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package and a tutorial describing the approach can be found on mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
Collision kernels in the eikonal approximation for Lennard-Jones interaction potential
International Nuclear Information System (INIS)
Zielinska, S.
1985-03-01
The velocity changing collisions are conveniently described by collisional kernels. These kernels depend on an interaction potential and there is a necessity for evaluating them for realistic interatomic potentials. Using the collision kernels, we are able to investigate the redistribution of atomic population's caused by the laser light and velocity changing collisions. In this paper we present the method of evaluating the collision kernels in the eikonal approximation. We discuss the influence of the potential parameters Rsub(o)sup(i), epsilonsub(o)sup(i) on kernel width for a given atomic state. It turns out that unlike the collision kernel for the hard sphere model of scattering the Lennard-Jones kernel is not so sensitive to changes of Rsub(o)sup(i) as the previous one. Contrary to the general tendency of approximating collisional kernels by the Gaussian curve, kernels for the Lennard-Jones potential do not exhibit such a behaviour. (author)
Cosmological tests of modified gravity.
Koyama, Kazuya
2016-04-01
We review recent progress in the construction of modified gravity models as alternatives to dark energy as well as the development of cosmological tests of gravity. Einstein's theory of general relativity (GR) has been tested accurately within the local universe i.e. the Solar System, but this leaves the possibility open that it is not a good description of gravity at the largest scales in the Universe. This being said, the standard model of cosmology assumes GR on all scales. In 1998, astronomers made the surprising discovery that the expansion of the Universe is accelerating, not slowing down. This late-time acceleration of the Universe has become the most challenging problem in theoretical physics. Within the framework of GR, the acceleration would originate from an unknown dark energy. Alternatively, it could be that there is no dark energy and GR itself is in error on cosmological scales. In this review, we first give an overview of recent developments in modified gravity theories including f(R) gravity, braneworld gravity, Horndeski theory and massive/bigravity theory. We then focus on common properties these models share, such as screening mechanisms they use to evade the stringent Solar System tests. Once armed with a theoretical knowledge of modified gravity models, we move on to discuss how we can test modifications of gravity on cosmological scales. We present tests of gravity using linear cosmological perturbations and review the latest constraints on deviations from the standard [Formula: see text]CDM model. Since screening mechanisms leave distinct signatures in the non-linear structure formation, we also review novel astrophysical tests of gravity using clusters, dwarf galaxies and stars. The last decade has seen a number of new constraints placed on gravity from astrophysical to cosmological scales. Thanks to on-going and future surveys, cosmological tests of gravity will enjoy another, possibly even more, exciting ten years.
Bivariate discrete beta Kernel graduation of mortality data.
Mazza, Angelo; Punzo, Antonio
2015-07-01
Various parametric/nonparametric techniques have been proposed in literature to graduate mortality data as a function of age. Nonparametric approaches, as for example kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidths selection. Using simulations studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make simulations realistic, a bivariate dataset, based on probabilities of dying recorded for the US males, is used. Simulations have confirmed the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.
Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.
Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao
2017-06-21
In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space, where a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes, and restricting the coefficient vectors to be transformed into a feature space, where the features are highly correlated inner-class and nearly independent between-classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature is needless to be known, while the inner product of the discriminative feature with kernel matrix embedded is available, and is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.
Mixed kernel function support vector regression for global sensitivity analysis
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide sensitivity analyses in literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
On flame kernel formation and propagation in premixed gases
Energy Technology Data Exchange (ETDEWEB)
Eisazadeh-Far, Kian; Metghalchi, Hameed [Northeastern University, Mechanical and Industrial Engineering Department, Boston, MA 02115 (United States); Parsinejad, Farzan [Chevron Oronite Company LLC, Richmond, CA 94801 (United States); Keck, James C. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)
2010-12-15
Flame kernel formation and propagation in premixed gases have been studied experimentally and theoretically. The experiments have been carried out at constant pressure and temperature in a constant volume vessel located in a high speed shadowgraph system. The formation and propagation of the hot plasma kernel has been simulated for inert gas mixtures using a thermodynamic model. The effects of various parameters including the discharge energy, radiation losses, initial temperature and initial volume of the plasma have been studied in detail. The experiments have been extended to flame kernel formation and propagation of methane/air mixtures. The effect of energy terms including spark energy, chemical energy and energy losses on flame kernel formation and propagation have been investigated. The inputs for this model are the initial conditions of the mixture and experimental data for flame radii. It is concluded that these are the most important parameters effecting plasma kernel growth. The results of laminar burning speeds have been compared with previously published results and are in good agreement. (author)
Insights from Classifying Visual Concepts with Multiple Kernel Learning
Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki
2012-01-01
Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/(Accessed 2012 Jun 25). PMID:22936970
Semi-supervised learning for ordinal Kernel Discriminant Analysis.
Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C
2016-12-01
Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kernel Methods for Mining Instance Data in Ontologies
Bloehdorn, Stephan; Sure, York
The amount of ontologies and meta data available on the Web is constantly growing. The successful application of machine learning techniques for learning of ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principal approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology which can be flexibly assembled and tuned. Initial experiments on real world Semantic Web data enjoy promising results and show the usefulness of our approach.
Norsk, P.; Shelhamer, M.
2016-01-01
This panel will present NASA's plans for ongoing and future research to define the requirements for Artificial Gravity (AG) as a countermeasure against the negative health effects of long-duration weightlessness. AG could mitigate the gravity-sensitive effects of spaceflight across a host of physiological systems. Bringing gravity to space could mitigate the sensorimotor and neuro-vestibular disturbances induced by G-transitions upon reaching a planetary body, and the cardiovascular deconditioning and musculoskeletal weakness induced by weightlessness. Of particular interest for AG during deep-space missions is mitigation of the Visual Impairment Intracranial Pressure (VIIP) syndrome that the majority of astronauts exhibit in space to varying degrees, and which presumably is associated with weightlessness-induced fluid shift from lower to upper body segments. AG could be very effective for reversing the fluid shift and thus help prevent VIIP. The first presentation by Dr. Charles will summarize some of the ground-based and (very little) space-based research that has been conducted on AG by the various space programs. Dr. Paloski will address the use of AG during deep-space exploration-class missions and describe the different AG scenarios such as intra-vehicular, part-of-vehicle, or whole-vehicle centrifugations. Dr. Clement will discuss currently planned NASA research as well as how to coordinate future activities among NASA's international partners. Dr. Barr will describe some possible future plans for using space- and ground-based partial-G analogs to define the relationship between physiological responses and G levels between 0 and 1. Finally, Dr. Stenger will summarize how the human cardiovascular system could benefit from intermittent short-radius centrifugations during long-duration missions.
Directory of Open Access Journals (Sweden)
Shan Gao
2011-04-01
Full Text Available The remarkable connections between gravity and thermodynamics seem to imply that gravity is not fundamental but emergent, and in particular, as Verlinde suggested, gravity is probably an entropic force. In this paper, we will argue that the idea of gravity as an entropic force is debatable. It is shown that there is no convincing analogy between gravity and entropic force in Verlinde’s example. Neither holographic screen nor test particle satisfies all requirements for the existence of entropic force in a thermodynamics system. Furthermore, we show that the entropy increase of the screen is not caused by its statistical tendency to increase entropy as required by the existence of entropic force, but in fact caused by gravity. Therefore, Verlinde’s argument for the entropic origin of gravity is problematic. In addition, we argue that the existence of a minimum size of spacetime, together with the Heisenberg uncertainty principle in quantum theory, may imply the fundamental existence of gravity as a geometric property of spacetime. This may provide a further support for the conclusion that gravity is not an entropic force.
Active Response Gravity Offload System
Valle, Paul; Dungan, Larry; Cunningham, Thomas; Lieberman, Asher; Poncia, Dina
2011-01-01
The Active Response Gravity Offload System (ARGOS) provides the ability to simulate with one system the gravity effect of planets, moons, comets, asteroids, and microgravity, where the gravity is less than Earth fs gravity. The system works by providing a constant force offload through an overhead hoist system and horizontal motion through a rail and trolley system. The facility covers a 20 by 40-ft (approximately equals 6.1 by 12.2m) horizontal area with 15 ft (approximately equals4.6 m) of lifting vertical range.
Teleparallel equivalent of Lovelock gravity
González, P. A.; Vásquez, Yerko
2015-12-01
There is a growing interest in modified gravity theories based on torsion, as these theories exhibit interesting cosmological implications. In this work inspired by the teleparallel formulation of general relativity, we present its extension to Lovelock gravity known as the most natural extension of general relativity in higher-dimensional space-times. First, we review the teleparallel equivalent of general relativity and Gauss-Bonnet gravity, and then we construct the teleparallel equivalent of Lovelock gravity. In order to achieve this goal, we use the vielbein and the connection without imposing the Weitzenböck connection. Then, we extract the teleparallel formulation of the theory by setting the curvature to null.
International Nuclear Information System (INIS)
Aldama, Mariana Espinosa
2015-01-01
The gravity apple tree is a genealogical tree of the gravitation theories developed during the past century. The graphic representation is full of information such as guides in heuristic principles, names of main proponents, dates and references for original articles (See under Supplementary Data for the graphic representation). This visual presentation and its particular classification allows a quick synthetic view for a plurality of theories, many of them well validated in the Solar System domain. Its diachronic structure organizes information in a shape of a tree following similarities through a formal concept analysis. It can be used for educational purposes or as a tool for philosophical discussion. (paper)
Airborne Gravity: NGS' Gravity Data for AN05 (2011)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2011 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...
Airborne Gravity: NGS' Gravity Data for AN06 (2011)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2011 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...
Airborne Gravity: NGS' Gravity Data for CS08 (2015)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for CS08 collected in 2006 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...
Airborne Gravity: NGS' Gravity Data for AS02 (2010)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2010 over 2 surveys. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...
Airborne Gravity: NGS' Gravity Data for ES02 (2013)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Florida and the Gulf of Mexico collected in 2013 over 1 survey. This data set is part of the Gravity for the Re-definition of the American...
Airborne Gravity: NGS' Gravity Data for AN04 (2010)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2010 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...
Airborne Gravity: NGS' Gravity Data for CS05 (2014)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Texas collected in 2014 over 2 surveys. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...
Airborne Gravity: NGS' Gravity Data for CS07 (2014 & 2016)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Texas collected in 2014 & 2016 over 3 surveys,TX14-2, TX16-1 and TX16-2. This data set is part of the Gravity for the Re-definition of...
Airborne Gravity: NGS' Gravity Data for AS01 (2008)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2008 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...
Airborne Gravity: NGS' Gravity Data for CS04 (2009)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Texas collected in 2009 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...
Airborne Gravity: NGS' Gravity Data for AN02 (2010)
National Oceanic and Atmospheric Administration, Department of Commerce — Airborne gravity data for Alaska collected in 2010 over 1 survey. This data set is part of the Gravity for the Re-definition of the American Vertical Datum (GRAV-D)...
Lovelock gravities from Born–Infeld gravity theory
Directory of Open Access Journals (Sweden)
P.K. Concha
2017-02-01
Full Text Available We present a Born–Infeld gravity theory based on generalizations of Maxwell symmetries denoted as Cm. We analyze different configuration limits allowing to recover diverse Lovelock gravity actions in six dimensions. Further, the generalization to higher even dimensions is also considered.
Lovelock gravities from Born-Infeld gravity theory
Concha, P. K.; Merino, N.; Rodríguez, E. K.
2017-02-01
We present a Born-Infeld gravity theory based on generalizations of Maxwell symmetries denoted as Cm. We analyze different configuration limits allowing to recover diverse Lovelock gravity actions in six dimensions. Further, the generalization to higher even dimensions is also considered.
Semisupervised kernel marginal Fisher analysis for face recognition.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
2013-01-01
Dimensionality reduction is a key problem in face recognition due to the high-dimensionality of face image. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparameter kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
Capturing Option Anomalies with a Variance-Dependent Pricing Kernel
DEFF Research Database (Denmark)
Christoffersen, Peter; Heston, Steven; Jacobs, Kris
2013-01-01
We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric...... evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation...... for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution....
Heat Kernel Asymptotics of Zaremba Boundary Value Problem
Energy Technology Data Exchange (ETDEWEB)
Avramidi, Ivan G. [Department of Mathematics, New Mexico Institute of Mining and Technology (United States)], E-mail: iavramid@nmt.edu
2004-03-15
The Zaremba boundary-value problem is a boundary value problem for Laplace-type second-order partial differential operators acting on smooth sections of a vector bundle over a smooth compact Riemannian manifold with smooth boundary but with discontinuous boundary conditions, which include Dirichlet boundary conditions on one part of the boundary and Neumann boundary conditions on another part of the boundary. We study the heat kernel asymptotics of Zaremba boundary value problem. The construction of the asymptotic solution of the heat equation is described in detail and the heat kernel is computed explicitly in the leading approximation. Some of the first nontrivial coefficients of the heat kernel asymptotic expansion are computed explicitly.
Weighted Feature Gaussian Kernel SVM for Emotion Recognition.
Wei, Wei; Jia, Qingxuan
2016-01-01
Emotion recognition with weighted feature based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method, utilizing subregion recognition rate to weight kernel function. First, we divide the facial expression image into some uniform subregions and calculate corresponding recognition rate and weight. Then, we get a weighted feature Gaussian kernel function and construct a classifier based on Support Vector Machine (SVM). At last, the experimental results suggest that the approach based on weighted feature Gaussian kernel function has good performance on the correct rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method has achieved encouraging recognition results compared to the state-of-the-art methods.
Rational kernels for Arabic Root Extraction and Text Classification
Directory of Open Access Journals (Sweden)
Attia Nehar
2016-04-01
Full Text Available In this paper, we address the problems of Arabic Text Classification and root extraction using transducers and rational kernels. We introduce a new root extraction approach on the basis of the use of Arabic patterns (Pattern Based Stemmer. Transducers are used to model these patterns and root extraction is done without relying on any dictionary. Using transducers for extracting roots, documents are transformed into finite state transducers. This document representation allows us to use and explore rational kernels as a framework for Arabic Text Classification. Root extraction experiments are conducted on three word collections and yield 75.6% of accuracy. Classification experiments are done on the Saudi Press Agency dataset and N-gram kernels are tested with different values of N. Accuracy and F1 report 90.79% and 62.93% respectively. These results show that our approach, when compared with other approaches, is promising specially in terms of accuracy and F1.
Contravariant gravity on Poisson manifolds and Einstein gravity
International Nuclear Information System (INIS)
Kaneko, Yukio; Watamura, Satoshi; Muraki, Hisayoshi
2017-01-01
A relation between gravity on Poisson manifolds proposed in Asakawa et al (2015 Fortschr. Phys . 63 683–704) and Einstein gravity is investigated. The compatibility of the Poisson and Riemann structures defines a unique connection, the contravariant Levi-Civita connection, and leads to the idea of the contravariant gravity. The Einstein–Hilbert-type action yields an equation of motion which is written in terms of the analog of the Einstein tensor, and it includes couplings between the metric and the Poisson tensor. The study of the Weyl transformation reveals properties of those interactions. It is argued that this theory can have an equivalent description as a system of Einstein gravity coupled to matter. As an example, it is shown that the contravariant gravity on a two-dimensional Poisson manifold can be described by a real scalar field coupled to the metric in a specific manner. (paper)
Alvarez-Gaume, Luis; Kounnas, Costas; Lust, Dieter; Riotto, Antonio
2016-01-01
We discuss quadratic gravity where terms quadratic in the curvature tensor are included in the action. After reviewing the corresponding field equations, we analyze in detail the physical propagating modes in some specific backgrounds. First we confirm that the pure $R^2$ theory is indeed ghost free. Then we point out that for flat backgrounds the pure $R^2$ theory propagates only a scalar massless mode and no spin-two tensor mode. However, the latter emerges either by expanding the theory around curved backgrounds like de Sitter or anti-de Sitter, or by changing the long-distance dynamics by introducing the standard Einstein term. In both cases, the theory is modified in the infrared and a propagating graviton is recovered. Hence we recognize a subtle interplay between the UV and IR properties of higher order gravity. We also calculate the corresponding Newton's law for general quadratic curvature theories. Finally, we discuss how quadratic actions may be obtained from a fundamental theory like string- or M-...
International Nuclear Information System (INIS)
Jones, K.R.W.
1995-01-01
We develop a nonlinear quantum theory of Newtonian gravity consistent with an objective interpretation of the wavefunction. Inspired by the ideas of Schroedinger, and Bell, we seek a dimensional reduction procedure to map complex wavefunctions in configuration space onto a family of observable fields in space-time. Consideration of quasi-classical conservation laws selects the reduced one-body quantities as the basis for an explicit quasi-classical coarse-graining. These we interpret as describing the objective reality of the laboratory. Thereafter, we examine what may stand in the role of the usual Copenhagen observer to localise this quantity against macroscopic dispersion. Only a tiny change is needed, via a generically attractive self-potential. A nonlinear treatment of gravitational self-energy is thus advanced. This term sets a scale for all wavepackets. The Newtonian cosmology is thus closed, without need of an external observer. Finally, the concept of quantisation is re-interpreted as a nonlinear eigenvalue problem. To illustrate, we exhibit an elementary family of gravitationally self-bound solitary waves. Contrasting this theory with its canonically quantised analogue, we find that the given interpretation is empirically distinguishable, in principle. This result encourages deeper study of nonlinear field theories as a testable alternative to canonically quantised gravity. (author). 46 refs., 5 figs
International Nuclear Information System (INIS)
Goldman, T.; Hughes, R.J.; Nieto, M.M.
1988-01-01
No one has ever dropped a single particle of antimatter. Yet physicists assume that it would fall to the ground just like ordinary matter. Their arguments are based on two well established ideas: the equivalence principle of gravitation and the quantum-mechanical symmetry between matter and antimatter. Today this line of reasoning is being undermined by the possibility that the first of these ideas, the principle of equivalence, may not be true. Indeed all modern attempts to include gravity with the other forces of nature in a consistent, unified quantum theory predict the existence of new gravitational-strength forces, that among other things, will violate the principle. Such effects have been seen already in recent experiments. Hence, an experiment to measure the gravitational acceleration of antimatter could be of great importance to the understanding of quantum gravity. An international team has been formed to measure the graviational acceleration of antiprotons. Such an experiment would provide an unambiquous test, if new gravitational interactions do exist. 10 figs
The holographic optical micro-manipulation system based on counter-propagating beams
Czech Academy of Sciences Publication Activity Database
Čižmár, T.; Brzobohatý, Oto; Dholakia, K.; Zemánek, Pavel
2011-01-01
Roč. 8, č. 1 (2011), s. 50-56 ISSN 1612-2011 R&D Projects: GA ČR GA202/09/0348; GA MŠk(CZ) LC06007; GA MŠk OC08034; GA MŠk ED0017/01/01 Grant - others:EC(XE) COST MP0604 Institutional research plan: CEZ:AV0Z20650511 Keywords : holographic optical trapping * dual beam trap * spatial light modulator * optical rotator Subject RIV: BH - Optics, Masers, Lasers Impact factor: 9.970, year: 2011
Collisional effects in the ion Weibel instability for two counter-propagating plasma streams
Energy Technology Data Exchange (ETDEWEB)
Ryutov, D. D.; Fiuza, F.; Huntington, C. M.; Ross, J. S.; Park, H.-S. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)
2014-03-15
Experiments directed towards the study of the collisionless interaction between two counter-streaming plasma flows generated by high-power lasers are designed in such a way as to make collisions between the ions of the two flows negligibly rare. This is reached by making flow velocities v as high as possible and thereby exploiting the 1/v{sup 4} dependence of the Rutherford cross-section. At the same time, the plasma temperature of each flow may be relatively low so that collisional mean-free paths for the intra-stream particle collisions may be much smaller than the characteristic spatial scale of the unstable modes required for the shock formation. The corresponding effects are studied in this paper for the case of the ion Weibel (filamentation) instability. Dispersion relations for the case of strong intra-stream collisions are derived. It is shown that the growth-rates become significantly smaller than those stemming from a collisionless model. The underlying physics is mostly related to the increase of the electron stabilizing term. Additional effects are an increased “stiffness” of the collisional ion gas and the ion viscous dissipation. A parameter domain where collisions are important is identified.
National Research Council Canada - National Science Library
Flynn, Richard A; Shao, Bing; Chachisvilis, Mirianas; Ozkan, Mihrimah; Esener, Sadik C
2005-01-01
.... Different from the current best technique for microparticles refractive index measurement, refractometry, a bulk technique requiring changing the fluid composition of the sample, our optical trap...
Tracking the density evolution in counter-propagating shock waves using imaging X-ray scattering
Czech Academy of Sciences Publication Activity Database
Zastrau, U.; Gamboa, E. J.; Kraus, D.; Benage, J. F.; Drake, R. P.; Efthimion, P.; Falk, Kateřina; Falcone, R.W.; Fletcher, L. B.; Galtier, E.; Gauthier, M.; Granados, E.; Hastings, J.B.; Heimann, P.; Hill, K.; Keiter, P. A.; Lu, J.; MacDonald, M. J.; Montgomery, D. S.; Nagler, B.; Pablant, N.; Schropp, A.; Tobias, B.; Gericke, D.O.; Glenzer, S. H.; Lee, H. J.
2016-01-01
Roč. 109, č. 3 (2016), 1-4, č. článku 031108. ISSN 0003-6951 R&D Projects: GA MŠk LQ1606; GA MŠk EF15_008/0000162 Grant - others:ELI Beamlines(XE) CZ.02.1.01/0.0/0.0/15_008/0000162 Institutional support: RVO:68378271 Keywords : Thomson scattering * metal transition * compression * deuterium * diamond * carbon * matter Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 3.411, year: 2016
National Research Council Canada - National Science Library
Flynn, Richard A; Shao, Bing; Chachisvilis, Mirianas; Ozkan, Mihrimah; Esener, Sadik C
2005-01-01
We propose and demonstrate a novel approach to measure the size and refractive index of microparticles based on two beam optical trapping, where forward scattered light is detected to give information about the particle...
A multi-label learning based kernel automatic recommendation method for support vector machine.
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid on constructing new kernels and choosing suitable parameter values for a specific kernel function, but less on kernel selection. Furthermore, most of current kernel selection methods focus on seeking a best kernel with the highest classification accuracy via cross-validation, they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
Broken rice kernels and the kinetics of rice hydration and texture during cooking.
Saleh, Mohammed; Meullenet, Jean-Francois
2013-05-01
During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear as to how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work intended to study the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels forming treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernels ratio. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P hardness was negatively correlated to the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.
Measurement of Weight of Kernels in a Simulated Cylindrical Fuel Compact for HTGR
International Nuclear Information System (INIS)
Kim, Woong Ki; Lee, Young Woo; Kim, Young Min; Kim, Yeon Ku; Eom, Sung Ho; Jeong, Kyung Chai; Cho, Moon Sung; Cho, Hyo Jin; Kim, Joo Hee
2011-01-01
The TRISO-coated fuel particle for the high temperature gas-cooled reactor (HTGR) is composed of a nuclear fuel kernel and outer coating layers. The coated particles are mixed with graphite matrix to make HTGR fuel element. The weight of fuel kernels in an element is generally measured by the chemical analysis or a gamma-ray spectrometer. Although it is accurate to measure the weight of kernels by the chemical analysis, the samples used in the analysis cannot be put again in the fabrication process. Furthermore, radioactive wastes are generated during the inspection procedure. The gamma-ray spectrometer requires an elaborate reference sample to reduce measurement errors induced from the different geometric shape of test sample from that of reference sample. X-ray computed tomography (CT) is an alternative to measure the weight of kernels in a compact nondestructively. In this study, X-ray CT is applied to measure the weight of kernels in a cylindrical compact containing simulated TRISO-coated particles with ZrO 2 kernels. The volume of kernels as well as the number of kernels in the simulated compact is measured from the 3-D density information. The weight of kernels was calculated from the volume of kernels or the number of kernels. Also, the weight of kernels was measured by extracting the kernels from a compact to review the result of the X-ray CT application
Is there a quantum theory of gravity
International Nuclear Information System (INIS)
Strominger, A.
1984-01-01
The paper concerns attempts to construct a unitary, renormalizable quantum field theory of gravity. Renormalizability and unitarity in quantum gravity; the 1/N expansion; 1/D expansions; and quantum gravity and particle physics; are all discussed. (U.K.)
Theoretical developments for interpreting kernel spectral clustering from alternative viewpoints
Directory of Open Access Journals (Sweden)
Diego Peluffo-Ordóñez
2017-08-01
Full Text Available To perform an exploration process over complex structured data within unsupervised settings, the so-called kernel spectral clustering (KSC is one of the most recommended and appealing approaches, given its versatility and elegant formulation. In this work, we explore the relationship between (KSC and other well-known approaches, namely normalized cut clustering and kernel k-means. To do so, we first deduce a generic KSC model from a primal-dual formulation based on least-squares support-vector machines (LS-SVM. For experiments, KSC as well as other consider methods are assessed on image segmentation tasks to prove their usability.
Modelling microwave heating of discrete samples of oil palm kernels
International Nuclear Information System (INIS)
Law, M.C.; Liew, E.L.; Chang, S.L.; Chan, Y.S.; Leo, C.P.
2016-01-01
Highlights: • Microwave (MW) drying of oil palm kernels is experimentally determined and modelled. • MW heating of discrete samples of oil palm kernels (OPKs) is simulated. • OPK heating is due to contact effect, MW interference and heat transfer mechanisms. • Electric field vectors circulate within OPKs sample. • Loosely-packed arrangement improves temperature uniformity of OPKs. - Abstract: Recently, microwave (MW) pre-treatment of fresh palm fruits has showed to be environmentally friendly compared to the existing oil palm milling process as it eliminates the condensate production of palm oil mill effluent (POME) in the sterilization process. Moreover, MW-treated oil palm fruits (OPF) also possess better oil quality. In this work, the MW drying kinetic of the oil palm kernels (OPK) was determined experimentally. Microwave heating/drying of oil palm kernels was modelled and validated. The simulation results show that temperature of an OPK is not the same over the entire surface due to constructive and destructive interferences of MW irradiance. The volume-averaged temperature of an OPK is higher than its surface temperature by 3–7 °C, depending on the MW input power. This implies that point measurement of temperature reading is inadequate to determine the temperature history of the OPK during the microwave heating process. The simulation results also show that arrangement of OPKs in a MW cavity affects the kernel temperature profile. The heating of OPKs were identified to be affected by factors such as local electric field intensity due to MW absorption, refraction, interference, the contact effect between kernels and also heat transfer mechanisms. The thermal gradient patterns of OPKs change as the heating continues. The cracking of OPKs is expected to occur first in the core of the kernel and then it propagates to the kernel surface. The model indicates that drying of OPKs is a much slower process compared to its MW heating. The model is useful
Graphical analyses of connected-kernel scattering equations
International Nuclear Information System (INIS)
Picklesimer, A.
1983-01-01
Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The basic result is the application of graphical methods to the derivation of interaction-set equations. This yields a new, simplified form for some members of the class and elucidates the general structural features of the entire class
Reproducing Kernel Method for Solving Nonlinear Differential-Difference Equations
Directory of Open Access Journals (Sweden)
Reza Mokhtari
2012-01-01
Full Text Available On the basis of reproducing kernel Hilbert spaces theory, an iterative algorithm for solving some nonlinear differential-difference equations (NDDEs is presented. The analytical solution is shown in a series form in a reproducing kernel space, and the approximate solution , is constructed by truncating the series to terms. The convergence of , to the analytical solution is also proved. Results obtained by the proposed method imply that it can be considered as a simple and accurate method for solving such differential-difference problems.
Kernel and divergence techniques in high energy physics separations
Bouř, Petr; Kůs, Václav; Franc, Jiří
2017-10-01
Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We present a great potential of adaptive kernel density estimation as the nested separation method of the supervised binary divergence decision tree. Also, we provide a proof of alternative computing approach for kernel estimates utilizing Fourier transform. Further, we apply our method to Monte Carlo data set from the particle accelerator Tevatron at DØ experiment in Fermilab and provide final top-antitop signal separation results. We have achieved up to 82 % AUC while using the restricted feature selection entering the signal separation procedure.
Rebootless Linux Kernel Patching with Ksplice Uptrack at BNL
International Nuclear Information System (INIS)
Hollowell, Christopher; Pryor, James; Smith, Jason
2012-01-01
Ksplice/Oracle Uptrack is a software tool and update subscription service which allows system administrators to apply security and bug fix patches to the Linux kernel running on servers/workstations without rebooting them. The RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has deployed Uptrack on nearly 2,000 hosts running Scientific Linux and Red Hat Enterprise Linux. The use of this software has minimized downtime, and increased our security posture. In this paper, we provide an overview of Ksplice's rebootless kernel patch creation/insertion mechanism, and our experiences with Uptrack.
Employment of kernel methods on wind turbine power performance assessment
DEFF Research Database (Denmark)
Skrimpas, Georgios Alexandros; Sweeney, Christian Walsted; Marhadi, Kun S.
2015-01-01
A power performance assessment technique is developed for the detection of power production discrepancies in wind turbines. The method employs a widely used nonparametric pattern recognition technique, the kernel methods. The evaluation is based on the trending of an extracted feature from...... the kernel matrix, called similarity index, which is introduced by the authors for the first time. The operation of the turbine and consequently the computation of the similarity indexes is classified into five power bins offering better resolution and thus more consistent root cause analysis. The accurate...
Sparse kernel orthonormalized PLS for feature extraction in large datasets
DEFF Research Database (Denmark)
Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai
2006-01-01
In this paper we are presenting a novel multivariate analysis method for large scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constrains in the solution to improve scalability. The algorithm...... is tested on a benchmark of UCI data sets, and on the analysis of integrated short-time music features for genre prediction. The upshot is that the method has strong expressive power even with rather few features, is clearly outperforming the ordinary kernel PLS, and therefore is an appealing method...
Quantum Gravity in Two Dimensions
DEFF Research Database (Denmark)
Ipsen, Asger Cronberg
The topic of this thesis is quantum gravity in 1 + 1 dimensions. We will focus on two formalisms, namely Causal Dynamical Triangulations (CDT) and Dy- namical Triangulations (DT). Both theories regularize the gravity path integral as a sum over triangulations. The difference lies in the class...
Topological strings from Liouville gravity
International Nuclear Information System (INIS)
Ishibashi, N.; Li, M.
1991-01-01
We study constrained SU(2) WZW models, which realize a class of two-dimensional conformal field theories. We show that they give rise to topological gravity coupled to the topological minimal models when they are coupled to Liouville gravity. (orig.)
Newton-Cartan gravity revisited
Andringa, Roel
2016-01-01
In this research Newton's old theory of gravity is rederived using an algebraic approach known as the gauging procedure. The resulting theory is Newton's theory in the mathematical language of Einstein's General Relativity theory, in which gravity is spacetime curvature. The gauging procedure sheds
Fixed points of quantum gravity
Litim, D F
2003-01-01
Euclidean quantum gravity is studied with renormalisation group methods. Analytical results for a non-trivial ultraviolet fixed point are found for arbitrary dimensions and gauge fixing parameter in the Einstein-Hilbert truncation. Implications for quantum gravity in four dimensions are discussed.
Neutron Stars : Magnetism vs Gravity
Indian Academy of Sciences (India)
however, in the magnetosphere, electromagnetic forces dominate over gravity : Fgr = mg ~ 10-18 Newton ; Fem = e V B ~ 10-5 Newton; (for a single electron of mass m and charge e ) ; Hence, the electromagnetic force is 1013 times stronger than gravity !!
Measuring wood specific gravity, correctly
G. Bruce Williamson; Michael C. Wiemann
2010-01-01
The specific gravity (SG) of wood is a measure of the amount of structural material a tree species allocates to support and strength. In recent years, wood specific gravity, traditionally a foresterâs variable, has become the domain of ecologists exploring the universality of plant functional traits and conservationists estimating global carbon stocks. While these...
Testing the master constraint programme for loop quantum gravity: V. Interacting field theories
International Nuclear Information System (INIS)
Dittrich, B; Thiemann, T
2006-01-01
This is the fifth and final paper in our series of five in which we test the master constraint programme for solving the Hamiltonian constraint in loop quantum gravity. Here we consider interacting quantum field theories, specifically we consider the non-Abelian Gauss constraints of Einstein-Yang-Mills theory and 2 + 1 gravity. Interestingly, while Yang-Mills theory in 4D is not yet rigorously defined as an ordinary (Wightman) quantum field theory on Minkowski space, in background-independent quantum field theories such as loop quantum gravity (LQG) this might become possible by working in a new, background-independent representation. While for the Gauss constraint the master constraint can be solved explicitly, for the 2 + 1 theory we are only able to rigorously define the master constraint operator. We show that the, by other methods known, physical Hilbert is contained in the kernel of the master constraint, however, to systematically derive it by only using spectral methods is as complicated as for 3 + 1 gravity and we therefore leave the complete analysis for 3 + 1 gravity
Magnetic Fields Versus Gravity
Hensley, Kerry
2018-04-01
Deep within giant molecular clouds, hidden by dense gas and dust, stars form. Unprecedented data from the Atacama Large Millimeter/submillimeter Array (ALMA) reveal the intricate magnetic structureswoven throughout one of the most massive star-forming regions in the Milky Way.How Stars Are BornThe Horsehead Nebulasdense column of gas and dust is opaque to visible light, but this infrared image reveals the young stars hidden in the dust. [NASA/ESA/Hubble Heritage Team]Simple theory dictates that when a dense clump of molecular gas becomes massive enough that its self-gravity overwhelms the thermal pressure of the cloud, the gas collapses and forms a star. In reality, however, star formation is more complicated than a simple give and take between gravity and pressure. Thedusty molecular gas in stellar nurseries is permeated with magnetic fields, which are thought to impede the inward pull of gravity and slow the rate of star formation.How can we learn about the magnetic fields of distant objects? One way is by measuring dust polarization. An elongated dust grain will tend to align itself with its short axis parallel to the direction of the magnetic field. This systematic alignment of the dust grains along the magnetic field lines polarizes the dust grains emission perpendicular to the local magnetic field. This allows us to infer the direction of the magnetic field from the direction of polarization.Magnetic field orientations for protostars e2 and e8 derived from Submillimeter Array observations (panels a through c) and ALMA observations (panels d and e). Click to enlarge. [Adapted from Koch et al. 2018]Tracing Magnetic FieldsPatrick Koch (Academia Sinica, Taiwan) and collaborators used high-sensitivity ALMA observations of dust polarization to learn more about the magnetic field morphology of Milky Way star-forming region W51. W51 is one of the largest star-forming regions in our galaxy, home to high-mass protostars e2, e8, and North.The ALMA observations reveal