WorldWideScience

Sample records for single kernel hardness

  1. Revisiting the definition of local hardness and hardness kernel.

    Science.gov (United States)

    Polanco-Ramírez, Carlos A; Franco-Pérez, Marco; Carmona-Espíndola, Javier; Gázquez, José L; Ayers, Paul W

    2017-05-17

    An analysis of the hardness kernel and local hardness is performed to propose new definitions for these quantities that follow a similar pattern to the one that characterizes the quantities associated with softness, that is, we have derived new definitions for which the integral of the hardness kernel over the whole space of one of the variables leads to local hardness, and the integral of local hardness over the whole space leads to global hardness. A basic aspect of the present approach is that global hardness keeps its identity as the second derivative of energy with respect to the number of electrons. Local hardness thus obtained depends on the first and second derivatives of energy and electron density with respect to the number of electrons. When these derivatives are approximated by a smooth quadratic interpolation of energy, the expression for local hardness reduces to the one intuitively proposed by Meneses, Tiznado, Contreras and Fuentealba. However, when one combines the first directional derivatives with smooth second derivatives one finds additional terms that allow one to differentiate local hardness for electrophilic attack from the one for nucleophilic attack. Numerical results related to electrophilic attacks on substituted pyridines, substituted benzenes and substituted ethenes are presented to show the overall performance of the new definition.
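
As a purely illustrative numerical sketch of the integral relations stated above (not the paper's actual functional forms), the snippet below builds a toy symmetric "hardness kernel" on a 1D grid, integrates it over one variable to obtain a local hardness, and integrates that over space to obtain a global value; the kernel shape and weights are hypothetical.

```python
# Minimal sketch: integral of the kernel over r' gives a local quantity,
# integral of the local quantity over r gives a global quantity.
import numpy as np

r = np.linspace(-5.0, 5.0, 401)          # 1D stand-in for 3D space
dr = r[1] - r[0]
R, Rp = np.meshgrid(r, r, indexing="ij")

# Hypothetical toy kernel: Gaussian in (r - r'), damped by a density-like weight.
eta2 = np.exp(-(R - Rp) ** 2) * np.exp(-0.1 * (R ** 2 + Rp ** 2))

eta1 = eta2.sum(axis=1) * dr             # "local hardness": integrate over r'
eta_global = eta1.sum() * dr             # "global hardness": integrate over r
print(eta_global)
```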

  2. Flour quality and kernel hardness connection in winter wheat

    Directory of Open Access Journals (Sweden)

    Szabó B. P.

    2016-12-01

Full Text Available Kernel hardness is controlled by friabilin protein and it depends on the relation between the protein matrix and starch granules. Friabilin is present in high concentration in soft grain varieties and in low concentration in hard grain varieties. High-gluten, hard wheat flour generally contains about 12.0–13.0% crude protein under Mid-European conditions. The relationship between wheat protein content and kernel texture is usually positive, and kernel texture influences the power consumption during milling. Hard-textured wheat grains require more grinding energy than soft-textured grains.

  3. Development of nondestructive screening methods for single kernel characterization of wheat

    DEFF Research Database (Denmark)

    Nielsen, J.P.; Pedersen, D.K.; Munck, L.

    2003-01-01

The development of nondestructive screening methods for single seed protein, vitreousness, density, and hardness index has been studied for single kernels of European wheat. A single kernel procedure was applied involving image analysis, near-infrared transmittance (NIT) spectroscopy, laboratory ... predictability. However, by applying an averaging approach, in which single seed replicate measurements are mathematically simulated, a very good NIT prediction model was achieved. This suggests that the single seed NIT spectra contain hardness information, but that a single seed hardness method with higher ...

  4. Genome-wide Association Analysis of Kernel Weight in Hard Winter Wheat

    Science.gov (United States)

    Wheat kernel weight is an important and heritable component of wheat grain yield and a key predictor of flour extraction. Genome-wide association analysis was conducted to identify genomic regions associated with kernel weight and kernel weight environmental response in 8 trials of 299 hard winter ...

  5. Diversity, distribution of Puroindoline genes and their effect on kernel hardness in a diverse panel of Chinese wheat germplasm.

    Science.gov (United States)

    Ma, Xiaoling; Sajjad, Muhammad; Wang, Jing; Yang, Wenlong; Sun, Jiazhu; Li, Xin; Zhang, Aimin; Liu, Dongcheng

    2017-09-20

Kernel hardness, which has a great influence on the end-use properties of common wheat, is mainly controlled by the Puroindoline genes Pina and Pinb. Using an EcoTILLING platform, we herein investigated the allelic variations of the Pina and Pinb genes and their association with the Single Kernel Characterization System (SKCS) hardness index in a diverse panel of wheat germplasm. The kernel hardness varied from 1.4 to 102.7, displaying a wide range of hardness index. In total, six Pina and nine Pinb alleles resulting in 15 genotypes were detected in 1787 accessions. The most common alleles are the wild type Pina-D1a (90.4%) and Pina-D1b (7.4%) for Pina, and Pinb-D1b (43.6%), Pinb-D1a (41.1%) and Pinb-D1p (12.8%) for Pinb. All the genotypes have hard-type SKCS kernel hardness (>60.0), except the wild-type combination of Pina and Pinb (Pina-D1a/Pinb-D1a). The most frequent genotype in Chinese and foreign cultivars was Pina-D1a/Pinb-D1b (46.3 and 39.0%, respectively), and in Chinese landraces it was Pina-D1a/Pinb-D1a (54.2%). The frequency of hard-type accessions increases from 35.5% in region IV, to 40.6 and 61.4% in regions III and II, and to 77.0% in region I, while that of soft-type accessions decreases accordingly with increasing latitude. Varieties released after 2000 in Beijing, Hebei, Shandong and Henan have a higher average kernel hardness index than those released before 2000. The kernel hardness in a diverse panel of Chinese wheat germplasm generally increased with latitude across China. The wild types Pina-D1a and Pinb-D1a, and one Pinb mutant (Pinb-D1b), are the most common of the six Pina and nine Pinb alleles, and a new double-null genotype (Pina-D1x/Pinb-D1ah) possessed a relatively high SKCS hardness index. More hard-type varieties were released in recent years, with different prevalence of Pin-D1 combinations in different regions. This work would benefit the understanding of the selection

  6. Genetic analysis of kernel texture (grain hardness) in a hard red spring wheat (Triticum aestivum L.) bi-parental population

    Science.gov (United States)

    Grain hardness is a very important trait in determining wheat market class and also influences milling and baking traits. At the grain Hardness (Ha) locus on chromosome 5DS, there are two primary mutations responsible for conveying a harder kernel texture among U.S. hard red spring wheats: (1) the P...

  7. Kernel Function Tuning for Single-Layer Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Vidnerová, Petra; Neruda, Roman

    -, accepted 28.11. 2017 (2018) ISSN 2278-0149 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords : single-layer neural networks * kernel methods * kernel function * optimisation Subject RIV: IN - Informatics, Computer Science http://www.ijmerr.com/

  8. Soft and hard classification by reproducing kernel Hilbert space methods.

    Science.gov (United States)

    Wahba, Grace

    2002-12-24

Reproducing kernel Hilbert space (RKHS) methods provide a unified context for solving a wide variety of statistical modelling and function estimation problems. We consider two such problems: We are given a training set {yi, ti, i = 1, …, n}, where yi is the response for the ith subject, and ti is a vector of attributes for this subject. The value of yi is a label that indicates which category it came from. For the first problem, we wish to build a model from the training set that assigns to each t in an attribute domain of interest an estimate of the probability pj(t) that a (future) subject with attribute vector t is in category j. The second problem is in some sense less ambitious; it is to build a model that assigns to each t a label, which classifies a future subject with that t into one of the categories or possibly "none of the above." The approach to the first of these two problems discussed here is a special case of what is known as penalized likelihood estimation. The approach to the second problem is known as the support vector machine. We also note some alternate but closely related approaches to the second problem. These approaches are all obtained as solutions to optimization problems in RKHS. Many other problems, in particular the solution of ill-posed inverse problems, can be obtained as solutions to optimization problems in RKHS and are mentioned in passing. We caution the reader that although a large literature exists in all of these topics, in this inaugural article we are selectively highlighting work of the author, former students, and other collaborators.
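
A rough scikit-learn sketch of the two problem types discussed above, on synthetic data: the "soft" problem is approximated here by logistic regression on an RBF feature map (a stand-in for penalized likelihood estimation in an RKHS), while the "hard" problem uses a support vector machine that returns labels directly. All parameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)

# "Soft" classification: estimate p_j(t), the probability of each category.
soft_model = make_pipeline(Nystroem(gamma=2.0, n_components=100, random_state=0),
                           LogisticRegression(C=1.0))
soft_model.fit(X, y)
probs = soft_model.predict_proba(X[:5])

# "Hard" classification: the support vector machine assigns a label directly.
hard_model = SVC(kernel="rbf", gamma=2.0, C=1.0).fit(X, y)
labels = hard_model.predict(X[:5])
```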

  9. Magnetic resonance imaging of single rice kernels during cooking

    NARCIS (Netherlands)

    Mohoric, A.; Vergeldt, F.J.; Gerkema, E.; Jager, de P.A.; Duynhoven, van J.P.M.; Dalen, van G.; As, van H.

    2004-01-01

    The RARE imaging method was used to monitor the cooking of single rice kernels in real time and with high spatial resolution in three dimensions. The imaging sequence is optimized for rapid acquisition of signals with short relaxation times using centered out RARE. Short scan time and high spatial

  10. Effect of kernel size and mill type on protein, milling yield, and baking quality of hard red spring wheat

    Science.gov (United States)

    Optimization of flour yield and quality is important in the milling industry. The objective of this study was to determine the effect of kernel size and mill type on flour yield and end-use quality. A hard red spring wheat composite sample was segregated, based on kernel size, into large, medium, ...

  11. A theoretical overview on single hard diffraction

    International Nuclear Information System (INIS)

    Wuesthoff, M.

    1996-01-01

    The concept of the Pomeron structure function and its application in Single Hard Diffraction at hadron colliders and in diffractive Deep Inelastic Scattering is critically reviewed. Some alternative approaches are briefly surveyed with a focus on QCD inspired models

  12. Hardness and softness reactivity kernels within the spin-polarized density-functional theory

    International Nuclear Information System (INIS)

    Chamorro, Eduardo; De Proft, Frank; Geerlings, Paul

    2005-01-01

Generalized hardness and softness reactivity kernels are defined within a spin-polarized density-functional theory (SP-DFT) conceptual framework. These quantities constitute the basis for the global, local (i.e., r-position dependent), and nonlocal (i.e., r- and r′-position dependent) indices devoted to the treatment of both charge-transfer and spin-polarization processes in such a reactivity framework. The exact relationships between these descriptors within a SP-DFT framework are derived and the implications for chemical reactivity in such context are outlined

  13. New durum wheat with soft kernel texture: milling performance and end-use quality analysis of the Hardness locus in Triticum turgidum ssp. durum

    Science.gov (United States)

Wheat kernel texture dictates U.S. wheat market class. Durum wheat has limited demand and culinary end-uses compared to bread wheat because of its extremely hard kernel texture, which precludes conventional milling. ‘Soft Svevo’, a new durum cultivar with soft kernel texture comparable to a soft white...

  14. New durum wheat with soft kernel texture: end-use quality analysis of the Hardness locus in Triticum turgidum ssp. durum

    Science.gov (United States)

    Wheat kernel texture dictates U.S. wheat market class. Durum wheat has limited demand and culinary end-uses compared to bread wheat because of its extremely hard kernel texture which precludes conventional milling. ‘Soft Svevo’, a new durum cultivar with soft kernel texture comparable to a soft whit...

  15. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clus- ..... able at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.
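
For reference, the sketch below implements plain batch kernel k-means (the baseline that a single-pass variant such as the one in this record would accelerate); cluster assignments are computed entirely from the Gram matrix. The data and kernel parameters are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_kmeans(K, n_clusters, n_iter=50, seed=0):
    """Batch kernel k-means on a precomputed Gram matrix K."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(n_clusters, size=n)
    for _ in range(n_iter):
        dist = np.zeros((n, n_clusters))
        for c in range(n_clusters):
            mask = labels == c
            if not mask.any():                 # keep empty clusters alive
                mask[rng.integers(n)] = True
            Kc = K[:, mask]
            # Squared distance to the cluster mean in feature space,
            # expressed purely through kernel evaluations.
            dist[:, c] = (np.diag(K)
                          - 2.0 * Kc.mean(axis=1)
                          + K[np.ix_(mask, mask)].mean())
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4.0])
labels = kernel_kmeans(rbf_kernel(X, gamma=0.5), n_clusters=2)
```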

  16. Effect of Protein Molecular Weight Distribution on Kernel and Baking Characteristics and Intra-varietal Variation in Hard Spring Wheats

    Science.gov (United States)

    Specific wheat protein fractions are known to have distinct associations with wheat quality traits. Research was conducted on 10 hard spring wheat cultivars grown at two North Dakota locations to identify protein fractions that affected wheat kernel characteristics and breadmaking quality. SDS ext...

  17. Thermal neutron scattering kernels for sapphire and silicon single crystals

    International Nuclear Information System (INIS)

    Cantargi, F.; Granada, J.R.; Mayer, R.E.

    2015-01-01

    Highlights: • Thermal cross section libraries for sapphire and silicon single crystals were generated. • Debye model was used to represent the vibrational frequency spectra to feed the NJOY code. • Sapphire total cross section was measured at Centro Atómico Bariloche. • Cross section libraries were validated with experimental data available. - Abstract: Sapphire and silicon are materials usually employed as filters in facilities with thermal neutron beams. Due to the lack of the corresponding thermal cross section libraries for those materials, necessary in calculations performed in order to optimize beams for specific applications, here we present the generation of new thermal neutron scattering kernels for those materials. The Debye model was used in both cases to represent the vibrational frequency spectra required to feed the NJOY nuclear data processing system in order to produce the corresponding libraries in ENDF and ACE format. These libraries were validated with available experimental data, some from the literature and others obtained at the pulsed neutron source at Centro Atómico Bariloche

  18. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

In unsupervised classification, the kernel k-means clustering method has been shown to perform better than the conventional k-means clustering method in ... 518501, India; Department of Computer Science and Engineering, Jawaharlal Nehru Technological University, Anantapur College of Engineering, Anantapur 515002, India ...

  19. Multiple kernel learning using single stage function approximation for binary classification problems

    Science.gov (United States)

    Shiju, S.; Sumitra, S.

    2017-12-01

In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, they are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and searching for that function from the global RKHS which can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, where the decision functions of kernel learning and input data are found separately using two different cost functions. This is because the single-stage representation helps knowledge transfer between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.
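
The sketch below is not the single-stage MKL algorithm of the paper, but it illustrates the underlying idea that summing Gram matrices corresponds to working in the direct sum of the associated RKHSs; the combined kernel is passed to an SVM as a precomputed kernel. In the paper's formulation the combination itself is learned jointly with the classifier, whereas here the weights are fixed by hand.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Fixed-weight combination of two base kernels (weights are arbitrary here).
K = 0.5 * rbf_kernel(X, gamma=0.1) + 0.5 * polynomial_kernel(X, degree=2)

clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
train_acc = clf.score(K, y)
```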

  20. A HARDWARE SUPPORTED OPERATING SYSTEM KERNEL FOR EMBEDDED HARD REAL-TIME APPLICATIONS

    NARCIS (Netherlands)

    COLNARIC, M; HALANG, WA; TOL, RM

    1994-01-01

    The concept of the kernel, i.e. the time critical part of a real-time operating system, and its dedicated co-processor, especially tailored for embedded applications, are presented. The co-processor acts as a system controller and operates in conjunction with one or more conventional processors in

  1. Comparison of tungsten carbide and stainless steel ball bearings for grinding single maize kernels in a reciprocating grinder

    Science.gov (United States)

Reciprocating grinders can grind single maize kernels by shaking the kernel in a vial with a ball bearing. This process results in a grind quality that is not satisfactory for many experiments. Tungsten carbide ball bearings are nearly twice as dense as steel, so we compared their grinding performa...

  2. An obstructive sleep apnea detection approach using kernel density classification based on single-lead electrocardiogram.

    Science.gov (United States)

    Chen, Lili; Zhang, Xi; Wang, Hui

    2015-05-01

Obstructive sleep apnea (OSA) is a common sleep disorder that often remains undiagnosed, leading to an increased risk of developing cardiovascular diseases. Polysomnography (PSG) is currently used as the gold standard for screening OSA. However, because it is time consuming, expensive and causes discomfort, alternative techniques based on a reduced set of physiological signals are proposed to solve this problem. This study proposes a convenient non-parametric kernel density-based approach for detection of OSA using single-lead electrocardiogram (ECG) recordings. Selected physiologically interpretable features are extracted from segmented RR intervals, which are obtained from ECG signals. These features are fed into the kernel density classifier to detect apnea events, and bandwidths for the density of each class (normal or apnea) are automatically chosen through an iterative bandwidth selection algorithm. To validate the proposed approach, RR intervals are extracted from ECG signals of 35 subjects obtained from a sleep apnea database ( http://physionet.org/cgi-bin/atm/ATM ). The results indicate that the kernel density classifier, with two features for apnea event detection, achieves a mean accuracy of 82.07 %, with mean sensitivity of 83.23 % and mean specificity of 80.24 %. Compared with other existing methods, the proposed kernel density approach achieves a comparably good performance but uses fewer features without significantly losing discriminant power, which indicates that it could be widely used for home-based screening or diagnosis of OSA.
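
A minimal sketch of the classification step described above: one kernel density estimate per class (normal or apnea) is fitted on the extracted RR-interval features, and a segment is labelled by the class with the higher log-density plus log-prior. The features, bandwidths and priors below are synthetic placeholders, not the paper's values, and the iterative bandwidth selection is omitted.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X_normal = rng.normal(loc=[0.9, 0.05], scale=0.10, size=(200, 2))  # toy RR features
X_apnea = rng.normal(loc=[1.1, 0.20], scale=0.15, size=(200, 2))

kde_normal = KernelDensity(bandwidth=0.05).fit(X_normal)
kde_apnea = KernelDensity(bandwidth=0.05).fit(X_apnea)
log_prior = np.log([0.5, 0.5])                                      # assumed equal priors

def classify(segments):
    """Label each feature vector by the class with the higher posterior score."""
    scores = np.column_stack([kde_normal.score_samples(segments),
                              kde_apnea.score_samples(segments)]) + log_prior
    return scores.argmax(axis=1)          # 0 = normal, 1 = apnea

print(classify(np.array([[0.92, 0.06], [1.15, 0.25]])))
```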

  3. Occurrence of 'super soft' wheat kernel texture in hexaploid and tetraploid wheats

    Science.gov (United States)

    Wheat kernel texture is a key trait that governs milling performance, flour starch damage, flour particle size, flour hydration properties, and baking quality. Kernel texture is commonly measured using the Perten Single Kernel Characterization System (SKCS). The SKCS returns texture values (Hardness...

  4. Single corn kernel wide-line NMR oil analysis for breeding purpose

    Energy Technology Data Exchange (ETDEWEB)

    Wilmers, M C.C.; Rettori, C; Vargas, H; Barberis, G E [Universidade Estadual de Campinas (Brazil). Inst. de Fisica; da Silva, W J [Universidade Estadual de Campinas (Brazil). Inst. de Biologia

    1978-12-01

The wide-line NMR technique was used to determine the oil content in single corn seeds. Using distinct radio frequency (RF) power, a systematic study was done on kernels with about 10% moisture, and also on artificially dried seeds with approximately 5% moisture. For non-dried seeds, the NMR spectra clearly showed the presence of three resonances with different RF saturation factors. For dried seeds, the oil concentration determined by NMR was highly correlated (r = 0.997) with that determined by a gravimetric method. The highest discrepancy between the two methods was about 1.3%. When relative measurements are required, as in the case of single kernels for a recurrent selection program, the precision for an individual selected kernel will be about 2.5%. Applying this technique, a first cycle of recurrent selection using S1 lines for low and high oil content was performed in an open-pollinated variety. Gain from selection was 12.0 and 14.1% in the populations for high and low oil contents, respectively.

  5. Hard single diffractive jet production at D0

    International Nuclear Information System (INIS)

    Abachi, S.; Abbott, B.; Abolins, M.

    1996-08-01

Preliminary results from the D0 experiment on jet production with forward rapidity gaps in p anti-p collisions are presented. A class of dijet events with a forward rapidity gap is observed at center-of-mass energies √s = 1800 GeV and 630 GeV. The number of events with rapidity gaps at both center-of-mass energies is significantly greater than the expectation from multiplicity fluctuations and is consistent with a hard single diffractive process. A small class of events with two forward gaps and central dijets is also observed at 1800 GeV. This topology is consistent with hard double pomeron exchange

  6. Vis-NIR hyperspectral imaging and multivariate analysis for prediction of the moisture content and hardness of Pistachio kernels roasted in different conditions

    Directory of Open Access Journals (Sweden)

    T Mohammadi Moghaddam

    2015-09-01

Full Text Available Introduction: The pistachio nut is one of the most delicious and nutritious nuts in the world, and it is used as a salted and roasted product or as an ingredient in snacks, ice cream, desserts, etc. (Maghsudi, 2010; Kashaninejad et al., 2006). Roasting is one of the most important food processes, providing useful attributes to the product. One of the objectives of nut roasting is to alter and significantly enhance the flavor, texture, color and appearance of the product (Ozdemir, 2001). In recent years, spectral imaging techniques (i.e. hyperspectral and multispectral imaging) have emerged as powerful tools for safety and quality inspection of various agricultural commodities (Gowen et al., 2007). The objectives of this study were to apply reflectance hyperspectral imaging for non-destructive determination of moisture content and hardness of pistachio kernels roasted under different conditions. Materials and methods: Dried O’hadi pistachio nuts were supplied from a local market in Mashhad. Pistachio nuts were soaked in 5 L of 20% salt solution for 20 min (Goktas Seyhan, 2003). For the roasting process, three temperatures (90, 120 and 150°C), three times (20, 35 and 50 min) and three air velocities (0.5, 1.5 and 2.5 m s-1) were applied. The moisture content of pistachio kernels was measured in triplicate using oven drying (3 g samples at 105°C for 12 hours). A uniaxial compression test with a 35 mm diameter plastic cylinder was performed on the pistachio kernels, which were mounted on a platform. Samples were compressed to a depth of 2 mm at a speed of 30 mm min-1. A hyperspectral imaging system in the Vis-NIR range (400-1000 nm) was employed. The spectral pre-processing techniques first derivative, second derivative, median filter, Savitzky-Golay, wavelet, multiplicative scatter correction (MSC) and standard normal variate transformation (SNV) were used. To build models with the PLSR and ANN methods, ParLeS software and Matlab R2009a were used, respectively. The coefficient
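
Of the pre-processing steps listed above, the standard normal variate (SNV) transformation is easy to show compactly; the sketch below applies it to a matrix of spectra (rows = samples, columns = wavelengths). The spectra here are synthetic stand-ins, not the study's measurements.

```python
import numpy as np

def snv(spectra):
    """Center and scale each spectrum by its own mean and standard deviation."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

spectra = np.random.rand(10, 601)        # 10 samples, 400-1000 nm at 1 nm steps (toy)
spectra_snv = snv(spectra)
```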

  7. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels.

    Science.gov (United States)

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.

  8. Reply to the 'Comment on "Revisiting the definition of local hardness and hardness kernel"' by C. Morell, F. Guégan, W. Lamine, and H. Chermette, Phys. Chem. Chem. Phys., 2018, 20, DOI.

    Science.gov (United States)

    Franco-Pérez, Marco; Polanco-Ramírez, Carlos A; Gázquez, José L; Ayers, Paul W

    2018-03-28

This reply complements the comment of Guégan et al. about our recent work on the revision of the local hardness and the hardness kernel concepts. Guégan et al. analyze our work using a Taylor series expansion of the energy as a functional of the electron density, to show that our procedure opens a new way to define local descriptors. In this contribution we show that the strategy we followed for the local hardness and the hardness kernel is even more general, and that it can be used to derive from a global response function its corresponding local and non-local counterparts by: (1) requiring that the integral over one of the two variables that characterizes the non-local function leads to the local function, and that the integral over the local function leads to the global response index, and (2) assuming that the global and local functions are related through the electronic density, by making use of the chain rule for functional derivatives.

  9. Single-kernel analysis of fumonisins and other fungal metabolites in maize from South African subsistence farmers.

    Science.gov (United States)

    Mogensen, J M; Sørensen, S M; Sulyok, M; van der Westhuizen, L; Shephard, G S; Frisvad, J C; Thrane, U; Krska, R; Nielsen, K F

    2011-12-01

    Fumonisins are important Fusarium mycotoxins mainly found in maize and derived products. This study analysed maize from five subsistence farmers in the former Transkei region of South Africa. Farmers had sorted kernels into good and mouldy quality. A total of 400 kernels from 10 batches were analysed; of these 100 were visually characterised as uninfected and 300 as infected. Of the 400 kernels, 15% were contaminated with 1.84-1428 mg kg(-1) fumonisins, and 4% (n=15) had a fumonisin content above 100 mg kg(-1). None of the visually uninfected maize had detectable amounts of fumonisins. The total fumonisin concentration was 0.28-1.1 mg kg(-1) for good-quality batches and 0.03-6.2 mg kg(-1) for mouldy-quality batches. The high fumonisin content in the batches was apparently caused by a small number (4%) of highly contaminated kernels, and removal of these reduced the average fumonisin content by 71%. Of the 400 kernels, 80 were screened for 186 microbial metabolites by liquid chromatography-tandem mass spectrometry, detecting 17 other fungal metabolites, including fusaric acid, equisetin, fusaproliferin, beauvericin, cyclosporins, agroclavine, chanoclavine, rugulosin and emodin. Fusaric acid in samples without fumonisins indicated the possibility of using non-toxinogenic Fusaria as biocontrol agents to reduce fumonisin exposure, as done for Aspergillus flavus. This is the first report of mycotoxin profiling in single naturally infected maize kernels. © 2011 Taylor & Francis

  10. Emotion Recognition from Single-Trial EEG Based on Kernel Fisher’s Emotion Pattern and Imbalanced Quasiconformal Kernel Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2014-07-01

Full Text Available Electroencephalogram-based emotion recognition (EEG-ER) has received increasing attention in the fields of health care, affective computing, and brain-computer interface (BCI). However, satisfactory ER performance within a bi-dimensional and non-discrete emotional space using single-trial EEG data remains a challenging task. To address this issue, we propose a three-layer scheme for single-trial EEG-ER. In the first layer, a set of spectral powers of different EEG frequency bands are extracted from multi-channel single-trial EEG signals. In the second layer, the kernel Fisher’s discriminant analysis method is applied to further extract features with better discrimination ability from the EEG spectral powers. The feature vector produced by layer 2 is called a kernel Fisher’s emotion pattern (KFEP), and is sent into layer 3 for further classification, where the proposed imbalanced quasiconformal kernel support vector machine (IQK-SVM) serves as the emotion classifier. The outputs of the three-layer EEG-ER system include labels of emotional valence and arousal. Furthermore, to collect effective training and testing datasets for the current EEG-ER system, we also use an emotion-induction paradigm in which a set of pictures selected from the International Affective Picture System (IAPS) are employed as emotion induction stimuli. The performance of the proposed three-layer solution is compared with that of other EEG spectral power-based features and emotion classifiers. Results on 10 healthy participants indicate that the proposed KFEP feature performs better than other spectral power features, and IQK-SVM outperforms traditional SVM in terms of EEG-ER accuracy. Our findings also show that the proposed EEG-ER scheme achieves the highest classification accuracies of valence (82.68%) and arousal (84.79%) among all testing methods.

  11. Kernel PLS Estimation of Single-trial Event-related Potentials

    Science.gov (United States)

    Rosipal, Roman; Trejo, Leonard J.

    2004-01-01

Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative to the estimation of single-trial ERPs and improvement of ERP averages.
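
scikit-learn does not ship a kernel PLS estimator, so the sketch below uses an approximate RBF feature map (Nystroem) followed by ordinary linear PLS regression as a rough stand-in for the nonlinear smoothing role KPLS plays in this abstract, applied to a synthetic noisy "single-trial ERP". The kernel width, component count and toy signal are all assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline

t = np.linspace(0.0, 1.0, 500)[:, None]               # time axis as the predictor
erp = np.exp(-((t.ravel() - 0.4) / 0.05) ** 2)         # toy ERP component
trial = erp + 0.5 * np.random.default_rng(0).normal(size=t.shape[0])

# RBF feature map + linear PLS as a stand-in for kernel PLS smoothing.
model = make_pipeline(Nystroem(gamma=50.0, n_components=50, random_state=0),
                      PLSRegression(n_components=5))
model.fit(t, trial)
smoothed = model.predict(t).ravel()                    # estimate of the single-trial ERP
```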

  12. SCAP-82, Single Scattering, Albedo Scattering, Point-Kernel Analysis in Complex Geometry

    International Nuclear Information System (INIS)

    Disney, R.K.; Vogtman, S.E.

    1987-01-01

1 - Description of problem or function: SCAP solves for radiation transport in complex geometries using the single or albedo scatter point kernel method. The program is designed to calculate the neutron or gamma ray radiation level at detector points located within or outside a complex radiation scatter source geometry or a user specified discrete scattering volume. Geometry is describable by zones bounded by intersecting quadratic surfaces within an arbitrary maximum number of boundary surfaces per zone. Anisotropic point sources are describable as pointwise energy dependent distributions of polar angles on a meridian; isotropic point sources may also be specified. The attenuation function for gamma rays is an exponential function on the primary source leg and the scatter leg with a build-up factor approximation to account for multiple scatter on the scatter leg. The neutron attenuation function is an exponential function using neutron removal cross sections on the primary source leg and scatter leg. Line or volumetric sources can be represented as a distribution of isotropic point sources, with uncollided line-of-sight attenuation and buildup calculated between each source point and the detector point. 2 - Method of solution: A point kernel method using an anisotropic or isotropic point source representation is used; line-of-sight material attenuation and inverse square spatial attenuation between the source point and scatter points and the scatter points and detector point is employed. A direct summation of individual point source results is obtained. 3 - Restrictions on the complexity of the problem: The SCAP program is written in complete flexible dimensioning so that no restrictions are imposed on the number of energy groups or geometric zones. The geometric zone description is restricted to zones defined by boundary surfaces defined by the general quadratic equation or one of its degenerate forms. The only restriction in the program is that the total
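
A minimal sketch of the point-kernel summation described above: uncollided line-of-sight exponential attenuation, an inverse-square factor, and a simple buildup correction, summed over a set of isotropic point sources. The attenuation coefficient, the linear buildup model and all numbers are hypothetical toy values, and this omits SCAP's zone-based geometry and scatter-leg treatment entirely.

```python
import numpy as np

mu = 0.06          # total attenuation coefficient of the medium, 1/cm (toy value)
a_buildup = 1.0    # coefficient of a hypothetical linear buildup model B = 1 + a*mu*r

def point_kernel_flux(source_points, source_strengths, detector):
    """Sum S * B(mu*r) * exp(-mu*r) / (4*pi*r^2) over all isotropic point sources."""
    r = np.linalg.norm(source_points - detector, axis=1)
    mfp = mu * r                                  # optical thickness in mean free paths
    buildup = 1.0 + a_buildup * mfp
    return np.sum(source_strengths * buildup * np.exp(-mfp) / (4.0 * np.pi * r ** 2))

sources = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])   # cm
strengths = np.array([1.0e9, 5.0e8])                       # photons/s
print(point_kernel_flux(sources, strengths, np.array([50.0, 0.0, 0.0])))
```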

  13. The effect of STDP temporal kernel structure on the learning dynamics of single excitatory and inhibitory synapses.

    Directory of Open Access Journals (Sweden)

    Yotam Luz

Full Text Available Spike-Timing Dependent Plasticity (STDP) is characterized by a wide range of temporal kernels. However, much of the theoretical work has focused on a specific kernel - the "temporally asymmetric Hebbian" learning rules. Previous studies linked excitatory STDP to positive feedback that can account for the emergence of response selectivity. Inhibitory plasticity was associated with negative feedback that can balance the excitatory and inhibitory inputs. Here we study the possible computational role of the temporal structure of the STDP. We represent the STDP as a superposition of two processes: potentiation and depression. This allows us to model a wide range of experimentally observed STDP kernels, from Hebbian to anti-Hebbian, by varying a single parameter. We investigate STDP dynamics of a single excitatory or inhibitory synapse in a purely feed-forward architecture. We derive a mean-field Fokker-Planck dynamics for the synaptic weight and analyze the effect of STDP structure on the fixed points of the mean-field dynamics. We find a phase transition along the Hebbian to anti-Hebbian parameter from a phase that is characterized by a unimodal distribution of the synaptic weight, in which the STDP dynamics is governed by negative feedback, to a phase with positive feedback characterized by a bimodal distribution. The critical point of this transition depends on general properties of the STDP dynamics and not on the fine details. Namely, the dynamics is affected by the pre-post correlations only via a single number that quantifies its overlap with the STDP kernel. We find that by manipulating the STDP temporal kernel, negative feedback can be induced in excitatory synapses and positive feedback in inhibitory ones. Moreover, there is an exact symmetry between inhibitory and excitatory plasticity, i.e., for every STDP rule of an inhibitory synapse there exists an STDP rule for an excitatory synapse, such that their dynamics is identical.
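
A sketch of the parametrization idea described above: the STDP window is written as a superposition of a potentiation process and a depression process, and a single mixing parameter sweeps the kernel from a Hebbian-like to an anti-Hebbian-like shape. The exact functional form, time constants and amplitudes below are illustrative, not those of the paper.

```python
import numpy as np

def stdp_kernel(dt, alpha, tau=20.0):
    """Weight change as a function of dt = t_post - t_pre (ms).

    alpha = 1 gives a classic temporally asymmetric Hebbian kernel,
    alpha = 0 its anti-Hebbian mirror image, alpha = 0.5 cancels exactly.
    """
    pot = (alpha * np.where(dt >= 0, np.exp(-dt / tau), 0.0)
           + (1.0 - alpha) * np.where(dt < 0, np.exp(dt / tau), 0.0))
    dep = ((1.0 - alpha) * np.where(dt >= 0, np.exp(-dt / tau), 0.0)
           + alpha * np.where(dt < 0, np.exp(dt / tau), 0.0))
    return pot - dep

dt = np.linspace(-100.0, 100.0, 401)
hebbian = stdp_kernel(dt, alpha=1.0)
anti_hebbian = stdp_kernel(dt, alpha=0.0)
```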

  14. Evaluation of the Single-precision Floatingpoint Vector Add Kernel Using the Intel FPGA SDK for OpenCL

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Zheming [Argonne National Lab. (ANL), Argonne, IL (United States); Yoshii, Kazutomo [Argonne National Lab. (ANL), Argonne, IL (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Cappello, Franck [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-04-20

    Open Computing Language (OpenCL) is a high-level language that enables software programmers to explore Field Programmable Gate Arrays (FPGAs) for application acceleration. The Intel FPGA software development kit (SDK) for OpenCL allows a user to specify applications at a high level and explore the performance of low-level hardware acceleration. In this report, we present the FPGA performance and power consumption results of the single-precision floating-point vector add OpenCL kernel using the Intel FPGA SDK for OpenCL on the Nallatech 385A FPGA board. The board features an Arria 10 FPGA. We evaluate the FPGA implementations using the compute unit duplication and kernel vectorization optimization techniques. On the Nallatech 385A FPGA board, the maximum compute kernel bandwidth we achieve is 25.8 GB/s, approximately 76% of the peak memory bandwidth. The power consumption of the FPGA device when running the kernels ranges from 29W to 42W.
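
The report targets the Intel FPGA SDK for OpenCL, but the same single-precision vector-add kernel can be sketched portably from Python with PyOpenCL (assuming pyopencl and some OpenCL platform are installed); the kernel source itself is standard OpenCL C and is unrelated to the board-specific optimizations discussed above.

```python
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(50000).astype(np.float32)
b = np.random.rand(50000).astype(np.float32)

mf = cl.mem_flags
a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_g = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# Single-precision floating-point vector add kernel in OpenCL C.
prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a, __global const float *b, __global float *c) {
    int gid = get_global_id(0);
    c[gid] = a[gid] + b[gid];
}
""").build()

prg.vadd(queue, a.shape, None, a_g, b_g, c_g)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, c_g)
```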

  15. Exploring abiotic stress on asynchronous protein metabolism in single kernels of wheat studied by NMR spectroscopy and chemometrics

    DEFF Research Database (Denmark)

    Winning, H.; Viereck, N.; Wollenweber, B.

    2009-01-01

... was to examine the implications of different drought treatments on the protein fractions in grains of winter wheat using H-1 nuclear magnetic resonance spectroscopy followed by chemometric analysis. Triticum aestivum L. cv. Vinjett was studied in a semi-field experiment and subjected to drought episodes either ... at terminal spikelet, during grain-filling or at both stages. Principal component trajectories of the total protein content and the protein fractions of flour as well as the H-1 NMR spectra of single wheat kernels, wheat flour, and wheat methanol extracts were analysed to elucidate the metabolic development ... at the vegetative growth stage had little effect on the parameters investigated. For the first time, H-1 HR-MAS NMR spectra of grains taken during grain-filling were analysed by an advanced multiway model. In addition to the results from the chemical protein analysis and the H-1 HR-MAS NMR spectra of single kernels ...
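
A minimal sketch of the "principal component trajectory" idea mentioned above: spectra collected at successive sampling dates are projected onto a common PCA model and the mean scores are followed over time. The spectra, dates and drift below are synthetic placeholders, and the advanced multiway modelling used in the study is not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_dates, n_reps, n_vars = 6, 5, 200
spectra = rng.normal(size=(n_dates * n_reps, n_vars))
spectra += np.repeat(np.linspace(0.0, 1.0, n_dates), n_reps)[:, None]  # toy drift over time

scores = PCA(n_components=2).fit_transform(spectra)
trajectory = scores.reshape(n_dates, n_reps, 2).mean(axis=1)   # mean score per sampling date
```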

  16. Dimensional feature weighting utilizing multiple kernel learning for single-channel talker location discrimination using the acoustic transfer function.

    Science.gov (United States)

    Takashima, Ryoichi; Takiguchi, Tetsuya; Ariki, Yasuo

    2013-02-01

    This paper presents a method for discriminating the location of the sound source (talker) using only a single microphone. In a previous work, the single-channel approach for discriminating the location of the sound source was discussed, where the acoustic transfer function from a user's position is estimated by using a hidden Markov model of clean speech in the cepstral domain. In this paper, each cepstral dimension of the acoustic transfer function is newly weighted, in order to obtain the cepstral dimensions having information that is useful for classifying the user's position. Then, this paper proposes a feature-weighting method for the cepstral parameter using multiple kernel learning, defining the base kernels for each cepstral dimension of the acoustic transfer function. The user's position is trained and classified by support vector machine. The effectiveness of this method has been confirmed by sound source (talker) localization experiments performed in different room environments.

  17. Influence of wheat kernel physical properties on the pulverizing process.

    Science.gov (United States)

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for the analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ kg(-1) to 159 kJ kg(-1). On the basis of the data obtained, many significant correlations were found between kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined on the basis of the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.
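
A sketch of the style of analysis described above: Pearson correlations between kernel properties and grinding-energy indices, followed by a multiple linear regression for average particle size. The variable names, units and data below are hypothetical placeholders, not the study's measurements.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "hardness_index": rng.normal(60, 10, 19),     # SKCS hardness index (toy)
    "vitreousness": rng.normal(70, 15, 19),
    "ash_content": rng.normal(1.8, 0.2, 19),
    "grinding_energy": rng.normal(140, 12, 19),   # kJ/kg (toy)
    "mean_particle_size": rng.normal(0.45, 0.05, 19),
})

r, p = pearsonr(df["hardness_index"], df["grinding_energy"])

model = LinearRegression().fit(df[["hardness_index", "vitreousness", "ash_content"]],
                               df["mean_particle_size"])
```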

  18. A Heterogeneous Multi-core Architecture with a Hardware Kernel for Control Systems

    DEFF Research Database (Denmark)

    Li, Gang; Guan, Wei; Sierszecki, Krzysztof

    2012-01-01

    Rapid industrialisation has resulted in a demand for improved embedded control systems with features such as predictability, high processing performance and low power consumption. Software kernel implementation on a single processor is becoming more difficult to satisfy those constraints....... This paper presents a multi-core architecture incorporating a hardware kernel on FPGAs, intended for high performance applications in control engineering domain. First, the hardware kernel is investigated on the basis of a component-based real-time kernel HARTEX (Hard Real-Time Executive for Control Systems...

  19. A novel adaptive kernel method with kernel centers determined by a support vector regression approach

    NARCIS (Netherlands)

    Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2012-01-01

    The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an

  20. Hard-hard coupling assisted anomalous magnetoresistance effect in amine-ended single-molecule magnetic junction

    Science.gov (United States)

    Tang, Y.-H.; Lin, C.-J.; Chiang, K.-R.

    2017-06-01

We propose a single-molecule magnetic junction (SMMJ), composed of a dissociated amine-ended benzene sandwiched between two Co tip-like nanowires. To better simulate the break-junction technique for real SMMJs, a first-principles calculation incorporating the hard-hard coupling between an amine linker and a Co tip atom is carried out for SMMJs under mechanical strain and under an external bias. We predict an anomalous magnetoresistance (MR) effect, including strain-induced sign reversal and bias-induced enhancement of the MR value, which is in sharp contrast to the normal MR effect in conventional magnetic tunnel junctions. The underlying mechanism is the interplay between four spin-polarized currents in parallel and anti-parallel magnetic configurations, originating from the pronounced spin-up transmission feature in the parallel case and spiky transmission peaks in the other three spin-polarized channels. These intriguing findings may open a new arena in which magnetotransport and hard-hard coupling are closely coupled in SMMJs and can be dually controlled either via mechanical strain or by an external bias.

  1. Uncooled Radiation Hard SiC Schottky VUV Detectors Capable of Single Photon Sensing, Phase I

    Data.gov (United States)

National Aeronautics and Space Administration — This project seeks to design, fabricate, characterize and commercialize very large area, uncooled and radiation-hard 4H-SiC VUV detectors capable of near single...

  2. 50 CFR Figure 13 to Part 223 - Single Grid Hard TED Escape Opening

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false Single Grid Hard TED Escape Opening 13 Figure 13 to Part 223 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE MARINE MAMMALS THREATENED MARINE AND ANADROMOUS SPECIES Pt. 223, Fig. 13 Figure 13 to Part 223—Singl...

  3. A scalable single-chip multi-processor architecture with on-chip RTOS kernel

    NARCIS (Netherlands)

    Theelen, B.D.; Verschueren, A.C.; Reyes Suarez, V.V.; Stevens, M.P.J.; Nunez, A.

    2003-01-01

    Now that system-on-chip technology is emerging, single-chip multi-processors are becoming feasible. A key problem of designing such systems is the complexity of their on-chip interconnects and memory architecture. It is furthermore unclear at what level software should be integrated. An example of a

  4. End-use quality of soft kernel durum wheat

    Science.gov (United States)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  5. Hard photon emission from high energy electrons and positrons in single crystals

    International Nuclear Information System (INIS)

    Bajer, V.N.; Katkov, V.M.; Strakhovenko, V.M.

    1991-01-01

Radiation from electrons and positrons in single crystals in the coherent bremsstrahlung (CBS) region has been considered for the case when CBS has the hardest spectrum. Under this condition a particle moves near a crystalline plane (in an fcc(d) crystal, for the (001) axis this is the (110) plane) and the influence of the continuous plane potential should be taken into account. This potential gives an additional contribution to the soft part of the spectrum and affects hard photon emission. Observation of this phenomenon at high energy is discussed. 14 refs.; 5 figs.; 1 tab

  6. Composite population kernels in ytterbium-buffer collisions studied by means of laser-saturated absorption

    International Nuclear Information System (INIS)

    Zhu, X.

    1986-01-01

We present a systematic study of composite population kernels for 174Yb collisions with He, Ar, and Xe buffer gases, using laser-saturation spectroscopy. 174Yb is chosen as the active species because of the simple structure of its 1S0-3P1 resonance transition (λ = 556 nm). Elastic collisions are modeled by means of a composite collision kernel, an expression of which is explicitly derived based on arguments of a hard-sphere potential and two-category collisions. The corresponding coupled population-rate equations are solved by iteration to obtain an expression for the saturated-absorption line shape. This expression is fit to the data to obtain information about the composite kernel, along with reasonable values for other parameters. The results confirm that a composite kernel is more general and realistic than a single-component kernel, and the generality in principle and the practical necessity of the former are discussed

  7. Hardness of high-pressure high-temperature treated single-walled carbon nanotubes

    International Nuclear Information System (INIS)

    Kawasaki, S.; Nojima, Y.; Yokomae, T.; Okino, F.; Touhara, H.

    2007-01-01

We have performed high-pressure high-temperature (HPHT) treatments of high-quality single-walled carbon nanotubes (SWCNTs) over a wide pressure-temperature range up to 13 GPa-873 K and have investigated the hardness of the HPHT-treated SWCNTs using a nanoindentation technique. It was found that the hardness of the SWCNTs treated at pressures greater than 11 GPa and at temperatures higher than 773 K is about 10 times greater than that of the SWCNTs treated at low temperature. It was also found that the hardness change of the SWCNTs is related to the structural change induced by the HPHT treatments, as observed by synchrotron X-ray diffraction measurements

  8. Single-kernel analysis of fumonisins and other fungal metabolites in maize from South African subsistence farmers

    DEFF Research Database (Denmark)

    Mogensen, Jesper Mølgaard; Sørensen, S.M.; Sulyok, M.

    2011-01-01

    Fumonisins are important Fusarium mycotoxins mainly found in maize and derived products. This study analysed maize from five subsistence farmers in the former Transkei region of South Africa. Farmers had sorted kernels into good and mouldy quality. A total of 400 kernels from 10 batches were...... analysed; of these 100 were visually characterised as uninfected and 300 as infected. Of the 400 kernels, 15% were contaminated with 1.84-1428 mg kg(-1) fumonisins, and 4% (n = 15) had a fumonisin content above 100 mg kg(-1). None of the visually uninfected maize had detectable amounts of fumonisins....... The total fumonisin concentration was 0.28-1.1 mg kg(-1) for good-quality batches and 0.03-6.2 mg kg(-1) for mouldy-quality batches. The high fumonisin content in the batches was apparently caused by a small number (4%) of highly contaminated kernels, and removal of these reduced the average fumonisin...

  9. Vickers Hardness of Diamond and cBN Single Crystals: AFM Approach

    Directory of Open Access Journals (Sweden)

    Sergey Dub

    2017-12-01

Full Text Available Atomic force microscopy in different operation modes (topography, derivative topography, and phase contrast) was used to obtain 3D images of Vickers indents on the surface of diamond and cBN single crystals with high spatial resolution. Confocal Raman spectroscopy and Kelvin probe force microscopy were used to study the structure of the material in the indents. It was found that Vickers indents in diamond have no sharp and clear borders. However, the phase contrast operation mode of the AFM reveals a new viscoelastic phase in the indent in diamond. Raman spectroscopy and Kelvin probe force microscopy revealed that the new phase in the indent is disordered graphite, which was formed due to the pressure-induced phase transformation in the diamond during the hardness test. The projected contact area of the graphite layer in the indent allows us to measure the Vickers hardness of type-Ib synthetic diamond. In contrast to diamond, very high plasticity was observed for 0.5 N load indents on the (001) cBN single crystal face. Radial and ring cracks were absent, the shape of the indents was close to a square, and there were linear details in the indents, which looked like slip lines. The Vickers hardness of the (111) synthetic diamond and the (111) and (001) cBN single crystals was determined using the AFM images, taking into account the elastic deformation of the diamond Vickers indenter during the tests.

  10. Modeling of the endosperm crush response profile of hard red spring wheat using a single kernel characterization system

    Science.gov (United States)

    When a wheat endosperm is crushed the force profile shows viscoelastic response and the modulus of elasticity is an important parameter that might have substantial influence on wheat milling. An experiment was performed to model endosperm crush response profile (ECRP) and to determine the modulus o...

  11. Classification of maize kernels using NIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Williams, Paul; Kucheryavskiy, Sergey V.

    2016-01-01

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual...... and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale....
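
The snippet does not spell out how the object-wise decision was made; one common way to turn pixel-wise predictions into an object-wise label is a majority vote over the pixels of each kernel, sketched below with made-up pixel predictions. This may or may not match the paper's exact object-wise procedure.

```python
import numpy as np

def object_wise_label(pixel_labels):
    """Assign the most frequent pixel class to the whole kernel."""
    values, counts = np.unique(pixel_labels, return_counts=True)
    return values[counts.argmax()]

pixel_labels = np.array([0, 0, 1, 0, 2, 0, 0, 1])   # 0=hard, 1=medium, 2=soft (toy)
print(object_wise_label(pixel_labels))
```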

  12. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    Science.gov (United States)

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868

  13. Radiation hardness of a single crystal CVD diamond detector for MeV energy protons

    Energy Technology Data Exchange (ETDEWEB)

    Sato, Yuki, E-mail: y.sato@riken.jp [The Institute of Physical and Chemical Research (RIKEN), 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Shimaoka, Takehiro; Kaneko, Junichi H. [Graduate School of Engineering, Hokkaido University, N13, W8, Sapporo 060-8628 (Japan); Murakami, Hiroyuki [The Institute of Physical and Chemical Research (RIKEN), 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Isobe, Mitsutaka; Osakabe, Masaki [National Institute for Fusion Science, 322-6, Oroshi-cho Toki-city, Gifu 509-5292 (Japan); Tsubota, Masakatsu [Graduate School of Engineering, Hokkaido University, N13, W8, Sapporo 060-8628 (Japan); Ochiai, Kentaro [Fusion Research and Development Directorate, Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki 319-1195 (Japan); Chayahara, Akiyoshi; Umezawa, Hitoshi; Shikata, Shinichi [National Institute of Advanced Industrial Science and Technology (AIST), 1-8-31 Midorigaoka, Ikeda, Osaka 563-8577 (Japan)

    2015-06-01

    We have fabricated a particle detector using single crystal diamond grown by chemical vapor deposition. The irradiation dose dependence of the output pulse height from the diamond detector was measured using 3 MeV protons. The pulse height of the output signals from the diamond detector decreases as the amount of irradiation increases at count rates of 1.6–8.9 kcps because of polarization effects inside the diamond crystal. The polarization effect can be cancelled by applying a reverse bias voltage, which restores the pulse heights. Additionally, the radiation hardness performance for MeV energy protons was compared with that of a silicon surface barrier detector.

  14. Single-Kernel FT-NIR Spectroscopy for Detecting Supersweet Corn (Zea mays L. Saccharata Sturt) Seed Viability with Multivariate Data Analysis

    Directory of Open Access Journals (Sweden)

    Guangjun Qiu

    2018-03-01

Full Text Available The viability and vigor of crop seeds are crucial indicators for evaluating seed quality, and high-quality seeds can increase agricultural yield. The conventional methods for assessing seed viability are time consuming, destructive, and labor intensive. Therefore, a rapid and nondestructive technique for testing seed viability has great potential benefits for agriculture. In this study, single-kernel Fourier transform near-infrared (FT-NIR) spectroscopy with a wavelength range of 1000–2500 nm was used to distinguish viable and nonviable supersweet corn seeds. Various preprocessing algorithms coupled with partial least squares discriminant analysis (PLS-DA) were implemented to test the performance of classification models. The FT-NIR spectroscopy technique successfully differentiated viable seeds from seeds that were nonviable due to overheating or artificial aging. Correct classification rates for both heat-damaged kernels and artificially aged kernels reached 98.0%. The comprehensive model could also attain an accuracy of 98.7% when combining heat-damaged samples and artificially aged samples into one category. Overall, the FT-NIR technique with multivariate data analysis methods showed great potential capacity in rapidly and nondestructively detecting seed viability in supersweet corn.

  15. Single-Kernel FT-NIR Spectroscopy for Detecting Supersweet Corn (Zea mays L. Saccharata Sturt) Seed Viability with Multivariate Data Analysis.

    Science.gov (United States)

    Qiu, Guangjun; Lü, Enli; Lu, Huazhong; Xu, Sai; Zeng, Fanguo; Shui, Qin

    2018-03-28

    The viability and vigor of crop seeds are crucial indicators for evaluating seed quality, and high-quality seeds can increase agricultural yield. The conventional methods for assessing seed viability are time consuming, destructive, and labor intensive. Therefore, a rapid and nondestructive technique for testing seed viability has great potential benefits for agriculture. In this study, single-kernel Fourier transform near-infrared (FT-NIR) spectroscopy with a wavelength range of 1000-2500 nm was used to distinguish viable and nonviable supersweet corn seeds. Various preprocessing algorithms coupled with partial least squares discriminant analysis (PLS-DA) were implemented to test the performance of classification models. The FT-NIR spectroscopy technique successfully differentiated viable seeds from seeds that were nonviable due to overheating or artificial aging. Correct classification rates for both heat-damaged kernels and artificially aged kernels reached 98.0%. The comprehensive model could also attain an accuracy of 98.7% when combining heat-damaged samples and artificially aged samples into one category. Overall, the FT-NIR technique with multivariate data analysis methods showed great potential capacity in rapidly and nondestructively detecting seed viability in supersweet corn.
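
    As an illustration of the classification pipeline described above, the sketch below applies a standard normal variate (SNV) pretreatment and a PLS-DA model (PLS regression on 0/1 labels with a 0.5 decision threshold) to simulated spectra; the data, component count, and threshold are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

# Hypothetical single-kernel spectra: rows = kernels, columns = wavelength channels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))      # stand-in for 1000-2500 nm absorbance spectra
y = rng.integers(0, 2, size=200)     # 1 = viable, 0 = nonviable (simulated labels)

X_train, X_test, y_train, y_test = train_test_split(snv(X), y, random_state=0)

plsda = PLSRegression(n_components=10).fit(X_train, y_train)
y_pred = (plsda.predict(X_test).ravel() > 0.5).astype(int)   # threshold the PLS score
print("correct classification rate:", (y_pred == y_test).mean())
```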

  16. A Heterogeneous Multi-core Architecture with a Hardware Kernel for Control Systems

    DEFF Research Database (Denmark)

    Li, Gang; Guan, Wei; Sierszecki, Krzysztof

    2012-01-01

    Rapid industrialisation has resulted in a demand for improved embedded control systems with features such as predictability, high processing performance and low power consumption. Software kernel implementation on a single processor is becoming more difficult to satisfy those constraints. ... Second, a heterogeneous multi-core architecture is investigated, focusing on its performance in relation to hard real-time constraints and predictable behavior. Third, the hardware implementation of HARTEX is designed to support the heterogeneous multi-core architecture. This hardware kernel has several advantages over a similar kernel implemented in software: higher-speed processing capability, parallel computation, and separation between the kernel itself and the applications being run. A microbenchmark has been used to compare the hardware kernel with the software kernel ...

  17. Fabrication of triangular nanobeam waveguide networks in bulk diamond using single-crystal silicon hard masks

    International Nuclear Information System (INIS)

    Bayn, I.; Mouradian, S.; Li, L.; Goldstein, J. A.; Schröder, T.; Zheng, J.; Chen, E. H.; Gaathon, O.; Englund, Dirk; Lu, M.; Stein, A.; Ruggiero, C. A.; Salzman, J.; Kalish, R.

    2014-01-01

    A scalable approach for integrated photonic networks in single-crystal diamond using triangular etching of bulk samples is presented. We describe designs of high quality factor (Q = 2.51 × 10^6) photonic crystal cavities with low mode volume (V_m = 1.062 × (λ/n)^3), which are connected via waveguides supported by suspension structures with a predicted transmission loss of only 0.05 dB. We demonstrate the fabrication of these structures using transferred single-crystal silicon hard masks and angular dry etching, yielding photonic crystal cavities in the visible spectrum with measured quality factors in excess of Q = 3 × 10^3.

  18. Radiation-Hard Complementary Integrated Circuits Based on Semiconducting Single-Walled Carbon Nanotubes.

    Science.gov (United States)

    McMorrow, Julian J; Cress, Cory D; Gaviria Rojas, William A; Geier, Michael L; Marks, Tobin J; Hersam, Mark C

    2017-03-28

    Increasingly complex demonstrations of integrated circuit elements based on semiconducting single-walled carbon nanotubes (SWCNTs) mark the maturation of this technology for use in next-generation electronics. In particular, organic materials have recently been leveraged as dopant and encapsulation layers to enable stable SWCNT-based rail-to-rail, low-power complementary metal-oxide-semiconductor (CMOS) logic circuits. To explore the limits of this technology in extreme environments, here we study total ionizing dose (TID) effects in enhancement-mode SWCNT-CMOS inverters that employ organic doping and encapsulation layers. Details of the evolution of the device transport properties are revealed by in situ and in operando measurements, identifying n-type transistors as the more TID-sensitive component of the CMOS system with over an order of magnitude larger degradation of the static power dissipation. To further improve device stability, radiation-hardening approaches are explored, resulting in the observation that SWCNT-CMOS circuits are TID-hard under dynamic bias operation. Overall, this work reveals conditions under which SWCNTs can be employed for radiation-hard integrated circuits, thus presenting significant potential for next-generation satellite and space applications.

  19. Effect of intracrystalline water on micro-Vickers hardness in tetragonal hen egg-white lysozyme single crystals

    International Nuclear Information System (INIS)

    Koizumi, H; Kawamoto, H; Tachibana, M; Kojima, K

    2008-01-01

    The mechanical properties of high-quality tetragonal hen egg-white lysozyme single crystals, one type of protein crystal, were investigated by the indentation method. The indentation marks were clearly observed on the crystal surface and showed no elastic recovery. The micro-Vickers hardness in the wet condition was estimated to be about 20 MPa at room temperature. The hardness depended strongly on the amount of intracrystalline (mobile) water contained in the crystals: it increased with increasing evaporation time in air at room temperature, reaching a maximum of about 260 MPa, 13 times the value in the wet condition. The origin of this change in hardness is explained in terms of dislocation mechanisms in lysozyme single crystals.

  20. Communication: The electronic structure of matter probed with a single femtosecond hard x-ray pulse

    Directory of Open Access Journals (Sweden)

    J. Szlachetko

    2014-03-01

    Full Text Available Physical, biological, and chemical transformations are initiated by changes in the electronic configuration of the species involved. These electronic changes occur on the timescales of attoseconds (10⁻¹⁸ s) to femtoseconds (10⁻¹⁵ s) and drive all subsequent electronic reorganization as the system moves to a new equilibrium or quasi-equilibrium state. The ability to detect the dynamics of these electronic changes is crucial for understanding the potential energy surfaces upon which chemical and biological reactions take place. Here, we report on the determination of the electronic structure of matter using a single self-seeded femtosecond x-ray pulse from the Linac Coherent Light Source hard x-ray free electron laser. By measuring the high energy resolution off-resonant spectrum (HEROS), we were able to obtain information about the electronic density of states with a single femtosecond x-ray pulse. We show that the unoccupied electronic states of the scattering atom may be determined on a shot-to-shot basis and that the measured spectral shape is independent of the large intensity fluctuations of the incoming x-ray beam. Moreover, we demonstrate the chemical sensitivity and single-shot capability and limitations of HEROS, which enables the technique to track the electronic structural dynamics in matter on femtosecond time scales, making it an ideal probe technique for time-resolved X-ray experiments.

  1. Single event effect hardness for the front-end ASICs in the DAMPE satellite BGO calorimeter

    Science.gov (United States)

    Gao, Shan-Shan; Jiang, Di; Feng, Chang-Qing; Xi, Kai; Liu, Shu-Bin; An, Qi

    2016-01-01

    The Dark Matter Particle Explorer (DAMPE) is a Chinese scientific satellite designed for cosmic ray studies with a primary scientific goal of indirect detection of dark matter particles. As a crucial sub-detector, the BGO calorimeter measures the energy spectrum of cosmic rays in the energy range from 5 GeV to 10 TeV. In order to implement high-density front-end electronics (FEE) with the ability to measure 1848 signals from 616 photomultiplier tubes on the strictly constrained satellite platform, two kinds of 32-channel front-end ASICs, VA160 and VATA160, are customized. However, a space mission period of more than 3 years makes single event effects (SEEs) a threat to reliability. In order to evaluate the SEE sensitivities of these chips and verify the effectiveness of mitigation methods, a series of laser-induced and heavy ion-induced SEE tests were performed. Benefiting from the single event latch-up (SEL) protection circuit for the power supply, the triple module redundancy (TMR) technology for the configuration registers and the optimized sequential design for the data acquisition process, 52 VA160 chips and 32 VATA160 chips have been applied in the flight model of the BGO calorimeter with radiation hardness assurance. Supported by Strategic Priority Research Program on Space Science of the Chinese Academy of Sciences (XDA04040202-4) and Fundamental Research Funds for the Central Universities (WK2030040048)

  2. Transverse-target single-spin azimuthal asymmetry in hard exclusive electroproduction of single pions at HERMES

    Energy Technology Data Exchange (ETDEWEB)

    Hristova, I.

    2007-12-15

    We present the analysis of data taken in the years 2002-2004 with the 27.56 GeV positron beam of the HERA storage ring at DESY and the internal transversely polarised hydrogen fixed target of the HERMES experiment. Events with a scattered positron and a produced pion are selected. Exclusive production of single pions, e⁺p → e⁺′nπ⁺, is ensured by requiring the missing mass in the event to be equal to the mass of the neutron, which is not detected. The cross section for this process depends on the Bjorken scaling variable, the four-momentum transfer, and the transverse four-momentum transfer, whose average values for our sample are ⟨x⟩ = 0.12, ⟨Q²⟩ = 2.3 GeV², ⟨t'⟩ = -0.18 GeV², respectively, and two azimuthal angles: the angle φ between the scattering and production planes (their common line contains the virtual photon), and the angle φ_S between the scattering plane and the target polarisation vector. The hard scattering is selected by requiring Q² > 1 GeV². The asymmetry, also called the transverse-target single-spin azimuthal asymmetry, is defined as the ratio of the difference to the sum of the cross sections for positive and negative target polarisation. It is characterised by six azimuthal sine modulations, whose amplitudes can vary from -1 to 1. We measure the asymmetry from a sample of 2093 events with a signal-to-background ratio of 1:1. At average kinematics, the values of the amplitudes are found to be small or consistent with zero, except for the amplitude A^{sin φ_S}_{UT,meas} = 0.38 ± 0.06(stat)^{+0.12}_{-0.06}(syst). The amplitude of main interest for comparison with theory, A^{sin(φ-φ_S)}_{UT,meas} = 0.09 ± 0.05(stat)^{+0.10}_{-0.03}(syst), after correction for the background contribution becomes A^{sin(φ-φ_S)}_{UT,bg.cor} = 0.22 ± 0.13(stat)^{+0.10}_{-0.04}(syst).

  3. Transverse-target single-spin azimuthal asymmetry in hard exclusive electroproduction of single pions at HERMES

    International Nuclear Information System (INIS)

    Hristova, I.

    2007-12-01

    We present the analysis of data taken in the years 2002-2004 with the 27.56 GeV positron beam of the HERA storage ring at DESY and the internal transversely polarised hydrogen fixed target of the HERMES experiment. Events with a scattered positron and a produced pion are selected. Exclusive production of single pions, e⁺p → e⁺′nπ⁺, is ensured by requiring the missing mass in the event to be equal to the mass of the neutron, which is not detected. The cross section for this process depends on the Bjorken scaling variable, the four-momentum transfer, and the transverse four-momentum transfer, whose average values for our sample are ⟨x⟩ = 0.12, ⟨Q²⟩ = 2.3 GeV², ⟨t'⟩ = -0.18 GeV², respectively, and two azimuthal angles: the angle φ between the scattering and production planes (their common line contains the virtual photon), and the angle φ_S between the scattering plane and the target polarisation vector. The hard scattering is selected by requiring Q² > 1 GeV². The asymmetry, also called the transverse-target single-spin azimuthal asymmetry, is defined as the ratio of the difference to the sum of the cross sections for positive and negative target polarisation. It is characterised by six azimuthal sine modulations, whose amplitudes can vary from -1 to 1. We measure the asymmetry from a sample of 2093 events with a signal-to-background ratio of 1:1. At average kinematics, the values of the amplitudes are found to be small or consistent with zero, except for the amplitude A^{sin φ_S}_{UT,meas} = 0.38 ± 0.06(stat)^{+0.12}_{-0.06}(syst). The amplitude of main interest for comparison with theory, A^{sin(φ-φ_S)}_{UT,meas} = 0.09 ± 0.05(stat)^{+0.10}_{-0.03}(syst), after correction for the background contribution becomes A^{sin(φ-φ_S)}_{UT,bg.cor} = 0.22 ± 0.13(stat)^{+0.10}_{-0.04}(syst). As a function of t', the measured values of this amplitude increase as √(-t'), and at larger |t'| the ...

  4. Single-Event Gate Rupture in Power MOSFETs: A New Radiation Hardness Assurance Approach

    Science.gov (United States)

    Lauenstein, Jean-Marie

    2011-01-01

    Almost every space mission uses vertical power metal-oxide-semiconductor field-effect transistors (MOSFETs) in its power-supply circuitry. These devices can fail catastrophically due to single-event gate rupture (SEGR) when exposed to energetic heavy ions. To reduce SEGR failure risk, the off-state operating voltages of the devices are derated based upon radiation tests at heavy-ion accelerator facilities. Testing is very expensive. Even so, data from these tests provide only a limited guide to on-orbit performance. In this work, a device simulation-based method is developed to measure the response to strikes from heavy ions unavailable at accelerator facilities but posing potential risk on orbit. This work is the first to show that the present derating factor, which was established from non-radiation reliability concerns, is appropriate to reduce on-orbit SEGR failure risk when applied to data acquired from ions with appropriate penetration range. A second important outcome of this study is the demonstration of the capability and usefulness of this simulation technique for augmenting SEGR data from accelerator beam facilities. The mechanisms of SEGR are two-fold: the gate oxide is weakened by the passage of the ion through it, and the charge ionized along the ion track in the silicon transiently increases the oxide electric field. Most hardness assurance methodologies consider the latter mechanism only. This work demonstrates through experiment and simulation that the gate oxide response should not be neglected. In addition, it validates the premise that SEGR is driven by the temporary weakening of the oxide due to the ion's interaction with it, rather than solely by the transient oxide field generated from within the silicon. Based upon these findings, a new approach to radiation hardness assurance for SEGR in power MOSFETs is defined to reduce SEGR risk in space flight projects. Finally, the potential impact of accumulated dose over the course of a space mission on SEGR

  5. HS-SPME-GC-MS/MS Method for the Rapid and Sensitive Quantitation of 2-Acetyl-1-pyrroline in Single Rice Kernels.

    Science.gov (United States)

    Hopfer, Helene; Jodari, Farman; Negre-Zakharov, Florence; Wylie, Phillip L; Ebeler, Susan E

    2016-05-25

    Demand for aromatic rice varieties (e.g., Basmati) is increasing in the US. Aromatic varieties typically have elevated levels of the aroma compound 2-acetyl-1-pyrroline (2AP). Due to its very low aroma threshold, analysis of 2AP provides a useful screening tool for rice breeders. Methods for 2AP analysis in rice should quantitate 2AP at or below sensory threshold level, avoid artifactual 2AP generation, and be able to analyze single rice kernels in cases where only small sample quantities are available (e.g., breeding trials). We combined headspace solid phase microextraction with gas chromatography tandem mass spectrometry (HS-SPME-GC-MS/MS) for analysis of 2AP, using an extraction temperature of 40 °C and a stable isotopologue as internal standard. 2AP calibrations were linear between the concentrations of 53 and 5380 pg/g, with detection limits below the sensory threshold of 2AP. Forty-eight aromatic and nonaromatic, milled rice samples from three harvest years were screened with the method for their 2AP content, and overall reproducibility, observed for all samples, ranged from 5% for experimental aromatic lines to 33% for nonaromatic lines.

  6. Characterization of temporal coherence of hard X-ray free-electron laser pulses with single-shot interferograms

    Directory of Open Access Journals (Sweden)

    Taito Osaka

    2017-11-01

    Full Text Available Temporal coherence is one of the most fundamental characteristics of light, connecting to spectral information through the Fourier transform relationship between time and frequency. Interferometers with a variable path-length difference (PLD) between the two branches have widely been employed to characterize temporal coherence properties for broad spectral regimes. Hard X-ray interferometers reported previously, however, have strict limitations in their operational photon energies, due to the specific optical layouts utilized to satisfy the stringent requirement for extreme stability of the PLD at sub-ångström scales. The work presented here characterizes the temporal coherence of hard X-ray free-electron laser (XFEL) pulses by capturing single-shot interferograms. Since the stability requirement is drastically relieved with this approach, it was possible to build a versatile hard X-ray interferometer composed of six separate optical elements to cover a wide photon energy range from 6.5 to 11.5 keV while providing a large variable delay time of up to 47 ps at 10 keV. A high visibility of up to 0.55 was observed at a photon energy of 10 keV. The visibility measurement as a function of time delay reveals a mean coherence time of 5.9 ± 0.7 fs, which agrees with that expected from the single-shot spectral information. This is the first result of characterizing the temporal coherence of XFEL pulses in the hard X-ray regime and is an important milestone towards ultra-high energy resolutions at micro-electronvolt levels in time-domain X-ray spectroscopy, which will open up new opportunities for revealing dynamic properties in diverse systems on timescales from femtoseconds to nanoseconds, associated with fluctuations from ångström to nanometre spatial scales.

  7. Flexible Scheduling in Multimedia Kernels: An Overview

    NARCIS (Netherlands)

    Jansen, P.G.; Scholten, Johan; Laan, Rene; Chow, W.S.

    1999-01-01

    Current Hard Real-Time (HRT) kernels have their timely behaviour guaranteed at the cost of a rather restrictive use of the available resources. This makes current HRT scheduling techniques inadequate for use in a multimedia environment where we can make a considerable profit by a better and more

  8. Robust Kernel (Cross-) Covariance Operators in Reproducing Kernel Hilbert Space toward Kernel Methods

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2016-01-01

    To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose a robust kernel covariance operator (robust kernel CO) and a robust kernel cross-covariance operator (robust kern...

  9. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
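
    To convey the flavour of learning in a sampled subspace, the sketch below builds a generic Nyström feature map and clusters in that subspace with ordinary k-means; this is a stand-in illustration under stated assumptions (RBF kernel, arbitrary bandwidth and landmark count), not the authors' AKCL/PAKCL algorithms themselves.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def nystrom_features(X, n_landmarks=50, gamma=0.5, seed=0):
    """Approximate kernel feature map via the Nystrom method (K ~ C W^-1 C')."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_landmarks, replace=False)
    L = X[idx]
    W = rbf_kernel(L, L, gamma=gamma)      # landmark-landmark kernel
    C = rbf_kernel(X, L, gamma=gamma)      # data-landmark kernel
    evals, evecs = np.linalg.eigh(W)
    evals = np.clip(evals, 1e-12, None)
    return C @ evecs / np.sqrt(evals)      # rows are approximate kernel-space features

# Cluster in the approximate feature space instead of using the full kernel matrix.
X = np.random.default_rng(1).normal(size=(1000, 5))
Phi = nystrom_features(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Phi)
print(np.bincount(labels))
```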

  10. Effect of PVP as a capping agent in single reaction synthesis of nanocomposite soft/hard ferrite nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Ahmad, H.A. [Department of Physics, Faculty of Science, Universiti Putra Malaysia, UPM, 43400 Serdang, Selangor (Malaysia); Saiden, N.M., E-mail: nlaily@upm.edu.my [Department of Physics, Faculty of Science, Universiti Putra Malaysia, UPM, 43400 Serdang, Selangor (Malaysia); Saion, E.; Azis, R.S.; Mamat, M.S. [Department of Physics, Faculty of Science, Universiti Putra Malaysia, UPM, 43400 Serdang, Selangor (Malaysia); Hashim, M. [Advanced Material and Nanotechnology Laboratory, Institute of Advanced Technology, Universiti Putra Malaysia, UPM, 43400 Serdang, Selangor (Malaysia)

    2017-04-15

    Nanocomposite magnets consisting of soft and hard ferrite phases are known as exchange spring magnets when the phases are sufficiently spin-exchange coupled. Hard and soft ferrites offer high coercivity, H_c, and high saturation magnetization, M_s, respectively. In order to obtain a better permanent magnet, the soft and hard ferrite phases need to be "exchange coupled". The nanoparticles were prepared by a simple one-pot technique with 80% soft phase and 20% hard phase. This technique involves a single reaction mixture of metal nitrates and an aqueous solution of varied amounts of polyvinylpyrrolidone (PVP). The heat treatment applied was 800 °C for 3 h. The synthesized composites were characterized by transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FT-IR), energy dispersive X-ray spectroscopy (EDX), X-ray diffraction (XRD) and vibrating sample magnetometry (VSM). The coexistence of two phases, Ni_{0.5}Zn_{0.5}Fe_2O_4 and SrFe_{12}O_{19}, was observed in the XRD patterns, and EDX verified that no impurities were detected. The magnetic properties of the nanocomposite ferrites prepared with 0.06 g/ml PVP were the best, with H_c = 932 G and M_s = 39.0 emu/g and an average particle size from FESEM of 49.2 nm. The concentration of PVP used affects the magnetic properties of the samples. - Highlights: • The amount of PVP plays an important role in controlling the particle size distribution and magnetic properties. • This is a novel technique to produce nanocomposite ferrites effectively. • This study contributes a better understanding of magnetic properties in nanoparticle composite magnets.

  11. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  12. Collision kernels in the eikonal approximation for Lennard-Jones interaction potential

    International Nuclear Information System (INIS)

    Zielinska, S.

    1985-03-01

    Velocity-changing collisions are conveniently described by collision kernels. These kernels depend on the interaction potential, and there is a need to evaluate them for realistic interatomic potentials. Using the collision kernels, we are able to investigate the redistribution of atomic populations caused by the laser light and velocity-changing collisions. In this paper we present a method of evaluating the collision kernels in the eikonal approximation. We discuss the influence of the potential parameters R_o^i and ε_o^i on the kernel width for a given atomic state. It turns out that, unlike the collision kernel for the hard-sphere model of scattering, the Lennard-Jones kernel is not as sensitive to changes of R_o^i. Contrary to the general tendency of approximating collision kernels by a Gaussian curve, kernels for the Lennard-Jones potential do not exhibit such behaviour. (author)

  13. Investigation of single defects created in crystals by laser emission and hard radiation

    International Nuclear Information System (INIS)

    Martynovich, E F; Dresvyanskiy, V P; Boychenko, S V; Rakevich, A L; Zilov, S A; Bagayev, S N

    2017-01-01

    The possibility of identifying radiation-created quantum systems via the characteristics of quantum trajectories of luminescence intensity measured on individual centers by confocal scanning fluorescence microscopy with the time-correlated single photon counting has been studied. Calculations of the quantum trajectories have been carried out by the density matrix method. Experimental studies have been carried out using a confocal microscope. (paper)

  14. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    Science.gov (United States)

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work was intended to study the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels forming treatments of 0, 40, 150, 350 or 1000 g kg⁻¹ broken kernels ratio. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05), and hardness was negatively correlated to the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  15. Optimized Kernel Entropy Components.

    Science.gov (United States)

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
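
    A bare-bones KECA-style projection is sketched below to make the entropy-ranking idea concrete: kernel eigenpairs are ranked by their contribution lambda_i * (e_i' 1)^2 to the Renyi entropy estimate rather than by variance. The Gaussian bandwidth and component count are arbitrary assumptions, and the ICA-style rotation and maximum-likelihood bandwidth selection of OKECA are not included.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def keca(X, n_components=2, sigma=1.0):
    """Kernel entropy component analysis: keep the kernel eigen-directions that
    contribute most to the Renyi entropy estimate, not those with largest variance."""
    K = rbf_kernel(X, X, gamma=1.0 / (2 * sigma**2))
    evals, evecs = np.linalg.eigh(K)                 # ascending eigenvalues
    ones = np.ones(len(X))
    contrib = evals * (evecs.T @ ones) ** 2          # entropy contribution per eigenpair
    idx = np.argsort(contrib)[::-1][:n_components]
    return evecs[:, idx] * np.sqrt(np.clip(evals[idx], 0, None))

X = np.random.default_rng(0).normal(size=(300, 10))
Z = keca(X, n_components=2, sigma=2.0)
print(Z.shape)   # (300, 2) projection onto the most entropy-preserving components
```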

  16. Subsampling Realised Kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...
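
    For orientation, a univariate realised-kernel estimator with Parzen weights is sketched below, assuming the non-flat-top form K(X) = sum_{|h|<=H} k(h/(H+1)) * gamma_h with realised autocovariances gamma_h; the bandwidth H and the simulated noisy returns are placeholders, and the subsampling analysis of the paper is not reproduced.

```python
import numpy as np

def parzen_weight(x):
    """Parzen kernel weight function commonly used for realised kernels."""
    x = abs(x)
    if x <= 0.5:
        return 1 - 6 * x**2 + 6 * x**3
    if x <= 1.0:
        return 2 * (1 - x)**3
    return 0.0

def realised_kernel(returns, H):
    """Realised kernel estimate of daily integrated variance from noisy returns."""
    r = np.asarray(returns, dtype=float)
    def gamma(h):                              # realised autocovariance of order h
        h = abs(h)
        return float(r[h:] @ r[:len(r) - h]) if h else float(r @ r)
    return sum(parzen_weight(h / (H + 1)) * gamma(h) for h in range(-H, H + 1))

# Hypothetical intraday returns: efficient price increments plus microstructure noise.
rng = np.random.default_rng(0)
n, daily_sigma = 2000, 0.01
r = rng.normal(0, daily_sigma / np.sqrt(n), size=n) + np.diff(rng.normal(0, 1e-4, n + 1))
print("realised kernel estimate:", realised_kernel(r, H=30))
```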

  17. Work Hard / Play Hard

    OpenAIRE

    Burrows, J.; Johnson, V.; Henckel, D.

    2016-01-01

    Work Hard / Play Hard was a participatory performance/workshop or CPD experience hosted by interdisciplinary arts atelier WeAreCodeX, in association with AntiUniversity.org. As a socially/economically engaged arts practice, Work Hard / Play Hard challenged employees/players to get playful, or go to work. 'The game changes you, you never change the game'. Employee PLAYER A 'The faster the better.' Employer PLAYER B

  18. Systematic study of radiation hardness of single crystal CVD diamond material investigated with an Au beam and IBIC method

    Energy Technology Data Exchange (ETDEWEB)

    Pietraszko, Jerzy; Koenig, Wolfgang; Traeger, Michael [GSI, Darmstadt (Germany); Draveny, Antoine; Galatyuk, Tetyana [TU, Darmstadt (Germany); Grilj, Veljko [RBI, Zagreb (Croatia); Collaboration: HADES-Collaboration

    2016-07-01

    For the future high-rate CBM experiment at FAIR, a radiation-hard and fast beam detector is required. The detector has to perform precise T0 measurement (σ < 50 ps) and should also offer decent beam monitoring capability. These tasks can be performed by utilizing a single-crystal chemical vapor deposition (ScCVD) diamond based detector. A prototype segmented detector has been constructed, and the properties of this detector have been studied with a high current density beam (about 3 × 10^6 ions/s/mm^2) of 1.23 A GeV Au ions in HADES. The irradiated detector properties have been studied at RBI in Zagreb by means of the IBIC method. Details of the design, the intrinsic properties of the detectors and their performance after irradiation with such a beam are reported.

  19. Iterative software kernels

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.

  20. Novel QCD Aspects of Hard Diffraction,Antishadowing, and Single-Spin Asymmetries

    Energy Technology Data Exchange (ETDEWEB)

    Brodsky, S.

    2004-10-15

    It is usually assumed--following the parton model--that the leading-twist structure functions measured in deep inelastic lepton-proton scattering are simply the probability distributions for finding quarks and gluons in the target nucleon. In fact, gluon exchange between the outgoing quarks and the target spectators affects the leading-twist structure functions in a profound way, leading to diffractive leptoproduction processes, shadowing and antishadowing of nuclear structure functions, and target spin asymmetries, physics not incorporated in the light-front wavefunctions of the target computed in isolation. In particular, final-state interactions from gluon exchange lead to single-spin asymmetries in semi-inclusive deep inelastic lepton-proton scattering which are not power-law suppressed in the Bjorken limit. The shadowing and antishadowing of nuclear structure functions in the Gribov-Glauber picture is due respectively to the destructive and constructive interference of amplitudes arising from the multiple scattering of quarks in the nucleus. The effective quark-nucleon scattering amplitude includes Pomeron and Odderon contributions from multi-gluon exchange as well as Reggeon quark-exchange contributions. Part of the anomalous NuTeV result for sin²θ_W could be due to the non-universality of nuclear antishadowing for charged and neutral currents. Detailed measurements of the nuclear dependence of individual quark structure functions are thus needed to establish the distinctive phenomenology of shadowing and antishadowing and to make the NuTeV results definitive. I also discuss diffraction dissociation as a tool for resolving hadron substructure Fock state by Fock state and for producing leading heavy quark systems.

  1. A Shack-Hartmann Sensor for Single-Shot Multi-Contrast Imaging with Hard X-rays

    Directory of Open Access Journals (Sweden)

    Tomy dos Santos Rolo

    2018-05-01

    Full Text Available An array of compound refractive X-ray lenses (CRL) with 20 × 20 lenslets, a focal distance of 20 cm and a visibility of 0.93 is presented. It can be used as a Shack-Hartmann sensor for hard X-rays (SHARX) for wavefront sensing and permits true single-shot multi-contrast imaging of the dynamics of materials with a spatial resolution in the micrometer range, sensitivity to nanosized structures and temporal resolution on the microsecond scale. The object's absorption and its induced wavefront shift can be assessed simultaneously together with information from diffraction channels. In contrast to the established Hartmann sensors the SHARX has an increased flux efficiency through focusing of the beam rather than blocking parts of it. We investigated the spatiotemporal behavior of a cavitation bubble induced by laser pulses. Furthermore, we validated the SHARX by measuring refraction angles of a single diamond CRL, where we obtained an angular resolution better than 4 μrad.

  2. Classification With Truncated Distance Kernel.

    Science.gov (United States)

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
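
    The sketch below shows how such a kernel can be dropped into a standard toolbox, assuming the truncated L1 form k(x, y) = max(rho - ||x - y||_1, 0); the parameter rho and the toy dataset are arbitrary, and because the kernel is not positive semidefinite the solver is used outside its usual guarantees, as the abstract notes.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

RHO = 2.0   # hypothetical truncation parameter

def tl1_kernel(A, B, rho=RHO):
    """Truncated L1-distance kernel: k(x, y) = max(rho - ||x - y||_1, 0)."""
    d = np.abs(A[:, None, :] - B[None, :, :]).sum(axis=2)
    return np.maximum(rho - d, 0.0)

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVC accepts a callable kernel; note the TL1 kernel is not PSD, so the
# underlying optimisation problem may be non-convex, mirroring the caveat above.
clf = SVC(kernel=tl1_kernel, C=1.0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```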

  3. Ordered mesoporous crystalline gamma-Al2O3 with variable architecture and porosity from a single hard template.

    Science.gov (United States)

    Wu, Zhangxiong; Li, Qiang; Feng, Dan; Webley, Paul A; Zhao, Dongyuan

    2010-09-01

    In this paper, an efficient route is developed for controllable synthesis of ordered mesoporous alumina (OMA) materials with variable pore architectures and high mesoporosity, as well as crystalline framework. The route is based on the nanocasting pathway with bimodal mesoporous carbon as the hard template. In contrast to conventional reports, we first realize the possibility of creating two ordered mesopore architectures by using a single carbon hard template obtained from organic-organic self-assembly, which is also the first time such carbon materials are adopted to replicate ordered mesoporous materials. The mesopore architecture and surface property of the carbon template are rationally designed in order to obtain ordered alumina mesostructures. We found that the key factors rely on the unique bimodal mesopore architecture and surface functionalization of the carbon hard template. Namely, the bimodal mesopores (2.3 and 5.9 nm) and the surface functionalities make it possible to selectively load alumina into the small mesopores dominantly and/or with a layer of alumina coated on the inner surface of the large primary mesopores with different thicknesses until full loading is achieved. Thus, OMA materials with variable pore architectures (similar and reverse mesostructures relative to the carbon template) and controllable mesoporosity in a wide range are achieved. Meanwhile, in situ ammonia hydrolysis for conversion of the metal precursor to its hydroxide is helpful for easy crystallization (as low as approximately 500 °C). Well-crystallized alumina frameworks composed of γ-Al2O3 nanocrystals with sizes of 6-7 nm are obtained after burning out the carbon template at 600 °C, which is advantageous over soft-templated aluminas. The effects of synthesis factors are demonstrated and discussed relative to control experiments. Furthermore, our method is versatile enough to be used for general synthesis of other important but difficult

  4. Kernels for structured data

    CERN Document Server

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  5. Gradient-based adaptation of general gaussian kernels.

    Science.gov (United States)

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard margin support vector machines on toy data.

  6. Locally linear approximation for Kernel methods : the Railway Kernel

    OpenAIRE

    Muñoz, Alberto; González, Javier

    2008-01-01

    In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...

  7. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  8. Observation of Ortho-III correlations by neutron and hard x-ray scattering in an untwinned YBa2Cu3O6.77 single crystal

    DEFF Research Database (Denmark)

    Schleger, P.; Casalta, H.; Hadfield, R.

    1995-01-01

    We present measurements of Ortho-III phase correlations in an untwinned single crystal of YBa2Cu3O6.77 by neutron scattering and the novel method of hard (95 keV) X-ray scattering. The Ortho-III ordering is essentially two-dimensional, exhibiting Lorentzian peak shapes in the a-b plane. At room...

  9. Comparison of single and mixed ion implantation effects on the changes of the surface hardness, light transmittance, and electrical conductivity of polymeric materials

    International Nuclear Information System (INIS)

    Park, J. W.; Lee, J. H.; Lee, J. S.; Kil, J. G.; Choi, B. H.; Han, Z. H.

    2001-01-01

    Single or mixed ions of N, He and C were implanted into transparent PET (polyethylene terephthalate) at ion energies below 100 keV, and the surface hardness, light transmittance and electrical conductivity were examined. As measured by nanoindentation, mixed ion implantations such as N⁺+He⁺ or N⁺+C⁺ exhibited a larger increase in surface hardness than single ion implantation. In particular, implantation of C+N ions increased the surface hardness by about three times compared to implantation of N ions alone, which corresponds to a more than 10-fold increase over untreated PET. Surface electrical conductivity increased along with the hardness. The conductivity increase was more nearly proportional to the hardness at higher ion energy and ion dose, while it showed no clear relationship at ion energies as low as 50 keV. Light at the 550 nm wavelength (visual range) was transmitted at more than 85%, which is close to that of as-received PET, while rays at wavelengths below 300 nm (UV range) were absorbed by more than 95% when travelling through the sheet, implying that there are processing parameters for which the ion-implanted PET maintains its transparency and absorbs UV rays

  10. Analog forecasting with dynamics-adapted kernels

    Science.gov (United States)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
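
    The sketch below illustrates the core idea of kernel-weighted analog ensembles on a toy scalar record: delay-embed the history, weight historical analogs of the initial data by a Gaussian similarity kernel, and forecast as the weighted average of their successors. The embedding length, bandwidth, lead time, and synthetic series are assumptions; the dynamics-adapted kernels and out-of-sample extension machinery of the paper are not reproduced.

```python
import numpy as np

def delay_embed(series, q):
    """Takens delay-coordinate map: stack q consecutive observations per state."""
    return np.stack([series[i:i + q] for i in range(len(series) - q + 1)])

def kernel_analog_forecast(history, init, q=5, lead=10, epsilon=0.5):
    """Forecast `lead` steps ahead as a kernel-weighted ensemble of historical analogs."""
    states = delay_embed(history, q)
    usable = states[:len(states) - lead]          # analogs whose future is in the record
    dists = np.linalg.norm(usable - init, axis=1)
    w = np.exp(-dists**2 / epsilon**2)            # local similarity kernel
    w /= w.sum()
    futures = history[q - 1 + lead : q - 1 + lead + len(usable)]
    return float(w @ futures)

# Toy quasi-periodic record (hypothetical observable) and the current initial window.
rng = np.random.default_rng(0)
t = np.arange(5000) * 0.05
series = np.sin(t) + 0.3 * np.sin(2.1 * t) + 0.05 * rng.normal(size=t.size)
init = series[-5:]                                # current delay-embedded initial data
print("forecast:", kernel_analog_forecast(series[:-5], init))
```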

  11. Hard electronics; Hard electronics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    Hard material technologies were surveyed to establish the hard electronic technology which offers superior characteristics under hard operational or environmental conditions as compared with conventional Si devices. The following technologies were surveyed separately: (1) the device and integration technologies of wide-gap hard semiconductors such as SiC, diamond and nitride, (2) the technology of hard semiconductor devices for vacuum micro-electronics, and (3) the technology of hard new material devices for oxides. The formation technology of oxide thin films made remarkable progress after the discovery of oxide superconductor materials, resulting in the development of an atomic layer growth method and a mist deposition method. This leading research is expected to solve issues that are difficult to realize with current Si technology, such as high-power, high-frequency and low-loss devices in power electronics, high-temperature-proof and radiation-proof devices in ultimate electronics, and high-speed, densely integrated devices in information electronics. 432 refs., 136 figs., 15 tabs.

  12. An SVM model with hybrid kernels for hydrological time series

    Science.gov (United States)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
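
    A minimal sketch of such a hybrid kernel is shown below, assuming a convex combination of an RBF kernel and a polynomial kernel inside a support vector regressor; the mixing weight, kernel parameters, and toy data are placeholders rather than the calibrated settings used for the gaging stations.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

# Hypothetical mixing weight and kernel parameters; in practice these would be
# tuned (e.g. by cross-validation) for each station and lead time.
ALPHA, GAMMA, DEGREE = 0.7, 0.1, 2

def hybrid_kernel(A, B):
    """Convex combination of an RBF kernel and a polynomial kernel."""
    return (ALPHA * rbf_kernel(A, B, gamma=GAMMA)
            + (1 - ALPHA) * polynomial_kernel(A, B, degree=DEGREE))

# Toy regression stand-in for monthly flow prediction from lagged predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]**2 + 0.1 * rng.normal(size=240)

model = SVR(kernel=hybrid_kernel, C=10.0).fit(X[:200], y[:200])
print("held-out R^2:", model.score(X[200:], y[200:]))
```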

  13. Dose point kernels for beta-emitting radioisotopes

    International Nuclear Information System (INIS)

    Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.

    1986-01-01

    Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. That treatment neglects fluctuations in energy deposition, an effect which has later been incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed. This provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been obtained through an extension of the Bethe-Bacher approximation and tested against the exact expression. Simplified expressions for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for ³²P with experimental data indicates good agreement, with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables

  14. Mapping quantitative trait loci for a unique 'super soft' kernel trait in soft white wheat

    Science.gov (United States)

    Wheat (Triticum sp.) kernel texture is an important factor affecting milling, flour functionality, and end-use quality. Kernel texture is normally characterized as either hard or soft, the two major classes of texture. However, further variation is typically encountered in each class. Soft wheat var...

  15. Realized kernels in practice

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger

    2009-01-01

    Realized kernels use high-frequency data to estimate daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock ... and find a remarkable level of agreement. We identify some features of the high-frequency data which are challenging for realized kernels. They arise when there are local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated ...

  16. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows us to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  17. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
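
    A rough sketch of the adaptive-metric idea, under simplifying assumptions, is given below: Nadaraya-Watson kernel regression with one bandwidth per input dimension, chosen by minimising a leave-one-out error. A derivative-free optimiser stands in for the gradient-based procedure of the paper, and the toy data are hypothetical. Irrelevant dimensions should end up with large bandwidths, effectively down-weighting them, which is the importance adjustment the abstract refers to.

```python
import numpy as np
from scipy.optimize import minimize

def nw_loo_error(log_scales, X, y):
    """Leave-one-out squared error of Nadaraya-Watson regression with a diagonal metric."""
    scales = np.exp(log_scales)                      # one bandwidth per input dimension
    D = ((X[:, None, :] - X[None, :, :]) / scales) ** 2
    K = np.exp(-0.5 * D.sum(axis=2))
    np.fill_diagonal(K, 0.0)                         # exclude each point from its own fit
    pred = (K @ y) / np.clip(K.sum(axis=1), 1e-12, None)
    return float(np.mean((pred - y) ** 2))

# Toy data in which only the first of three inputs is relevant.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 3))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=200)

res = minimize(nw_loo_error, x0=np.zeros(3), args=(X, y), method="Nelder-Mead")
print("adapted per-dimension bandwidths:", np.exp(res.x))
```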

  18. Irradiation of zinc single crystal with 500 keV singly-charged carbon ions: surface morphology, structure, hardness, and chemical modifications

    Science.gov (United States)

    Waqas Khaliq, M.; Butt, M. Z.; Saleem, Murtaza

    2017-07-01

    Cylindrical specimens of (1 0 4)-oriented zinc single crystal (diameter = 6 mm, length = 5 mm) were irradiated with 500 keV C+1 ions with the help of a Pelletron accelerator. Six specimens were irradiated in ultra-high vacuum (~10^-8 Torr) with different ion doses, namely 3.94 × 10^14, 3.24 × 10^15, 5.33 × 10^15, 7.52 × 10^15, 1.06 × 10^16, and 1.30 × 10^16 ions cm^-2. A field emission scanning electron microscope (FESEM) was utilized for the morphological study of the irradiated specimens. Formation of nano- and sub-micron size rods, clusters, flower- and fork-like structures, etc., was observed. Surface roughness of the irradiated specimens showed an increasing trend with ion dose. Energy dispersive x-ray spectroscopy (EDX) was used to determine chemical modifications in the specimens. It was found that the carbon content varied in the range 22.86-31.20 wt.% and that the oxygen content was almost constant, with an average value of 10.16 wt.%; the balance was zinc. Structural parameters, i.e. crystallite size and lattice strain, were determined by Williamson-Hall analysis using x-ray diffraction (XRD) patterns of the irradiated specimens. Both crystallite size and lattice strain showed a decreasing trend with increasing ion dose, and a good linear relationship between crystallite size and lattice strain was observed. Surface hardness showed a decreasing trend with ion dose and followed an inverse Hall-Petch relation. FTIR spectra of the specimens revealed that the absorption bands gradually diminish as the dose of singly-charged carbon ions is increased from 3.94 × 10^14 ions cm^-2 to 1.30 × 10^16 ions cm^-2, indicating progressive deterioration of chemical bonds with increasing ion dose.

  19. Kernel methods for deep learning

    OpenAIRE

    Cho, Youngmin

    2012-01-01

    We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...
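
    As a concrete example of a kernel that mimics a large neural network, the sketch below implements the arc-cosine construction for threshold units (order 0) and rectified-linear units (order 1) and plugs it into a support vector machine; the closed-form expressions are the commonly cited ones and are stated here as an assumption, since the abstract does not spell them out.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_circles

def arc_cosine_kernel(X, Y, order=1):
    """Arc-cosine kernel mimicking an infinite hidden layer of threshold (order 0)
    or rectified-linear (order 1) units."""
    nx = np.linalg.norm(X, axis=1, keepdims=True)
    ny = np.linalg.norm(Y, axis=1, keepdims=True)
    cos = np.clip((X @ Y.T) / np.clip(nx * ny.T, 1e-12, None), -1.0, 1.0)
    theta = np.arccos(cos)
    if order == 0:
        return 1.0 - theta / np.pi
    # order == 1: (|x||y| / pi) * (sin(theta) + (pi - theta) * cos(theta))
    return (nx * ny.T) / np.pi * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

X, y = make_circles(n_samples=400, noise=0.1, factor=0.4, random_state=0)
clf = SVC(kernel=lambda A, B: arc_cosine_kernel(A, B, order=1)).fit(X[:300], y[:300])
print("test accuracy:", clf.score(X[300:], y[300:]))
```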

  20. Soft Sensing of Key State Variables in Fermentation Process Based on Relevance Vector Machine with Hybrid Kernel Function

    Directory of Open Access Journals (Sweden)

    Xianglin ZHU

    2014-06-01

    To resolve the difficulty of online detection of some important state variables in the fermentation process with traditional instruments, a soft sensing modeling method based on the relevance vector machine (RVM) with a hybrid kernel function is presented. Based on the characteristic analysis of two commonly used kernel functions, namely the local Gaussian kernel function and the global polynomial kernel function, a hybrid kernel function combining the merits of both is constructed. To design optimal parameters of this kernel function, the particle swarm optimization (PSO) algorithm is applied. The proposed modeling method is used to predict the cell concentration in the lysine fermentation process. Simulation results show that the presented hybrid-kernel RVM model has better accuracy and performance than the single-kernel RVM model.
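
    The hybrid kernel itself is easy to sketch: a convex combination of a local Gaussian (RBF) kernel and a global polynomial kernel. The weight and kernel parameters below are illustrative placeholders; the RVM training and the PSO search over these parameters are not reproduced.

```python
# Sketch of the hybrid kernel: a convex combination of a local Gaussian (RBF)
# kernel and a global polynomial kernel. lam, gamma, degree, coef0 are
# illustrative values; in the record above they are tuned by PSO.
import numpy as np

def hybrid_kernel(X, Z, lam=0.7, gamma=0.5, degree=2, coef0=1.0):
    """K = lam * K_rbf + (1 - lam) * K_poly, with 0 <= lam <= 1."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * X @ Z.T
    k_rbf = np.exp(-gamma * sq)                 # local component
    k_poly = (X @ Z.T + coef0) ** degree        # global component
    return lam * k_rbf + (1 - lam) * k_poly

X = np.random.default_rng(0).normal(size=(5, 3))
print(hybrid_kernel(X, X).shape)                # (5, 5) Gram matrix for any kernel machine
```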

  1. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...

  2. Kernel bundle EPDiff

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...

  3. Kernel structures for Clouds

    Science.gov (United States)

    Spafford, Eugene H.; Mckendry, Martin S.

    1986-01-01

    An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.

  4. Comparison of soft- and hard-switching efficiency in a three-level single-phase 60 kW dc-ac converter

    DEFF Research Database (Denmark)

    Munk-Nielsen, Stig; Teodorescu, Remus; Bech, Michael Møller

    2003-01-01

    Efficiency measurements on a three-level single-phase soft-switched converter are presented and show a slightly improved efficiency compared with the hard-switched converter for output powers higher than 25% of rated power. The resonant converter switches are Zero Voltage Switched (ZVS......) and a simple resonant circuit is used. Increased resonant converter efficiency enables a reduction in the semiconductor size per watt of output power or an increase in the switching frequency....

  5. Flexible Scheduling by Deadline Inheritance in Soft Real Time Kernels

    NARCIS (Netherlands)

    Jansen, P.G.; Wygerink, Emiel

    1996-01-01

    Current Hard Real Time (HRT) kernels have their timely behaviour guaranteed at the cost of a rather restrictive use of the available resources. This makes HRT scheduling techniques inadequate for use in a Soft Real Time (SRT) environment where we can make a considerable profit by a better and more...

  6. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Directory of Open Access Journals (Sweden)

    Hailun Wang

    2017-01-01

    The support vector regression algorithm is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression based on adaptive fusion of a mixed kernel function is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined together as the state vector. Thus, the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function's weighting coefficients, the kernel parameters, and the regression parameters. Compared with single kernel functions, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
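
    As an illustration of using such a mixed kernel inside a kernel machine, the sketch below plugs a fixed RBF-plus-polynomial kernel into scikit-learn's SVR via a callable kernel. The cubature-Kalman-filter state estimation of the fusion and model parameters described above is not reproduced; all parameter values are assumptions for the example.

```python
# Sketch: support vector regression with a fixed mixed (RBF + polynomial) kernel
# passed to scikit-learn's SVR as a callable. The adaptive fusion via a cubature
# Kalman filter is not reproduced; w, gamma, degree, C, epsilon are assumptions.
import numpy as np
from sklearn.svm import SVR

def mixed_kernel(A, B, w=0.6, gamma=1.0, degree=3):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return w * np.exp(-gamma * sq) + (1 - w) * (A @ B.T + 1.0) ** degree

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(150, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1] + 0.05 * rng.standard_normal(150)

svr = SVR(kernel=mixed_kernel, C=10.0, epsilon=0.01).fit(X, y)
print("training R^2:", round(svr.score(X, y), 3))
```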

  7. Reduced multiple empirical kernel learning machine.

    Science.gov (United States)

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is undesirable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, it is the first method to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings a simpler computation and meanwhile needs less storage space, especially during testing. Finally, the experimental results show that RMEKLM delivers an efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3

  8. Viscosity kernel of molecular fluids

    DEFF Research Database (Denmark)

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

    , temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have by contrast less impact on the overall normalized shape. Functional...... forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means...

  9. Beyond the single-file fluid limit using transfer matrix method: Exact results for confined parallel hard squares

    International Nuclear Information System (INIS)

    Gurin, Péter; Varga, Szabolcs

    2015-01-01

    We extend the transfer matrix method of one-dimensional hard core fluids placed between confining walls to the case where the particles can pass each other and at most two layers can form. We derive an eigenvalue equation for a quasi-one-dimensional system of hard squares confined between two parallel walls, where the pore width is between σ and 3σ (σ is the side length of the square). The exact equation of state and the nearest neighbor distribution functions show three different structures: a fluid phase with one layer, a fluid phase with two layers, and a solid-like structure where the fluid layers are strongly correlated. The structural transition between the differently ordered fluids develops continuously with increasing density, i.e., no thermodynamic phase transition occurs. The high density structure of the system consists of clusters with two layers which are broken up by particles staying in the middle of the pore.

  10. Variable Kernel Density Estimation

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1992-01-01

    We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...

  11. Steerability of Hermite Kernel

    Czech Academy of Sciences Publication Activity Database

    Yang, Bo; Flusser, Jan; Suk, Tomáš

    2013-01-01

    Roč. 27, č. 4 (2013), 1354006-1-1354006-25 ISSN 0218-0014 R&D Projects: GA ČR GAP103/11/1552 Institutional support: RVO:67985556 Keywords: Hermite polynomials * Hermite kernel * steerability * adaptive filtering Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.558, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/yang-0394387.pdf

  12. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    Science.gov (United States)

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
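
    A compact sketch of the composite-kernel KELM idea (not the authors' code): a weighted sum of Gaussian, polynomial, sigmoid and Morlet-type wavelet base kernels feeds the closed-form kernel ELM solution beta = (I/C + K)^-1 T. The weights and kernel parameters are fixed by hand here; in the paper they are found by QPSO.

```python
# Sketch of a kernel ELM with a weighted composite kernel built from Gaussian,
# polynomial, sigmoid and Morlet-type wavelet base kernels. Weights and kernel
# parameters are hand-picked placeholders (the paper optimises them with QPSO).
import numpy as np

def base_kernels(A, B, gamma=0.5, degree=2, a=0.01, b=0.1, dil=2.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    diff = A[:, None, :] - B[None, :, :]
    k_gauss = np.exp(-gamma * sq)
    k_poly = (A @ B.T + 1.0) ** degree
    k_sig = np.tanh(a * (A @ B.T) + b)
    k_wave = np.prod(np.cos(1.75 * diff / dil) * np.exp(-diff**2 / (2 * dil**2)), axis=-1)
    return [k_gauss, k_poly, k_sig, k_wave]

def composite_kernel(A, B, weights=(0.4, 0.2, 0.2, 0.2)):
    return sum(w * k for w, k in zip(weights, base_kernels(A, B)))

def kelm_fit(X, T, C=100.0):
    """Closed-form KELM output weights: beta = (I/C + K)^-1 T."""
    K = composite_kernel(X, X)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kelm_predict(X_train, beta, X_test):
    return composite_kernel(X_test, X_train) @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))                       # stand-in for e-nose feature vectors
labels = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)   # toy two-class problem
T = np.eye(2)[labels]                               # one-hot targets
beta = kelm_fit(X, T)
pred = kelm_predict(X, beta, X).argmax(axis=1)
print("training accuracy:", (pred == labels).mean())
```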

  13. Exploration of Shorea robusta (Sal) seeds, kernels and its oil

    Directory of Open Access Journals (Sweden)

    Shashi Kumar C.

    2016-12-01

    Physical, mechanical, and chemical properties of Shorea robusta seed with wing, seed without wing, and kernel were investigated in the present work. The physico-chemical composition of sal oil was also analyzed. The physico-mechanical properties and proximate composition of seed with wing, seed without wing, and kernel at three moisture contents of 9.50% (w.b.), 9.54% (w.b.), and 12.14% (w.b.), respectively, were studied. The results show that the moisture content of the kernel was the highest compared to seed with wing and seed without wing. The sphericity of the kernel was closer to that of a sphere compared to seed with wing and seed without wing. The hardness of the seed with wing (32.32 N/mm) and seed without wing (42.49 N/mm) was lower than that of the kernels (72.14 N/mm). The proximate composition, including moisture, protein, carbohydrate, oil, crude fiber, and ash content, was also determined. The kernel (30.20%, w/w) contains a higher oil percentage compared to seed with wing and seed without wing. The scientific data from this work are important for the design of equipment and processes for post-harvest value addition of sal seeds.

  14. The definition of kernel Oz

    OpenAIRE

    Smolka, Gert

    1994-01-01

    Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...

  15. Comparison of three types of XPAD3.2/CdTe single chip hybrids for hard X-ray applications in material science and biomedical imaging

    Energy Technology Data Exchange (ETDEWEB)

    Buton, C., E-mail: clement.buton@synchrotron-soleil.fr [Synchrotron SOLEIL, L´Orme des Merisiers, Saint-Aubin — BP 48 91192, Gif-sur-Yvette Cedex (France); Dawiec, A. [Synchrotron SOLEIL, L´Orme des Merisiers, Saint-Aubin — BP 48 91192, Gif-sur-Yvette Cedex (France); Graber-Bolis, J.; Arnaud, K. [CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille (France); Bérar, J.F.; Blanc, N.; Boudet, N. [Université Grenoble Alpes, Institut NÉEL, F-38042 Grenoble (France); CNRS, Institut NÉEL, F-38042 Grenoble (France); Clémens, J.C.; Debarbieux, F. [CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille (France); Delpierre, P.; Dinkespiler, B. [imXPAD SAS — Espace Mistral, Athélia IV, 297 avenue du Mistral, 13600 La Ciotat (France); Gastaldi, T. [CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille (France); Hustache, S. [Synchrotron SOLEIL, L´Orme des Merisiers, Saint-Aubin — BP 48 91192, Gif-sur-Yvette Cedex (France); Morel, C.; Pangaud, P. [CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille (France); Perez-Ponce, H. [imXPAD SAS — Espace Mistral, Athélia IV, 297 avenue du Mistral, 13600 La Ciotat (France); Vigeolas, E. [CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille (France)

    2014-09-11

    The CHIPSPECT consortium aims at building a large multi-module CdTe-based photon-counting detector for hard X-ray applications. For this purpose, we tested nine XPAD3.2 single chip hybrids in various configurations (i.e. Ohmic vs. Schottky contacts, or electron vs. hole collection mode) in order to select the best-performing and best-suited configuration for our experimental requirements. Measurements have been done using both X-ray synchrotron beams and a {sup 241}Am source. Preliminary results on the image quality, calibration, stability, homogeneity and linearity of the different types of detectors are presented.

  16. 7 CFR 981.7 - Edible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  17. 7 CFR 981.408 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  18. 7 CFR 981.8 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  19. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement error of certain types and can also handle non-synchronous trading. It is the first estimator...... which has these three properties which are all essential for empirical work in this area. We derive the large sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results to previous work which has used...

  20. Clustering via Kernel Decomposition

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

    Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain...... posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods as is demonstrated on a number of diverse data sets....

  1. Comment on "Revisiting the definition of local hardness and hardness kernel" by C. A. Polanco-Ramirez, M. Franco-Pérez, J. Carmona-Espíndola, J. L. Gázquez and P. W. Ayers, Phys. Chem. Chem. Phys., 2017, 19, 12355.

    Science.gov (United States)

    Guégan, F; Lamine, W; Chermette, H; Morell, C

    2018-03-28

    In a recent article Polanco-Ramirez et al. proposed new definitions of local chemical potential and local hardness starting from the first derivative of the energy with respect to the number of electrons and a smart use of the chain rule. In this comment we show that this derivation appears naturally in the Taylor expansion of the energy, showing that the construction of Polanco-Ramirez et al. is not artificially built.

  2. Multiple kernel boosting framework based on information measure for classification

    International Nuclear Information System (INIS)

    Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun

    2016-01-01

    The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results than single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper the Kullback–Leibler kernel function is derived to develop SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies Adaboost to learning multiple kernel-based classifiers. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optimum Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has a stable behavior and a noticeable accuracy for different data sets.

  3. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  4. Robotic intelligence kernel

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative, and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.

  5. Development of Compton X-ray spectrometer for high energy resolution single-shot high-flux hard X-ray spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Kojima, Sadaoki, E-mail: kojima-s@ile.osaka-u.ac.jp, E-mail: sfujioka@ile.osaka-u.ac.jp; Ikenouchi, Takahito; Arikawa, Yasunobu; Sakata, Shohei; Zhang, Zhe; Abe, Yuki; Nakai, Mitsuo; Nishimura, Hiroaki; Shiraga, Hiroyuki; Fujioka, Shinsuke, E-mail: kojima-s@ile.osaka-u.ac.jp, E-mail: sfujioka@ile.osaka-u.ac.jp; Azechi, Hiroshi [Institute of Laser Engineering, Osaka University, 2-6 Yamada-oka, Suita, Osaka 565-0871 (Japan); Ozaki, Tetsuo [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Miyamoto, Shuji; Yamaguchi, Masashi; Takemoto, Akinori [Laboratory of Advanced Science and Technology for Industry, University of Hyogo, 3-1-2 Kouto, Kamigori-cho, Ako-gun, Hyogo 678-1205 (Japan)

    2016-04-15

    Hard X-ray spectroscopy is an essential diagnostic used to understand physical processes that take place in high energy density plasmas produced by intense laser-plasma interactions. A bundle of hard X-ray detectors, whose responses have different energy thresholds, is used as a conventional single-shot spectrometer for high-flux (>10{sup 13} photons/shot) hard X-rays. However, high energy resolution (Δhv/hv < 0.1) is not achievable with a differential energy threshold (DET) X-ray spectrometer because its energy resolution is limited by the energy differences between the response thresholds. Experimental demonstration of a Compton X-ray spectrometer has already been performed for obtaining higher energy resolution than that of DET spectrometers. In this paper, we describe design details of the Compton X-ray spectrometer, especially the dependence of the energy resolution and absolute response on the photon-electron converter design and its background reduction scheme, and also its application to a laser-plasma interaction experiment. The developed spectrometer was used for spectroscopy of bremsstrahlung X-rays generated by intense laser-plasma interactions using a 200 μm thick SiO{sub 2} converter. The X-ray spectrum obtained with the Compton X-ray spectrometer is consistent with that obtained with a DET X-ray spectrometer; furthermore, higher certainty of the spectral intensity is obtained with the Compton X-ray spectrometer than with the DET X-ray spectrometer in the photon energy range above 5 MeV.

  6. Soft and Hard Tissue Changes Following Immediate Placement or Immediate Restoration of Single-Tooth Implants in the Esthetic Zone: A Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Yan, Qi; Xiao, Li-Qun; Su, Mei-Ying; Mei, Yan; Shi, Bin

    This systematic review aimed to compare immediate protocols with conventional protocols of single-tooth implants in terms of changes in the surrounding hard and soft tissue in the esthetic area. Electronic and manual searches were performed in PubMed, EMBASE, Cochrane, and other data systems for research articles published between January 2001 and December 2014. Only randomized controlled trials (RCTs) reporting on hard and/or soft tissue characteristics following a single-tooth implant were included. Based on the protocol used in each study, the included studies were categorized into three groups to assess the relationships between the factors and related esthetic indices. Variables such as marginal bone level changes (mesial, distal, and mean bone level), peri-implant soft tissue changes (papilla level, midbuccal mucosa, and probing depth), and other esthetic indices were taken into consideration. The data were analyzed using RevMan version 5.3, Stata 12, and GRADEpro 3.6.1 software. A total of 13 RCTs met the inclusion criteria. Four studies examined immediate implant placement, five studies examined immediate implant restoration, and four studies examined immediate loading. Comparing the bone level changes following immediate and conventional restoration, no significant differences were found in the bone level of the mesial site (standard mean difference [SMD] = -0.04 mm; 95% confidence interval [CI]: -0.25 to 0.17 mm), the distal site (SMD = -0.15 mm; 95% CI: -0.38 to 0.09 mm), and the mean bone level changes (SMD = 0.05 mm; 95% CI: -0.18 to 0.27 mm). The difference in the marginal bone level changes between immediate and conventional loading was also not statistically significant (SMD = -0.05 mm; 95% CI: -0.15 to 0.06 mm for the mesial site and SMD = -0.02 mm; 95% CI: -0.09 to 0.05 mm for the distal site). Soft tissue changes following immediate and conventional restoration showed no significant differences in the papillae level of the mesial site (SMD = 0

  7. Wheat Quality Council, Hard Spring Wheat Technical Committee, 2017 Crop

    Science.gov (United States)

    Nine experimental lines of hard spring wheat were grown at up to six locations in 2017 and evaluated for kernel, milling, and bread baking quality against the check variety Glenn. Wheat samples were submitted through the Wheat Quality Council and processed and milled at the USDA-ARS Hard Red Spring...

  8. Dispersion representations for hard exclusive processes. Beyond the born approximation

    International Nuclear Information System (INIS)

    Diehl, M.; Ivanov, D.Yu.

    2007-07-01

    Several hard exclusive scattering processes admit a description in terms of generalized parton distributions and perturbative hard-scattering kernels. Both the physical amplitude and the hard-scattering kernels fulfill dispersion relations. We give a detailed investigation of their consistency at all orders in perturbation theory. The results shed light on the information about generalized parton distributions that can be extracted from the real and imaginary parts of exclusive amplitudes. They also provide a practical consistency check for models of these distributions in which Lorentz invariance is not exactly satisfied. (orig.)

  9. Mixture Density Mercer Kernels: A Method to Learn Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  10. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
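
    A toy sketch of single-kernel superposition with an isotropic, biexponentially parametrized point kernel. The kernel coefficients and voxel geometry below are illustrative placeholders, not the Monte Carlo-generated kernel data of the study.

```python
# Toy sketch: scatter dose by superposition of an isotropic point kernel that is
# parametrized as a biexponential in radial distance. Coefficients and geometry
# are illustrative placeholders, not fitted kernel data.
import numpy as np

def biexp_kernel(r, A1=0.8, mu1=0.25, A2=0.05, mu2=0.05):
    """Scatter-dose point kernel per unit energy released at radius r (cm)."""
    r = np.maximum(r, 1e-3)                       # avoid the singular origin voxel
    return (A1 * np.exp(-mu1 * r) + A2 * np.exp(-mu2 * r)) / (4 * np.pi * r**2)

def superpose(energy_released, voxel=0.2):
    """Direct superposition of the point kernel over all primary-interaction voxels."""
    shape = energy_released.shape
    grid = np.indices(shape).reshape(3, -1).T * voxel
    dose = np.zeros(energy_released.size)
    for idx, e in np.ndenumerate(energy_released):
        if e > 0:
            r = np.linalg.norm(grid - np.array(idx) * voxel, axis=1)
            dose += e * biexp_kernel(r)
    return dose.reshape(shape)

source = np.zeros((21, 21, 21))
source[10, 10, 10] = 1.0                          # a single point source of primary interactions
print(superpose(source)[10, 10, 12])              # scatter dose two voxels from the source
```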

  11. Alignment of single-case design (SCD) research with individuals who are deaf or hard of hearing with the what Works Clearinghouse standards for SCD research.

    Science.gov (United States)

    Wendel, Erica; Cawthon, Stephanie W; Ge, Jin Jin; Beretvas, S Natasha

    2015-04-01

    The authors assessed the quality of single-case design (SCD) studies that assess the impact of interventions on outcomes for individuals who are deaf or hard-of-hearing (DHH). More specifically, the What Works Clearinghouse (WWC) standards for SCD research were used to assess design quality and the strength of evidence of peer-reviewed studies available in the peer-reviewed, published literature. The analysis yielded four studies that met the WWC standards for design quality, of which two demonstrated moderate to strong evidence for efficacy of the studied intervention. Results of this review are discussed in light of the benefits and the challenges to applying the WWC design standards to research with DHH individuals and other diverse, low-incidence populations. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Single-Case Design Research: Building the Evidence-Base in the Field of Education of Deaf and Hard of Hearing Students.

    Science.gov (United States)

    Cannon, Joanna E; Guardino, Caroline; Antia, Shirin D; Luckner, John L

    2016-01-01

    The field of education of deaf and hard of hearing (DHH) students has a paucity of evidence-based practices (EBPs) to guide instruction. The authors discussed how the research methodology of single-case design (SCD) can be used to build EBPs through direct and systematic replication of studies. An overview of SCD research methods is presented, including an explanation of how internal and external validity issues are addressed, and why SCD is appropriate for intervention research with DHH children. The authors then examine the SCD research in the field according to quality indicators (QIs; at the individual level and as a body of evidence) to determine the existing evidence base. Finally, future replication areas are recommended to fill the gaps in SCD research with students who are DHH in order to add to the evidence base in the field.

  13. Unsupervised multiple kernel learning for heterogeneous data integration.

    Science.gov (United States)

    Mariette, Jérôme; Villa-Vialaneix, Nathalie

    2018-03-15

    Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has made it possible to gain important insights in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology since the produced datasets are often of heterogeneous types, which calls for generic methods that take their different specificities into account. We propose a multiple kernel framework that allows multiple datasets of various types to be integrated into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with regard to the original data. Second, the multi-omics breast cancer datasets provided by The Cancer Genome Atlas are analysed using kernel Self-Organizing Maps with both single- and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method for improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package, and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
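
    The consensus meta-kernel idea can be sketched in a few lines: compute one Gram matrix per data type, put them on a common scale, average them, and feed the result to a kernel PCA. The reference implementation is the R package mixKernel; the Python code below is only an illustration of the principle (the trace normalisation and the equal weights are assumptions), not that package's API.

```python
# Python illustration of a consensus meta-kernel: average Gram matrices computed
# on different data types (after a simple trace normalisation) and run kernel PCA
# on the result. This is not the mixKernel API; all data and weights are toy.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
omics_a = rng.normal(size=(50, 200))                      # e.g. continuous abundances
omics_b = rng.poisson(2.0, size=(50, 80)).astype(float)   # e.g. count data

def normalise(K):
    return K / np.trace(K)              # put the Gram matrices on a comparable scale

K_meta = 0.5 * normalise(rbf_kernel(omics_a)) + 0.5 * normalise(linear_kernel(omics_b))
scores = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K_meta)
print(scores[:3])                       # sample coordinates in the shared kernel PCA space
```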

  14. 7 CFR 981.9 - Kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  15. 7 CFR 51.2295 - Half kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  16. Proteome analysis of the almond kernel (Prunus dulcis).

    Science.gov (United States)

    Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu

    2016-08-01

    Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the nutritional importance and function of almond kernel proteins in human health require further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content of almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, and most were proteins that were experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that the proteins are involved mainly in primary biological processes including metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); the main molecular functions of almond kernel proteins are catalytic activity (48.0%), binding (45.4%) and structural molecule activity (11.9%); and the proteins are primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). The almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.

  17. Hard X-ray MCD in GdNi/sub 5/ and TbNi/sub 5/ single crystals

    CERN Document Server

    Galera, R M

    1999-01-01

    XMCD experiments have been performed at the R L/sub 2,3/ and Ni K-edges on magnetically saturated single crystals of the GdNi/sub 5/ and TbNi/sub 5/ ferromagnetic compounds. The spectra present huge and well structured dichroic signals at both the R L/sub 2,3/ and the Ni K-edges. Structures from the quadrupolar (2p to 4f) transitions are clearly observed at the R L/sub 2,3/-edges. Though Ni is not magnetic, large intensities, up to 0.4, are measured at the Ni K-edge. The Ni K-edge XMCD shows a three-peak structure whose intensities depend on the rare earth. (7 refs).

  18. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    . Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel...... version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...

  19. kernel oil by lipolytic organisms

    African Journals Online (AJOL)

    USER

    2010-08-02

    Aug 2, 2010 ... Rancidity of extracted cashew oil was observed with cashew kernel stored at 70, 80 and 90% .... method of American Oil Chemist Society AOCS (1978) using glacial ..... changes occur and volatile products are formed that are.

  20. Multineuron spike train analysis with R-convolution linear combination kernel.

    Science.gov (United States)

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
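
    A minimal sketch of the linear-combination construction: a single-neuron spike train kernel (here a simple Gaussian-smoothed spike-time inner product, chosen for illustration) is extended to simultaneously recorded neurons as a weighted sum over neurons. The weights are fixed here, whereas the paper treats them as parameters to optimise.

```python
# Sketch: a single-neuron spike train kernel (Gaussian-smoothed spike-time inner
# product) extended to multineuron recordings as a weighted linear combination
# over neurons. Weights are fixed and illustrative, not learned.
import numpy as np

def single_neuron_kernel(s1, s2, tau=0.02):
    """Sum of Gaussians over all spike-time pairs of two spike trains (seconds)."""
    if len(s1) == 0 or len(s2) == 0:
        return 0.0
    d = np.subtract.outer(np.asarray(s1), np.asarray(s2))
    return float(np.exp(-d**2 / (2 * tau**2)).sum())

def multineuron_kernel(trial_a, trial_b, weights=None):
    """trial_a, trial_b: lists of spike-time arrays, one entry per neuron."""
    if weights is None:
        weights = np.ones(len(trial_a)) / len(trial_a)
    return sum(w * single_neuron_kernel(a, b)
               for w, a, b in zip(weights, trial_a, trial_b))

trial1 = [np.array([0.01, 0.05, 0.12]), np.array([0.03, 0.08])]   # two neurons
trial2 = [np.array([0.02, 0.06]), np.array([0.04, 0.09, 0.11])]
print(round(multineuron_kernel(trial1, trial2), 3))
```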

  1. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...

  2. Notes on the gamma kernel

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model......, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green’s function.

  3. Searching remote homology with spectral clustering with symmetry in neighborhood cluster kernels.

    Directory of Open Access Journals (Sweden)

    Ujjwal Maulik

    Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles and the Markov clustering algorithms are currently the most popular methods for protein family recognition. The deviation from random walks with inflation, or the dependency on a hard threshold in the similarity measure, in those methods requires an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity for enhancing sensitivity in detecting homology independent of "recent" paralogs. The spectral clustering approach with new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally, reducing inter-cluster walks. When combined with corrections based on a modified symmetry-based proximity norm deemphasizing outliers, the technique proposed in this article outperforms other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also shows the superior performance scores provided by the proposed kernels. Similar performance improvement is also found over an existing large dataset. Therefore the proposed spectral clustering framework over combined local alignment kernels with modified symmetry-based correction achieves superior performance for unsupervised remote homolog detection, even in multi-domain and promiscuous-domain proteins from Genolevures database families, with better biological relevance. Source code available upon request: sarkar@labri.fr.

  4. Option Valuation with Volatility Components, Fat Tails, and Non-Monotonic Pricing Kernels

    DEFF Research Database (Denmark)

    Babaoglu, Kadir; Christoffersen, Peter; Heston, Steven L.

    We nest multiple volatility components, fat tails and a U-shaped pricing kernel in a single option model and compare their contribution to describing returns and option data. All three features lead to statistically significant model improvements. A U-shaped pricing kernel is economically most im...

  5. Influence Function and Robust Variant of Kernel Canonical Correlation Analysis

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2017-01-01

    Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...

  6. Kernel versions of some orthogonal transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced...... by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel...... function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...
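
    The kernel trick described above can be illustrated with a few lines of kernel PCA: only the Gram matrix of a kernel function is needed, so the nonlinear mapping is never formed explicitly. This is a generic textbook sketch, not the code behind the paper.

```python
# Generic kernel PCA sketch: everything is expressed through the Gram matrix of
# the kernel function, so the nonlinear mapping is never computed explicitly.
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * sq)                               # Gaussian kernel Gram matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                        # centring in feature space
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))  # projections onto kernel PCs

X = np.random.default_rng(0).normal(size=(100, 5))
print(kernel_pca(X).shape)                                # (100, 2)
```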

  7. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  8. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
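
    The cross-validation recipe can be sketched with scikit-learn's KernelRidge and a small grid over the kernel, its scale and the ridge penalty. The grid values below are illustrative, not the guidelines derived in the paper (and the Sinc kernel is not included, since it is not built into scikit-learn).

```python
# Cross-validated choice of kernel and tuning parameters for kernel ridge
# regression, sketched with scikit-learn. Grid values are illustrative only.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)

param_grid = [
    {"kernel": ["rbf"], "gamma": [0.1, 1.0, 10.0], "alpha": [1e-3, 1e-1, 1.0]},
    {"kernel": ["polynomial"], "degree": [2, 3], "alpha": [1e-3, 1e-1, 1.0]},
]
search = GridSearchCV(KernelRidge(), param_grid, cv=5).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```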

  9. Integral equations with contrasting kernels

    Directory of Open Access Journals (Sweden)

    Theodore Burton

    2008-01-01

    In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that if $a\in L^2[0,\infty)$, then solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well behaved function.

  10. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. It also focuses on the theoretical derivation, the system framework and experiments involving kernel-based face recognition. Included within are algorithms for kernel-based face recognition, and also the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning areas with advanced face recognition methods and its new

  11. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; here we augment the procedure to also...... tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics for choosing the model order and kernel scale in terms of signal-to-noise ratio (SNR
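
    A rough sketch of the permutation idea behind kPA: compare the eigenvalues of the centred Gram matrix of the data with those obtained after permuting each input column independently, and keep the components whose eigenvalues exceed the permutation quantile. The kernel scale is fixed here, whereas kPA also tunes it; all names and values are illustrative.

```python
# Sketch of the permutation test behind kernel Parallel Analysis: keep components
# whose centred-Gram-matrix eigenvalues exceed those of column-permuted data.
# The kernel scale is fixed here; kPA also selects it. Values are illustrative.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def centred_eigvals(X, gamma):
    K = rbf_kernel(X, gamma=gamma)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    return np.sort(np.linalg.eigvalsh(J @ K @ J))[::-1]

def kpa_order(X, gamma=0.1, n_perm=20, quantile=0.95, seed=0):
    rng = np.random.default_rng(seed)
    ev = centred_eigvals(X, gamma)
    null = np.empty((n_perm, len(X)))
    for p in range(n_perm):
        Xp = np.column_stack([rng.permutation(col) for col in X.T])  # break structure
        null[p] = centred_eigvals(Xp, gamma)
    threshold = np.quantile(null, quantile, axis=0)
    return int(np.sum(ev > threshold))

rng = np.random.default_rng(1)
signal = rng.normal(size=(150, 2)) @ rng.normal(size=(2, 10))   # low-rank structure
X = signal + 0.1 * rng.normal(size=(150, 10))
print("estimated model order:", kpa_order(X))
```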

  12. Optimizing memory-bound SYMV kernel on GPU hardware accelerators

    KAUST Repository

    Abdelfattah, Ahmad; Dongarra, Jack; Keyes, David E.; Ltaief, Hatem

    2013-01-01

    and increasing bandwidth, our preliminary asymptotic results show 3.5x and 2.5x fold speedups over the similar CUBLAS 4.0 kernel, and 7-8% and 30% fold improvement over the Matrix Algebra on GPU and Multicore Architectures (MAGMA) library in single and double

  13. Batched Triangular Dense Linear Algebra Kernels for Very Small Matrix Sizes on GPUs

    KAUST Repository

    Charara, Ali; Keyes, David E.; Ltaief, Hatem

    2017-01-01

    Batched dense linear algebra kernels are becoming ubiquitous in scientific applications, ranging from tensor contractions in deep learning to data compression in hierarchical low-rank matrix approximation. Within a single API call, these kernels are capable of simultaneously launching up to thousands of similar matrix computations, removing the expensive overhead of multiple API calls while increasing the occupancy of the underlying hardware. A challenge is that for the existing hardware landscape (x86, GPUs, etc.), only a subset of the required batched operations is implemented by the vendors, with limited support for very small problem sizes. We describe the design and performance of a new class of batched triangular dense linear algebra kernels on very small data sizes using single and multiple GPUs. By deploying two-sided recursive formulations, stressing the register usage, maintaining data locality, reducing threads synchronization and fusing successive kernel calls, the new batched kernels outperform existing state-of-the-art implementations.

  14. Batched Triangular Dense Linear Algebra Kernels for Very Small Matrix Sizes on GPUs

    KAUST Repository

    Charara, Ali

    2017-03-06

    Batched dense linear algebra kernels are becoming ubiquitous in scientific applications, ranging from tensor contractions in deep learning to data compression in hierarchical low-rank matrix approximation. Within a single API call, these kernels are capable of simultaneously launching up to thousands of similar matrix computations, removing the expensive overhead of multiple API calls while increasing the occupancy of the underlying hardware. A challenge is that for the existing hardware landscape (x86, GPUs, etc.), only a subset of the required batched operations is implemented by the vendors, with limited support for very small problem sizes. We describe the design and performance of a new class of batched triangular dense linear algebra kernels on very small data sizes using single and multiple GPUs. By deploying two-sided recursive formulations, stressing the register usage, maintaining data locality, reducing threads synchronization and fusing successive kernel calls, the new batched kernels outperform existing state-of-the-art implementations.

  15. RTOS kernel in portable electrocardiograph

    Science.gov (United States)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  16. RTOS kernel in portable electrocardiograph

    International Nuclear Information System (INIS)

    Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A

    2011-01-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  17. CdTe Timepix detectors for single-photon spectroscopy and linear polarimetry of high-flux hard x-ray radiation.

    Science.gov (United States)

    Hahn, C; Weber, G; Märtin, R; Höfer, S; Kämpfer, T; Stöhlker, Th

    2016-04-01

    Single-photon spectroscopy of pulsed, high-intensity sources of hard X-rays - such as laser-generated plasmas - is often hampered by the pileup of several photons absorbed by the unsegmented, large-volume sensors routinely used for the detection of high-energy radiation. Detectors based on the Timepix chip, with a segmentation pitch of 55 μm and the possibility to be equipped with high-Z sensor chips, constitute an attractive alternative to commonly used passive solutions such as image plates. In this report, we present energy calibration and characterization measurements of such devices. The achievable energy resolution is comparable to that of scintillators for γ spectroscopy. Moreover, we also introduce a simple two-detector Compton polarimeter setup with a polarimeter quality of (98 ± 1)%. Finally, a proof-of-principle polarimetry experiment is discussed, where we studied the linear polarization of bremsstrahlung emitted by a laser-driven plasma and found an indication of the X-ray polarization direction depending on the polarization state of the incident laser pulse.

  18. Semi-Supervised Kernel PCA

    DEFF Research Database (Denmark)

    Walder, Christian; Henao, Ricardo; Mørup, Morten

    We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within-class variances similar to Fisher discriminant analysis. The second, LSKPCA, is a hybrid of least squares regression and kernel PCA. The final, LR-KPCA, is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets.

  19. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated with all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study
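
    As a concrete illustration of the kind of procedure discussed in this record, the following sketch fits kernel ridge regression with a Gaussian kernel and selects the penalty and kernel width from a small grid by cross-validation. It is a minimal example on synthetic data, not the article's code; the grid values and data are invented for illustration.

    ```python
    # Minimal sketch: kernel ridge regression with a Gaussian kernel, tuning
    # (lambda, sigma) on a small grid by k-fold cross-validation.
    import numpy as np

    def gaussian_kernel(A, B, sigma):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def krr_fit_predict(X_tr, y_tr, X_te, lam, sigma):
        K = gaussian_kernel(X_tr, X_tr, sigma)
        alpha = np.linalg.solve(K + lam * np.eye(len(X_tr)), y_tr)
        return gaussian_kernel(X_te, X_tr, sigma) @ alpha

    def cv_select(X, y, lambdas, sigmas, k=5, seed=0):
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(len(X)), k)
        best, best_err = None, np.inf
        for lam in lambdas:
            for sigma in sigmas:
                err = 0.0
                for f in folds:
                    tr = np.setdiff1d(np.arange(len(X)), f)
                    pred = krr_fit_predict(X[tr], y[tr], X[f], lam, sigma)
                    err += ((pred - y[f]) ** 2).mean()
                if err < best_err:
                    best, best_err = (lam, sigma), err
        return best

    # Toy usage with invented data and grid values
    X = np.random.default_rng(1).normal(size=(80, 3))
    y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(2).normal(size=80)
    print(cv_select(X, y, lambdas=[1e-3, 1e-2, 1e-1], sigmas=[0.5, 1.0, 2.0]))
    ```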

  20. Multiple Kernel Learning with Data Augmentation

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:49–64, 2016, ACML 2016. Multiple Kernel Learning with Data Augmentation. Khanh Nguyen (nkhanh@deakin.edu.au), Deakin University, Australia. Editors: Robert J. Durrant and Kee-Eung Kim. Abstract: The motivations of the multiple kernel learning (MKL) approach are to increase kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to

  1. A kernel version of multivariate alteration detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

    Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.

  2. Complex use of cottonseed kernels

    Energy Technology Data Exchange (ETDEWEB)

    Glushenkova, A I

    1977-01-01

    A review with 41 references is made on the manufacture of oil, protein, and other products from cottonseed, the effects of gossypol on protein yield and quality and technology of gossypol removal. A process eliminating thermal treatment of the kernels and permitting the production of oil, proteins, phytin, gossypol, sugar, sterols, phosphatides, tocopherols, and residual shells and bagasse is described.

  3. Kernel regression with functional response

    OpenAIRE

    Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe

    2011-01-01

    We consider kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost complete convergence are stated as function of the small ball probability of the predictor and as function of the entropy of the set on which uniformity is obtained.
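
    For orientation, the classical scalar Nadaraya-Watson form of the kernel regression estimate is sketched below; the functional setting of the paper replaces the Euclidean distance |x - X_i| by a semi-metric between curves and the scalar responses by functional ones, but the weighted-average structure is the same. Data and bandwidth here are invented for the example.

    ```python
    # Illustrative Nadaraya-Watson kernel regression estimate (scalar case).
    import numpy as np

    def nadaraya_watson(x0, X, Y, h):
        # Gaussian kernel weights K((x0 - X_i)/h)
        w = np.exp(-0.5 * ((x0 - X) / h) ** 2)
        return (w * Y).sum() / w.sum()

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 2 * np.pi, 200)
    Y = np.sin(X) + 0.2 * rng.normal(size=200)
    print(nadaraya_watson(np.pi / 2, X, Y, h=0.3))  # close to sin(pi/2) = 1
    ```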

  4. GRIM : Leveraging GPUs for Kernel integrity monitoring

    NARCIS (Netherlands)

    Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris

    2016-01-01

    Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious

  5. Paramecium: An Extensible Object-Based Kernel

    NARCIS (Netherlands)

    van Doorn, L.; Homburg, P.; Tanenbaum, A.S.

    1995-01-01

    In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection

  6. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  7. Veto-Consensus Multiple Kernel Learning

    NARCIS (Netherlands)

    Zhou, Y.; Hu, N.; Spanos, C.J.

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The

  8. Influence of soft kernel texture on the flour and baking quality of durum wheat

    Science.gov (United States)

    Durum wheat is predominantly grown in semi-arid to arid environments where common wheat does not flourish, especially in the Middle East, North Africa, Mediterranean Basin, and portions of North America. Durum kernels are extraordinarily hard when compared to their common wheat counterparts. Due to ...

  9. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    Full Text Available Because the kernel function of an extreme learning machine (ELM) is strongly correlated with its performance, a novel extreme learning machine based on a generalized triangle Hermitian kernel function is proposed in this paper. First, the generalized triangle Hermitian kernel function was constructed as the product of a triangular kernel and a generalized Hermite Dirichlet kernel, and the proposed kernel function was proved to be a valid kernel function for the extreme learning machine. Then, the learning methodology of the extreme learning machine based on the proposed kernel function is presented. The biggest advantage of the proposed kernel is that its kernel parameter values are chosen only from the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the sample data structure information. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experimental results demonstrate that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
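
    A rough sketch of how a product ("mixed") kernel plugs into a kernel-based ELM is given below, using the closed form f(x) = k(x)ᵀ(I/C + K)⁻¹T common in kernel-ELM formulations. The triangular factor is a simple hat kernel and the second factor is a Gaussian placeholder standing in for the generalized Hermite Dirichlet kernel defined in the paper; kernel parameters and data are invented, so this is not the authors' construction.

    ```python
    # Sketch of a kernel-based ELM with a product ("mixed") kernel.
    import numpy as np

    def pairwise_dist(A, B):
        return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

    def k_triangular(A, B, c=2.0):                 # hat kernel, parameter c
        return np.maximum(0.0, 1.0 - pairwise_dist(A, B) / c)

    def k_gaussian(A, B, s=1.0):                   # placeholder second factor
        return np.exp(-pairwise_dist(A, B) ** 2 / (2 * s ** 2))

    def mixed_kernel(A, B):
        return k_triangular(A, B) * k_gaussian(A, B)

    def kernel_elm_fit(X, T, C=10.0):
        K = mixed_kernel(X, X)
        return np.linalg.solve(np.eye(len(X)) / C + K, T)   # output weights

    def kernel_elm_predict(X_new, X, beta):
        return mixed_kernel(X_new, X) @ beta

    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 2))
    T = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]       # toy labels
    beta = kernel_elm_fit(X, T)
    pred = kernel_elm_predict(X, X, beta) > 0.5
    print((pred.ravel() == T.ravel().astype(bool)).mean())   # training accuracy
    ```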

  10. Consistent Valuation across Curves Using Pricing Kernels

    Directory of Open Access Journals (Sweden)

    Andrea Macrina

    2018-03-01

    Full Text Available The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets). As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised, and single-curve models are extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.

  11. Pareto-path multitask multiple kernel learning.

    Science.gov (United States)

    Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2015-01-01

    A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.

  12. Elastic modulus, hardness and fracture behavior of Pb(Zn1/3Nb2/3)O3-PbTiO3 single crystal

    International Nuclear Information System (INIS)

    Zeng Kaiyang; Pang Yongsong; Shen Lu; Rajan, K.K.; Lim, Leong-Chew

    2008-01-01

    The deformation, crack initiation, fracture behavior and mechanical properties of (0 0 1)-oriented single crystal of Pb(Zn1/3Nb2/3)O3-7%PbTiO3 (PZN-7%PT) in both unpoled and poled states have been investigated by using nanoindentation, micro-indentation and three-point bending experiments. Nanoindentation experiments revealed that, unlike typical brittle materials, material pile-ups around the indentation impressions were commonly observed at ultra-low loads. The elastic modulus and hardness were also determined by using nanoindentation experiments. The critical indentation load for crack initiation, determined by using micro-indentation experiments, is 0.135 N for unpoled samples, increasing to 0.465 N for the positive surface (crack propagation direction against the poling direction) of poled samples but decreasing slightly to 0.132 N for the negative surface (crack propagation direction along the poling direction) of the poled samples. Indentation/strength (three-point bend) test showed a similar trend for the 'apparent' fracture toughness, giving 0.36 MPa√m for unpoled samples, increasing to 0.44 MPa√m for the positive surface of poled samples but decreasing to 0.30 MPa√m for the negative surface of poled samples. Polarized light microscopy and scanning electron microscopy were used to study the material adjacent to the indentations and the fracture surfaces produced by the three-point bend tests. The results were correlated with the various fracture properties observed.

  13. Caught between a rock and a hard place: An intrinsic single case study of nurse researchers' experiences of the presence of a nursing research culture in clinical practice.

    Science.gov (United States)

    Berthelsen, Connie Bøttcher; Hølge-Hazelton, Bibi

    2018-04-01

    To explore how nurse researchers in clinical positions experience the presence of a nursing research culture in clinical practice. Higher demands in the hospitals for increasing the quality of patient care engender a higher demand for the skills of health professionals and evidence-based practice. However, the utilisation of nursing research in clinical practice is still limited. Intrinsic single case study design underpinned by a constructivist perspective. Data were produced through a focus group interview with seven nurse researchers employed in clinical practice in two university hospitals in Zealand, Denmark, to capture the intrinsic aspects of the concept of nursing research culture in the context of clinical practice. A thematic analysis was conducted based on Braun and Clarke's theoretical guideline. "Caught between a rock and a hard place" was constructed as the main theme describing how nurse researchers in clinical positions experience the presence of a nursing research culture in clinical practice. The main theme was supported by three subthemes: Minimal academic tradition affects nursing research; Minimal recognition from physicians affects nursing research; and Moving towards a research culture. The nurse researchers in this study did not experience the presence of a nursing research culture in clinical practice; however, they called for more attention to removing barriers against research utilisation, promotion of applied research and interdisciplinary research collaboration, and passionate management support. The results of this case study show the pressure which nurse researchers employed in clinical practice are exposed to, and give examples of how to accommodate the further development of a nursing research culture in clinical practice. © 2017 John Wiley & Sons Ltd.

  14. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    Science.gov (United States)

    2013-01-01

    Background: Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results: We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions: It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
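
    The following toy sketch shows the common kernel-regression setup the study builds on: a Gaussian kernel computed from SNP genotypes (coded 0/1/2) and a ridge-type solve for phenotype prediction. The diffusion kernel examined in the paper would simply replace the kernel matrix K; its discrete-input definition and the Bayesian treatment used by the authors are not reproduced here. Data, bandwidth and the regularization value are invented.

    ```python
    # Sketch: kernel ridge prediction of phenotypes from SNP markers with a
    # Gaussian kernel; K could be swapped for a diffusion kernel matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.integers(0, 3, size=(300, 1000)).astype(float)        # toy genotypes
    y = M[:, :10].sum(axis=1) + rng.normal(scale=2.0, size=300)   # toy phenotype

    def gaussian_kernel(A, B, theta):
        d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-theta * d2 / A.shape[1])

    train, test = np.arange(250), np.arange(250, 300)
    K = gaussian_kernel(M, M, theta=1.0)
    lam = 1.0
    alpha = np.linalg.solve(K[np.ix_(train, train)] + lam * np.eye(len(train)),
                            y[train])
    y_hat = K[np.ix_(test, train)] @ alpha
    print(np.corrcoef(y_hat, y[test])[0, 1])   # predictive correlation
    ```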

  15. Viscozyme L pretreatment on palm kernels improved the aroma of palm kernel oil after kernel roasting.

    Science.gov (United States)

    Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan

    2018-05-01

    With an interest to enhance the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and simple sugar profile was estimated by using partial least square regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from that of control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Wigner functions defined with Laplace transform kernels.

    Science.gov (United States)

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels--Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polariton. © 2011 Optical Society of America

  17. Credit scoring analysis using kernel discriminant

    Science.gov (United States)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracy was compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
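
    One common reading of such a kernel discriminant is a classifier built from class-conditional kernel density estimates (with a normal, Epanechnikov, biweight or triweight kernel) combined through Bayes' rule; the exact estimator used in the paper may differ. The sketch below follows that reading on simulated 'good'/'bad' applicant data, with an invented bandwidth.

    ```python
    # Illustrative kernel (density) discriminant: class-conditional product
    # KDEs plus prior-weighted comparison.
    import numpy as np

    def epanechnikov(u):
        return 0.75 * np.clip(1 - u**2, 0, None)

    def kde(x0, X, h, K):
        # product kernel over features, averaged over training points
        u = (x0[None, :] - X) / h
        return np.mean(np.prod(K(u), axis=1)) / h**X.shape[1]

    def classify(x0, X0, X1, h=0.8, K=epanechnikov):
        n = len(X0) + len(X1)
        p0 = len(X0) / n * kde(x0, X0, h, K)
        p1 = len(X1) / n * kde(x0, X1, h, K)
        return int(p1 > p0)

    rng = np.random.default_rng(0)
    good = rng.normal(0.0, 1.0, size=(200, 2))    # toy "good" applicants
    bad = rng.normal(1.5, 1.0, size=(100, 2))     # toy "bad" applicants
    print(classify(np.array([1.4, 1.6]), good, bad))   # 1 -> classified as bad risk
    ```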

  18. Testing Infrastructure for Operating System Kernel Development

    DEFF Research Database (Denmark)

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel.

  19. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence ... of the kernel width. The 2,097 samples, each covering on average 5 km², are analyzed chemically for the content of 41 elements.

  20. Standard hardness conversion tables for metals: relationship among Brinell hardness, Vickers hardness, Rockwell hardness, superficial hardness, Knoop hardness, and Scleroscope hardness

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 Conversion Table 1 presents data in the Rockwell C hardness range on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, Knoop hardness, and Scleroscope hardness of non-austenitic steels including carbon, alloy, and tool steels in the as-forged, annealed, normalized, and quenched and tempered conditions provided that they are homogeneous. 1.2 Conversion Table 2 presents data in the Rockwell B hardness range on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, Knoop hardness, and Scleroscope hardness of non-austenitic steels including carbon, alloy, and tool steels in the as-forged, annealed, normalized, and quenched and tempered conditions provided that they are homogeneous. 1.3 Conversion Table 3 presents data on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, and Knoop hardness of nickel and high-nickel alloys (nickel content o...

  1. Hard coatings

    International Nuclear Information System (INIS)

    Dan, J.P.; Boving, H.J.; Hintermann, H.E.

    1993-01-01

    Hard, wear-resistant and low-friction coatings are presently produced on a world-wide basis by different processes such as electrochemical or electroless methods, spray technologies, thermochemical, CVD and PVD. Some of the most advanced processes, especially those dedicated to thin film deposition, basically belong to the CVD or PVD technologies and will be looked at in more detail. The hard coatings mainly consist of oxides, nitrides, carbides, borides or carbon. Over the years, many processes have been developed which are variations and/or combinations of the basic CVD and PVD methods. The main difference between these two families of deposition techniques is that CVD is an elevated-temperature process (≥ 700 °C), while PVD, on the contrary, is rather a low-temperature process (≤ 500 °C); this of course influences the choice of substrates and the properties of the coating/substrate systems. Fundamental aspects of the vapor phase deposition techniques and some of their influences on coating properties will be discussed, as well as the very important interactions between deposit and substrate: diffusion, internal stress, etc. Advantages and limitations of CVD and PVD, respectively, will briefly be reviewed and examples of applications of the layers will be given. Parallel to the development and permanent updating of surface modification technologies, an effort was made to create novel characterisation methods. A close look will be given to the control of coating adherence by means of the scratch test, to the measurement of coating hardness by means of nanoindentation, to the assessment of coating wear resistance by means of a pin-on-disc tribometer, and to the evaluation of surface quality by Atomic Force Microscopy (AFM). Finally, the main important trends will be highlighted. (orig.)

  2. Validation of Born Traveltime Kernels

    Science.gov (United States)

    Baig, A. M.; Dahlen, F. A.; Hung, S.

    2001-12-01

    Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first order dependence of heterogeneity on the cross-correlated traveltimes (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of traveltime compare to various theoretical predictions in a given regime.

  3. RKRD: Runtime Kernel Rootkit Detection

    Science.gov (United States)

    Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.

    In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.

  4. Kernel Bayesian ART and ARTMAP.

    Science.gov (United States)

    Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan

    2018-02-01

    Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model called ARTMAP is a powerful tool for classification. Among several improvements, such as Fuzzy- or Gaussian-based models, the state-of-the-art model is the Bayesian-based one, which solves the drawbacks of the others. However, it is known that the Bayesian approach for high-dimensional data and large numbers of samples requires a high computational cost, and the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating Kernel Bayes' Rule (KBR) and the Correntropy Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM are able to avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computational capability to KBA and KBAM. Furthermore, the correntropy-based similarity measurement improves the noise reduction ability even in the high-dimensional space. The simulation experiments show that KBA has a better self-organizing capability than BA, and KBAM provides superior classification ability compared to BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Genetic dissection of the maize kernel development process via conditional QTL mapping for three developing kernel-related traits in an immortalized F2 population.

    Science.gov (United States)

    Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua

    2016-02-01

    Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22 and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.

  6. A framework for dense triangular matrix kernels on various manycore architectures

    KAUST Repository

    Charara, Ali

    2017-06-06

    We present a new high-performance framework for dense triangular Basic Linear Algebra Subroutines (BLAS) kernels, i.e., triangular matrix-matrix multiplication (TRMM) and triangular solve (TRSM), on various manycore architectures. This is an extension of a previous work on a single GPU by the same authors, presented at the EuroPar'16 conference, in which we demonstrated the effectiveness of recursive formulations in enhancing the performance of these kernels. In this paper, the performance of triangular BLAS kernels on a single GPU is further enhanced by implementing customized in-place CUDA kernels for TRMM and TRSM, which are called at the bottom of the recursion. In addition, a multi-GPU implementation of TRMM and TRSM is proposed and we show an almost linear performance scaling, as the number of GPUs increases. Finally, the algorithmic recursive formulation of these triangular BLAS kernels is in fact oblivious to the targeted hardware architecture. We, therefore, port these recursive kernels to homogeneous x86 hardware architectures by relying on the vendor optimized BLAS implementations. Results reported on various hardware architectures highlight a significant performance improvement against state-of-the-art implementations. These new kernels are freely available in the KAUST BLAS (KBLAS) open-source library at https://github.com/ecrc/kblas.
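
    To make the recursive formulation concrete, here is a rough NumPy illustration of a recursive triangular solve (TRSM with a lower-triangular L): split L into quadrants, solve the leading block, update the right-hand side with a matrix multiply (the GEMM-rich part that gives the approach its performance), and recurse. This is only a sketch of the idea; KBLAS itself provides tuned CUDA and BLAS-based kernels, and the base case below is a plain dense solve.

    ```python
    # Recursive lower-triangular solve, L @ X = B (illustration only).
    import numpy as np

    def rec_trsm_lower(L, B, block=64):
        n = L.shape[0]
        if n <= block:
            return np.linalg.solve(L, B)       # base case (stand-in kernel)
        k = n // 2
        X1 = rec_trsm_lower(L[:k, :k], B[:k], block)
        # update the trailing right-hand side, then recurse on it
        X2 = rec_trsm_lower(L[k:, k:], B[k:] - L[k:, :k] @ X1, block)
        return np.vstack([X1, X2])

    rng = np.random.default_rng(0)
    L = np.tril(rng.normal(size=(300, 300))) + 300 * np.eye(300)
    B = rng.normal(size=(300, 4))
    X = rec_trsm_lower(L, B)
    print(np.allclose(L @ X, B))   # True
    ```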

  7. A framework for dense triangular matrix kernels on various manycore architectures

    KAUST Repository

    Charara, Ali; Keyes, David E.; Ltaief, Hatem

    2017-01-01

    We present a new high-performance framework for dense triangular Basic Linear Algebra Subroutines (BLAS) kernels, i.e., triangular matrix-matrix multiplication (TRMM) and triangular solve (TRSM), on various manycore architectures. This is an extension of a previous work on a single GPU by the same authors, presented at the EuroPar'16 conference, in which we demonstrated the effectiveness of recursive formulations in enhancing the performance of these kernels. In this paper, the performance of triangular BLAS kernels on a single GPU is further enhanced by implementing customized in-place CUDA kernels for TRMM and TRSM, which are called at the bottom of the recursion. In addition, a multi-GPU implementation of TRMM and TRSM is proposed and we show an almost linear performance scaling, as the number of GPUs increases. Finally, the algorithmic recursive formulation of these triangular BLAS kernels is in fact oblivious to the targeted hardware architecture. We, therefore, port these recursive kernels to homogeneous x86 hardware architectures by relying on the vendor optimized BLAS implementations. Results reported on various hardware architectures highlight a significant performance improvement against state-of-the-art implementations. These new kernels are freely available in the KAUST BLAS (KBLAS) open-source library at https://github.com/ecrc/kblas.

  8. Light energy transmission and Vickers hardness ratio of bulk-fill resin based composites at different thicknesses cured by a dual-wave or a single-wave light curing unit.

    Science.gov (United States)

    Santini, Ario; Naaman, Reem Khalil; Aldossary, Mohammed Saeed

    2017-04-01

    To quantify light energy transmission through two bulk-fill resin-based composites and to measure the top-to-bottom surface Vickers hardness ratio (VHratio) of samples of various incremental thicknesses, using either a single-wave or a dual-wave light curing unit (LCU). Tetric EvoCeram Bulk Fill (TECBF) and SonicFill (SF) were studied. Using MARC-RC, the irradiance delivered to the top surface of samples 2, 3, 4 and 5 mm thick (n = 5 for each thickness) was adjusted to 800 mW/cm2 for 20 seconds (16 J/cm2), using either a single-wave (Bluephase) or a dual-wave (Bluephase G2) LCU. Light energy transmission through to the bottom surface of the specimens was measured in real time using MARC-RC. The Vickers hardness (VH) was determined using a Vickers microhardness tester and the VHratio was calculated. Data were analyzed using a general linear model in Minitab 16; α = 0.05. TECBF was more translucent than SF (P < 0.05 […]wave Bluephase G2). SF showed a significantly higher VHratio than TECBF at all thickness levels (P < 0.05). TECBF showed a significantly greater VHratio when cured with the single-wave Bluephase than when using the dual-wave Bluephase G2 (P < 0.05). Light energy transmission through to the bottom surface and the VHratio are material dependent.
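
    For readers unfamiliar with the quantities involved, the sketch below applies the standard Vickers definition HV = 1.8544·F/d² (load F in kgf, mean indent diagonal d in mm) and forms the bottom-to-top hardness ratio commonly used as a depth-of-cure indicator; the load and diagonals are hypothetical values, not data from this study.

    ```python
    # Worked Vickers hardness example with hypothetical indent measurements.
    def vickers_hardness(force_kgf, mean_diagonal_mm):
        # standard Vickers hardness number
        return 1.8544 * force_kgf / mean_diagonal_mm ** 2

    hv_top = vickers_hardness(0.3, 0.075)       # hypothetical top-surface indent
    hv_bottom = vickers_hardness(0.3, 0.085)    # hypothetical bottom-surface indent
    print(round(hv_top, 1), round(hv_bottom, 1), round(hv_bottom / hv_top, 2))
    ```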

  9. Optimizing Multiple Kernel Learning for the Classification of UAV Data

    Directory of Open Access Journals (Sweden)

    Caroline M. Gevaert

    2016-12-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) are capable of providing high-quality orthoimagery and 3D information in the form of point clouds at a relatively low cost. Their increasing popularity stresses the necessity of understanding which algorithms are especially suited for processing the data obtained from UAVs. The features that are extracted from the point cloud and imagery have different statistical characteristics and can be considered as heterogeneous, which motivates the use of Multiple Kernel Learning (MKL) for classification problems. In this paper, we illustrate the utility of applying MKL for the classification of heterogeneous features obtained from UAV data through a case study of an informal settlement in Kigali, Rwanda. Results indicate that MKL can achieve a classification accuracy of 90.6%, a 5.2% increase over a standard single-kernel Support Vector Machine (SVM). A comparison of seven MKL methods indicates that linearly-weighted kernel combinations based on simple heuristics are competitive with respect to computationally-complex, non-linear kernel combination methods. We further underline the importance of utilizing appropriate feature grouping strategies for MKL, which has not been directly addressed in the literature, and we propose a novel, automated feature grouping method that achieves a high classification accuracy for various MKL methods.
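
    A linearly-weighted kernel combination of the kind compared in the paper can be sketched as follows: one RBF kernel per feature group, a fixed heuristic weight per kernel, and a support vector machine trained on the combined precomputed kernel. The feature groups, weights and data below are invented for illustration and do not reproduce the UAV case study.

    ```python
    # Weighted multiple-kernel SVM sketch on simulated grouped features.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_spec = rng.normal(size=(200, 5))        # e.g. spectral features (toy)
    X_geom = rng.normal(size=(200, 3))        # e.g. 3D/geometric features (toy)
    y = (X_spec[:, 0] + X_geom[:, 0] > 0).astype(int)

    groups = [X_spec, X_geom]
    weights = [0.5, 0.5]                       # simple uniform heuristic
    K = sum(w * rbf_kernel(G, G, gamma=1.0 / G.shape[1])
            for w, G in zip(weights, groups))

    clf = SVC(kernel="precomputed").fit(K, y)
    print(clf.score(K, y))                     # training accuracy of the sketch
    ```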

  10. Characterisation and final disposal behaviour of thoria-based fuel kernels in aqueous phases

    International Nuclear Information System (INIS)

    Titov, M.

    2005-08-01

    Two high-temperature reactors (AVR and THTR) operated in Germany have produced about 1 million spent fuel elements. The nuclear fuel in these reactors consists mainly of thorium-uranium mixed oxides, but pure uranium dioxide and carbide fuels were also tested. One of the possible solutions for utilising spent HTR fuel is direct disposal in deep geological formations. Under such circumstances, the properties of fuel kernels, and especially their leaching behaviour in aqueous phases, have to be investigated for safety assessments of the final repository. In the present work, unirradiated ThO2, (Th0.906,U0.094)O2, (Th0.834,U0.166)O2 and UO2 fuel kernels were investigated. The composition, crystal structure and surface of the kernels were investigated by traditional methods. Furthermore, a new method was developed for testing the mechanical properties of ceramic kernels. The method was successfully used for the examination of the mechanical properties of oxide kernels and for monitoring their evolution during contact with aqueous phases. The leaching behaviour of thoria-based oxide kernels and powders was investigated in repository-relevant salt solutions, as well as in artificial leachates. The influence of different experimental parameters on the kernel leaching stability was investigated. It was shown that thoria-based fuel kernels possess high chemical stability and are indifferent to the presence of oxidative and radiolytic species in solution. The dissolution rate of thoria-based materials is typically several orders of magnitude lower than that of conventional UO2 fuel kernels. The lifetime of a single intact (Th,U)O2 kernel under the aggressive conditions of a salt repository was estimated as about one hundred thousand years. The importance of grain boundary quality for the leaching stability was demonstrated. Numerical Monte Carlo simulations were performed in order to explain the results of the leaching experiments. (orig.)

  11. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  12. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří

    2016-02-12

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  13. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří; Barton, Michael

    2016-01-01

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  14. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially.
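
    The idea can be illustrated on simulated two-date data: stack the two acquisitions as columns, run kernel PCA with a Gaussian (RBF) kernel, and look for the leading component on which the changed observations separate from the no-change background (with ordinary PCA on two bands this is typically the second component). The data, kernel width and number of components below are invented; this is not the SPOT or Nevada case study.

    ```python
    # Toy kernel PCA change-detection illustration on simulated data.
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(0)
    t1 = rng.normal(size=2000)                      # band at time 1 (simulated)
    t2 = t1 + 0.05 * rng.normal(size=2000)          # time 2, mostly unchanged
    t2[:50] += 3.0                                  # a few changed observations
    X = np.column_stack([t1, t2])

    kpca = KernelPCA(n_components=8, kernel="rbf", gamma=0.5)
    scores = kpca.fit_transform(X)

    # which kernel component separates changed from unchanged observations most
    sep = np.abs(scores[:50].mean(axis=0) - scores[50:].mean(axis=0))
    print(int(sep.argmax()), float(sep.max()))
    ```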

  15. The Conserved and Unique Genetic Architecture of Kernel Size and Weight in Maize and Rice.

    Science.gov (United States)

    Liu, Jie; Huang, Juan; Guo, Huan; Lan, Liu; Wang, Hongze; Xu, Yuancheng; Yang, Xiaohong; Li, Wenqiang; Tong, Hao; Xiao, Yingjie; Pan, Qingchun; Qiao, Feng; Raihan, Mohammad Sharif; Liu, Haijun; Zhang, Xuehai; Yang, Ning; Wang, Xiaqing; Deng, Min; Jin, Minliang; Zhao, Lijun; Luo, Xin; Zhou, Yang; Li, Xiang; Zhan, Wei; Liu, Nannan; Wang, Hong; Chen, Gengshen; Li, Qing; Yan, Jianbing

    2017-10-01

    Maize (Zea mays) is a major staple crop. Maize kernel size and weight are important contributors to its yield. Here, we measured kernel length, kernel width, kernel thickness, hundred kernel weight, and kernel test weight in 10 recombinant inbred line populations and dissected their genetic architecture using three statistical models. In total, 729 quantitative trait loci (QTLs) were identified, many of which were identified in all three models, including 22 major QTLs that each can explain more than 10% of phenotypic variation. To provide candidate genes for these QTLs, we identified 30 maize genes that are orthologs of 18 rice (Oryza sativa) genes reported to affect rice seed size or weight. Interestingly, 24 of these 30 genes are located in the identified QTLs or within 1 Mb of the significant single-nucleotide polymorphisms. We further confirmed the effects of five genes on maize kernel size/weight in an independent association mapping panel with 540 lines by candidate gene association analysis. Lastly, the function of ZmINCW1, a homolog of rice GRAIN INCOMPLETE FILLING1 that affects seed size and weight, was characterized in detail. ZmINCW1 is close to QTL peaks for kernel size/weight (less than 1 Mb) and contains significant single-nucleotide polymorphisms affecting kernel size/weight in the association panel. Overexpression of this gene can rescue the reduced weight of the Arabidopsis (Arabidopsis thaliana) homozygous mutant line in the AtcwINV2 gene (Arabidopsis ortholog of ZmINCW1). These results indicate that the molecular mechanisms affecting seed development are conserved in maize, rice, and possibly Arabidopsis. © 2017 American Society of Plant Biologists. All Rights Reserved.

  16. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.

  17. Process for producing metal oxide kernels and kernels so obtained

    International Nuclear Information System (INIS)

    Lelievre, Bernard; Feugier, Andre.

    1974-01-01

    The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high temperature nuclear reactors. This process consists in adding to an aqueous solution of at least one metallic salt, particularly actinide nitrates, at least one chemical compound capable of releasing ammonia, in dispersing drop by drop the solution thus obtained into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gel reaction is formed of a mixture composed of two organic liquids, one acting as solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably an amine is used as the product capable of extracting the anions. Additionally, an alcohol that causes a partial dehydration of the drops can be employed as solvent, thus helping to increase the resistance of the particles [fr

  18. Hilbertian kernels and spline functions

    CERN Document Server

    Atteia, M

    1992-01-01

    In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline function theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from the minimization of a quadratic functional but more generally for splines considered as piecewise functions. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  19. Dense Medium Machine Processing Method for Palm Kernel/ Shell ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.

  20. Mitigation of artifacts in RTM with migration kernel decomposition

    KAUST Repository

    Zhan, Ge; Schuster, Gerard T.

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently

  1. Ranking Support Vector Machine with Kernel Approximation

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

    Full Text Available Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
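
    Of the two approximations mentioned, random Fourier features are the easier to sketch: an explicit feature map z(x) whose inner products approximate a Gaussian kernel, after which the ranking model can be trained as a linear model on z(X) instead of on a full kernel matrix. The sketch below only checks the kernel approximation on toy data; dimensions and the kernel width are invented.

    ```python
    # Random Fourier feature approximation of the Gaussian (RBF) kernel.
    import numpy as np

    def rff(X, D=500, gamma=0.5, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
        b = rng.uniform(0, 2 * np.pi, size=D)
        return np.sqrt(2.0 / D) * np.cos(X @ W + b)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 10))
    Z = rff(X)
    K_exact = np.exp(-0.5 * ((X[:, None] - X[None, :]) ** 2).sum(-1))
    # approximation error shrinks as D grows
    print(np.abs(Z @ Z.T - K_exact).max())
    ```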

  2. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.

  3. Sentiment classification with interpolated information diffusion kernels

    NARCIS (Netherlands)

    Raaijmakers, S.

    2007-01-01

    Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of

  4. Evolution kernel for the Dirac field

    International Nuclear Information System (INIS)

    Baaquie, B.E.

    1982-06-01

    The evolution kernel for the free Dirac field is calculated using the Wilson lattice fermions. We discuss the difficulties due to which this calculation has not been previously performed in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)

  5. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  6. Improving the Bandwidth Selection in Kernel Equating

    Science.gov (United States)

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
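
    For reference, the classical Silverman rule of thumb for a Gaussian kernel density estimate is h = 0.9·min(s, IQR/1.34)·n^(-1/5); the article adapts this idea to the kernel equating setting, and that exact adaptation is not reproduced in the sketch below. The toy score distribution is invented.

    ```python
    # Classical Silverman rule-of-thumb bandwidth for a Gaussian KDE.
    import numpy as np

    def silverman_bandwidth(x):
        x = np.asarray(x, dtype=float)
        n = x.size
        iqr = np.subtract(*np.percentile(x, [75, 25]))
        return 0.9 * min(x.std(ddof=1), iqr / 1.34) * n ** (-1 / 5)

    scores = np.random.default_rng(0).binomial(40, 0.6, size=500)  # toy test scores
    print(round(silverman_bandwidth(scores), 3))
    ```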

  7. Kernel Korner : The Linux keyboard driver

    NARCIS (Netherlands)

    Brouwer, A.E.

    1995-01-01

    Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the

  8. Multiple kernel SVR based on the MRE for remote sensing water depth fusion detection

    Science.gov (United States)

    Wang, Jinjin; Ma, Yi; Zhang, Jingyu

    2018-03-01

    Remote sensing is an important means of water depth detection in coastal shallow waters and reefs. Support vector regression (SVR) is a machine learning method which is widely used in data regression. In this paper, SVR is applied to remote sensing multispectral bathymetry. To address the problem that single-kernel SVR methods have a large error in shallow-water depth inversion, the mean relative error (MRE) at different water depths is used as a decision fusion factor for the single-kernel SVR methods, and a multi-kernel SVR fusion method based on the MRE is put forward. Taking the North Island of the Xisha Islands in China as the experimental area, comparison experiments with the single-kernel SVR methods and the traditional multi-band bathymetric method are carried out. The results show that: 1) in the range of 0 to 25 meters, the mean absolute error (MAE) of the multi-kernel SVR fusion method is 1.5 m and the MRE is 13.2%; 2) compared to the four single-kernel SVR methods, the MRE of the fusion method is reduced by 1.2%, 1.9%, 3.4% and 1.8%, respectively, and compared to the traditional multi-band method, the MRE is reduced by 1.9%; 3) in the 0-5 m depth section, compared to the single-kernel methods and the multi-band method, the MRE of the fusion method is reduced by 13.5% to 44.4%, and the distribution of points is more concentrated around the line y = x.
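
    A much-simplified sketch of the fusion idea follows: several single-kernel SVRs are trained, their mean relative errors (MRE) are measured on held-out data, and the predictions are fused with weights that favour the lower-MRE models. The paper computes MREs per depth interval and uses them as depth-dependent fusion factors; the sketch uses one global MRE per model and simulated data for brevity.

    ```python
    # MRE-weighted fusion of single-kernel SVR predictors (simplified sketch).
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(300, 4))                  # toy band reflectances
    depth = 25 * X[:, 0] * (1 - 0.3 * X[:, 1]) + rng.normal(0, 0.5, 300)
    tr, va = np.arange(200), np.arange(200, 300)

    models = [SVR(kernel=k) for k in ("rbf", "linear", "poly", "sigmoid")]
    preds, weights = [], []
    for m in models:
        m.fit(X[tr], depth[tr])
        p = m.predict(X[va])
        mre = np.mean(np.abs(p - depth[va]) / np.maximum(np.abs(depth[va]), 1e-6))
        preds.append(p)
        weights.append(1.0 / mre)                          # lower MRE -> higher weight

    weights = np.array(weights) / np.sum(weights)
    fused = np.tensordot(weights, np.array(preds), axes=1)
    print(np.mean(np.abs(fused - depth[va])))              # MAE of the fused estimate
    ```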

  9. Metabolic network prediction through pairwise rational kernels.

    Science.gov (United States)

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted from incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amounts of sequence information such as protein essentiality, natural language processing and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels (PRKs)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVMs to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy

  10. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Directory of Open Access Journals (Sweden)

    Chunmei Liu

    2016-01-01

    Full Text Available This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain the low-dimensional shape space, which is trained by training data with the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space obtained by nonlinear manifold learning and constructs the adaptive kernel shape in the high-dimensional shape space. This can improve the mean shift tracker's performance in tracking the object position and object contour and avoiding background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking the human position and describing the human contour.

  11. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Science.gov (United States)

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain the low-dimensional shape space, which is trained by training data with the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space obtained by nonlinear manifold learning and constructs the adaptive kernel shape in the high-dimensional shape space. This can improve the mean shift tracker's performance in tracking the object position and object contour and avoiding background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking the human position and describing the human contour. PMID:27379165

  12. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    Science.gov (United States)

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, the current kernel learning approaches are based on local optimization techniques, and it is hard for them to achieve good time performance, especially for large datasets. Thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. By using a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. The objective programming problem can then be converted to a SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the searching procedure with different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets, and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
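    The global (d.c.) optimization described above is not reproduced here; the sketch below only illustrates the kernel-target alignment criterion that the method optimizes, with a plain grid search over the Gaussian width standing in for the SSGO solver. The toy two-class data and the gamma range are invented for the example.

```python
import numpy as np

def gaussian_gram(X, gamma):
    """Gaussian Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-gamma * d2)

def alignment(K, y):
    """Kernel-target alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F)."""
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
    y = np.array([-1] * 50 + [1] * 50)
    gammas = np.logspace(-3, 2, 20)
    scores = [alignment(gaussian_gram(X, g), y) for g in gammas]
    print(f"gamma with best alignment: {gammas[int(np.argmax(scores))]:.4f}")
```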

  13. Neuronal model with distributed delay: analysis and simulation study for gamma distribution memory kernel.

    Science.gov (United States)

    Karmeshu; Gupta, Varun; Kadambari, K V

    2011-06-01

    A single neuronal model incorporating distributed delay (memory) is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE) which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of the exponential kernel has enabled the transformation of the non-Markovian model to a Markovian model in an extended state space. For the study of First Passage Time (FPT) with exponential delay kernel, the model has been transformed to a system of coupled Stochastic Differential Equations (SDEs) in two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of the weak delay kernel on the Inter-Spike Interval (ISI) distribution. A measure based on Jensen-Shannon divergence is proposed which can be used to make a choice between two competing models, viz. the distributed delay model vis-à-vis the LIF model. An interesting feature of the model is that the behavior of the coefficient of variation CV_ISI(t) of the ISI distribution with respect to the memory kernel time constant parameter η reveals that the neuron can switch from a bursting state to a non-bursting state as the noise intensity parameter changes. The membrane potential exhibits a decaying auto-correlation structure with or without damped oscillatory behavior depending on the choice of parameters. This behavior is in agreement with the empirically observed pattern of spike count in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay with a memory kernel having the form of a Gamma distribution. In contrast to the fast decay of damped oscillations of the ISI distribution for the model with the weak delay kernel, the decay of damped oscillations is found to be slower for the model with the strong delay kernel.

  14. Bayesian Kernel Mixtures for Counts.

    Science.gov (United States)

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
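    A minimal sketch of the "rounded kernel" idea for the Gaussian case is given below, under one explicit rounding convention stated in the comments; the Dirichlet process machinery and the Gibbs sampler of the paper are not implemented, and the fixed two-component mixture only stands in for a fitted nonparametric mixture.

```python
import numpy as np
from scipy.stats import norm

def rounded_gaussian_pmf(k, mu, sigma):
    """pmf of a count obtained by rounding a latent N(mu, sigma^2) draw.

    Convention assumed here (one of several possible): the count is 0 when
    the latent value falls below 1, and floor(latent) otherwise, so
      P(Y = 0) = Phi((1 - mu)/sigma)
      P(Y = k) = Phi((k + 1 - mu)/sigma) - Phi((k - mu)/sigma),  k >= 1.
    """
    k = np.asarray(k)
    upper = norm.cdf((k + 1 - mu) / sigma)
    lower = np.where(k == 0, 0.0, norm.cdf((k - mu) / sigma))
    return upper - lower

def mixture_pmf(k, weights, mus, sigmas):
    """Finite mixture of rounded Gaussian kernels (illustrative stand-in)."""
    return sum(w * rounded_gaussian_pmf(k, m, s)
               for w, m, s in zip(weights, mus, sigmas))

if __name__ == "__main__":
    ks = np.arange(0, 15)
    pmf = mixture_pmf(ks, weights=[0.6, 0.4], mus=[1.5, 7.0], sigmas=[0.8, 2.0])
    print("total mass over counts 0..14:", round(float(pmf.sum()), 4))
```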

  15. Hardness variability in commercial technologies

    International Nuclear Information System (INIS)

    Shaneyfelt, M.R.; Winokur, P.S.; Meisenheimer, T.L.; Sexton, F.W.; Roeske, S.B.; Knoll, M.G.

    1994-01-01

    The radiation hardness of commercial Floating Gate 256K E²PROMs from a single diffusion lot was observed to vary between 5 to 25 krad(Si) when irradiated at a low dose rate of 64 mrad(Si)/s. Additional variations in E²PROM hardness were found to depend on bias condition and failure mode (i.e., inability to read or write the memory), as well as the foundry at which the part was manufactured. This variability is related to system requirements, and it is shown that hardness level and variability affect the allowable mode of operation for E²PROMs in space applications. The radiation hardness of commercial 1-Mbit CMOS SRAMs from Micron, Hitachi, and Sony irradiated at 147 rad(Si)/s was approximately 12, 13, and 19 krad(Si), respectively. These failure levels appear to be related to increases in leakage current during irradiation. Hardness of SRAMs from each manufacturer varied by less than 20%, but differences between manufacturers are significant. The Qualified Manufacturer's List approach to radiation hardness assurance is suggested as a way to reduce variability and to improve the hardness level of commercial technologies

  16. The influence of soft kernel texture on the flour, water absorption, rheology, and baking quality of durum wheat

    Science.gov (United States)

    Durum (T. turgidum subsp. durum) wheat production worldwide is substantially less than that of common wheat (Triticum aestivum). Durum kernels are extremely hard; leading to most durum wheat being milled into semolina. Durum wheat production is limited in part due to the relatively limited end-user ...

  17. Approximate N3LO Higgs-boson production cross section using physical-kernel constraints

    International Nuclear Information System (INIS)

    Florian, D. de; Moch, S.; Hamburg Univ.; Vogt, A.

    2014-08-01

    The single-logarithmic enhancement of the physical kernel for Higgs production by gluon-gluon fusion in the heavy top-quark limit is employed to derive the leading so far unknown contributions, ln^k(1-z) with k = 5, 4, 3, to the N³LO coefficient function in the threshold expansion. Also using knowledge from Higgs-exchange DIS to estimate the remaining terms not vanishing for z = m_H²/s → 1, these results are combined with the recently completed soft+virtual contributions to provide an uncertainty band for the complete N³LO correction. For the 2008 MSTW parton distributions these N³LO contributions increase the cross section at 14 TeV by (10±2)% and (3±2.5)% for the standard choices μ_R = m_H and μ_R = m_H/2 of the renormalization scale. The remaining uncertainty arising from the hard-scattering cross sections can be quantified as no more than 5%, which is smaller than that due to the strong coupling and the parton distributions.

  18. Putting Priors in Mixture Density Mercer Kernels

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.

  19. Anisotropic hydrodynamics with a scalar collisional kernel

    Science.gov (United States)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2↔2 scattering kernel in scalar λφ⁴. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation-times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  20. A Visual Approach to Investigating Shared and Global Memory Behavior of CUDA Kernels

    KAUST Repository

    Rosen, Paul

    2013-06-01

    We present an approach to investigate the memory behavior of a parallel kernel executing on thousands of threads simultaneously within the CUDA architecture. Our top-down approach allows for quickly identifying any significant differences between the execution of the many blocks and warps. As interesting warps are identified, we allow further investigation of memory behavior by visualizing the shared memory bank conflicts and global memory coalescence, first with an overview of a single warp with many operations and, subsequently, with a detailed view of a single warp and a single operation. We demonstrate the strength of our approach in the context of a parallel matrix transpose kernel and a parallel 1D Haar Wavelet transform kernel. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.
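    The visualization itself is not reproduced here; the toy calculation below only illustrates the shared-memory quantity such a tool summarizes, namely how many distinct words of a single warp's access map to the same bank, assuming the common 32-bank, 4-byte-word layout. The column-read example and the pad-to-33-floats fix are standard textbook illustrations rather than material from the paper.

```python
BANKS = 32        # assumed number of shared-memory banks
WORD_BYTES = 4    # assumed bank word size

def bank_conflict_degree(byte_addresses):
    """Worst-case replay factor of one warp's shared-memory access:
    the largest number of distinct words that fall into the same bank."""
    words_per_bank = {}
    for addr in byte_addresses:
        word = addr // WORD_BYTES
        words_per_bank.setdefault(word % BANKS, set()).add(word)
    return max(len(words) for words in words_per_bank.values())

if __name__ == "__main__":
    # 32 threads reading column 5 of a row-major 32x32 float tile:
    # every thread hits bank 5, giving a 32-way conflict.
    column = [(t * 32 + 5) * WORD_BYTES for t in range(32)]
    # Padding each row to 33 floats spreads the accesses across all banks.
    padded = [(t * 33 + 5) * WORD_BYTES for t in range(32)]
    print("unpadded tile:", bank_conflict_degree(column), "way conflict")
    print("padded tile:  ", bank_conflict_degree(padded), "way conflict")
```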

  1. A Visual Approach to Investigating Shared and Global Memory Behavior of CUDA Kernels

    KAUST Repository

    Rosen, Paul

    2013-01-01

    We present an approach to investigate the memory behavior of a parallel kernel executing on thousands of threads simultaneously within the CUDA architecture. Our top-down approach allows for quickly identifying any significant differences between the execution of the many blocks and warps. As interesting warps are identified, we allow further investigation of memory behavior by visualizing the shared memory bank conflicts and global memory coalescence, first with an overview of a single warp with many operations and, subsequently, with a detailed view of a single warp and a single operation. We demonstrate the strength of our approach in the context of a parallel matrix transpose kernel and a parallel 1D Haar Wavelet transform kernel. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.

  2. Optimizing memory-bound SYMV kernel on GPU hardware accelerators

    KAUST Repository

    Abdelfattah, Ahmad

    2013-01-01

    Hardware accelerators are becoming ubiquitous in high performance scientific computing. They are capable of delivering an unprecedented level of concurrent execution contexts. High-level programming language extensions (e.g., CUDA) and profiling tools (e.g., PAPI-CUDA, CUDA Profiler) are paramount to improve productivity, while effectively exploiting the underlying hardware. We present an optimized numerical kernel for computing the symmetric matrix-vector product on nVidia Fermi GPUs. Due to its inherent memory-bound nature, this kernel is very critical in the tridiagonalization of a symmetric dense matrix, which is a preprocessing step to calculate the eigenpairs. Using a novel design to address the irregular memory accesses by hiding latency and increasing bandwidth, our preliminary asymptotic results show 3.5x and 2.5x speedups over the similar CUBLAS 4.0 kernel, and 7-8% and 30% improvements over the Matrix Algebra on GPU and Multicore Architectures (MAGMA) library, in single and double precision arithmetic, respectively. © 2013 Springer-Verlag.

  3. Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm

    African Journals Online (AJOL)

    In this paper, we shall use higher-order hybrid Gaussian kernel in a meshsize boosting algorithm in kernel density estimation. Bias reduction is guaranteed in this scheme like other existing schemes but uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...

  4. NLO corrections to the Kernel of the BKP-equations

    Energy Technology Data Exchange (ETDEWEB)

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.

  5. Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  6. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  7. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...

  8. 7 CFR 51.1441 - Half-kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.1441 - Half-kernel. United States Standards for Grades of Shelled Pecans, Definitions, § 51.1441. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  9. 7 CFR 51.2296 - Three-fourths half kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.2296 - Three-fourths half kernel. Regulations of the Department of Agriculture, Agricultural Marketing Service (Standards), § 51.2296. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  10. 7 CFR 981.401 - Adjusted kernel weight.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.401 - Adjusted kernel weight. Administrative Rules and Regulations, § 981.401. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...

  11. 7 CFR 51.1403 - Kernel color classification.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.1403 - Kernel color classification. United States Standards for Grades of Pecans in the Shell, Kernel Color Classification, § 51.1403. (a) The skin color of pecan kernels may be described in terms of the color...

  12. The Linux kernel as flexible product-line architecture

    NARCIS (Netherlands)

    M. de Jonge (Merijn)

    2002-01-01

    The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select what

  13. IDENTIFICATION AND MAPPING OF A GENE FOR RICE SLENDER KERNEL USING Oryza glumaepatula INTROGRESSION LINES

    Directory of Open Access Journals (Sweden)

    Sobrizal Sobrizal

    2016-10-01

    Full Text Available World demand for superior rice grain quality tends to increase. One of the criteria of the appearance quality of rice grain is grain shape. Rice consumers exhibit wide preferences for grain shape, but most Indonesian rice consumers prefer long and slender grain. The objectives of this study were to identify and map a gene for the rice slender kernel trait using Oryza glumaepatula introgression lines with an O. sativa cv. Taichung 65 genetic background. A segregation analysis of a BC4F2 population derived from backcrosses of the donor parent O. glumaepatula into the recurrent parent Taichung 65 showed that the slender kernel was controlled by a single recessive gene. This newly identified gene was designated sk1 (slender kernel 1). Moreover, based on RFLP analyses using 14 RFLP markers located on chromosomes 2, 8, 9, and 10, in which the O. glumaepatula chromosomal segments were retained in the BC4F2 population, sk1 was located between RFLP markers C679 and C560 on the long arm of chromosome 2, with map distances of 2.8 and 1.5 cM, respectively. The wild rice O. glumaepatula carried a recessive allele for slender kernel. This allele may be useful in the breeding of rice with slender kernel types. In addition, the development of plant materials and the RFLP map associated with slender kernel in this study constitute preliminary work in the effort to isolate this important grain shape gene.

  14. SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement

    International Nuclear Information System (INIS)

    Chamberlain, S; French, S; Nazareth, D

    2016-01-01

    Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts were run (varying from 3 × 10⁶ to 3 × 10⁷); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10⁶ was established as the lower range because it provided the minimal accuracy level. Results: As expected, an increase in particle counts linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run, to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases time necessary for each kernel dose calculation. Particle counts lower than 1 × 10⁶ have too large of an error to output accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.
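    BEAMnrc/DOSXYZnrc are not shown here; the sketch below only illustrates the embarrassingly parallel split of particle histories across worker processes described above, with a toy scoring function standing in for the real transport code. The voxel size and history count echo the abstract, but the Gaussian spread model and the deposited-fraction estimator are invented for illustration.

```python
import numpy as np
from multiprocessing import Pool

def run_histories(args):
    """Toy estimator: fraction of sampled 'particles' depositing inside a
    0.5 cm x 0.5 cm x 0.5 cm voxel (a real run would call DOSXYZnrc)."""
    n_histories, seed = args
    rng = np.random.default_rng(seed)
    r = rng.normal(0.0, 1.0, size=(n_histories, 3))   # crude spread model (cm)
    inside = np.all(np.abs(r) < 0.25, axis=1)
    return int(inside.sum()), n_histories

def parallel_estimate(total_histories, n_procs):
    """Split the requested histories evenly over n_procs worker processes."""
    per_proc = total_histories // n_procs
    jobs = [(per_proc, seed) for seed in range(n_procs)]
    with Pool(processes=n_procs) as pool:
        results = pool.map(run_histories, jobs)
    hits = sum(h for h, _ in results)
    total = sum(n for _, n in results)
    return hits / total

if __name__ == "__main__":
    for procs in (1, 4):
        frac = parallel_estimate(3_000_000, procs)
        print(f"{procs} process(es): deposited fraction ~ {frac:.5f}")
```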

  15. Digital signal processing with kernel methods

    CERN Document Server

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  16. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    Full Text Available In this study, a parsimonious scheme for the wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into the kernel extreme learning machine (KELM). In the wavelet analysis, bases that were localized in time and frequency to represent various signals effectively were used. The wavelet kernel extreme learning machine (WELM) maximized its capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm also incorporated significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results achieved on a synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
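    A compact sketch of the two ingredients named above is given below: a wavelet kernel built from a Morlet-type mother wavelet h(u) = cos(1.75u) exp(-u^2/2), and the closed-form kernel ELM solution beta = (I/C + Omega)^(-1) y. The parsimonious Householder-based selection step of PWKELM is not implemented, and the regression data, the regularization constant C and the dilation parameter a are illustrative assumptions.

```python
import numpy as np

def wavelet_kernel(A, B, a=1.0):
    """Wavelet kernel k(x, y) = prod_d h((x_d - y_d)/a) with the mother
    wavelet h(u) = cos(1.75 u) * exp(-u^2 / 2)."""
    U = (A[:, None, :] - B[None, :, :]) / a
    return np.prod(np.cos(1.75 * U) * np.exp(-0.5 * U**2), axis=2)

def kelm_fit(X, y, C=1000.0, a=1.0):
    """Kernel extreme learning machine: beta = (I/C + Omega)^(-1) y."""
    Omega = wavelet_kernel(X, X, a)
    beta = np.linalg.solve(np.eye(len(X)) / C + Omega, y)
    return X, beta, a

def kelm_predict(model, X_new):
    X_tr, beta, a = model
    return wavelet_kernel(X_new, X_tr, a) @ beta

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    X = rng.uniform(-3.0, 3.0, size=(200, 1))
    y = np.sin(2.0 * X[:, 0]) + 0.05 * rng.normal(size=200)
    model = kelm_fit(X[:150], y[:150])
    pred = kelm_predict(model, X[150:])
    print("test RMSE:", round(float(np.sqrt(np.mean((pred - y[150:]) ** 2))), 4))
```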

  17. Ensemble Approach to Building Mercer Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive...

  18. Control Transfer in Operating System Kernels

    Science.gov (United States)

    1994-05-13

    microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the... review how I modified the Mach 3.0 kernel to use continuations. Because of Mach's message-passing microkernel structure, interprocess communication was... critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating

  19. Uranium kernel formation via internal gelation

    International Nuclear Information System (INIS)

    Hunt, R.D.; Collins, J.L.

    2004-01-01

    In the 1970s and 1980s, the U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tri-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation as well as small changes to the feed composition increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)

  20. Quantum tomography, phase-space observables and generalized Markov kernels

    International Nuclear Information System (INIS)

    Pellonpaeae, Juha-Pekka

    2009-01-01

    We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.

  1. Theory of hard diffraction and rapidity gaps

    International Nuclear Information System (INIS)

    Del Duca, V.

    1995-06-01

    In this talk we review the models describing the hard diffractive production of jets or more generally high-mass states in presence of rapidity gaps in hadron-hadron and lepton-hadron collisions. By rapidity gaps we mean regions on the lego plot in (pseudo)-rapidity and azimuthal angle where no hadrons are produced, between the jet(s) and an elastically scattered hadron (single hard diffraction) or between two jets (double hard diffraction). (orig.)

  2. Theory of hard diffraction and rapidity gaps

    International Nuclear Information System (INIS)

    Del Duca, V.

    1996-01-01

    In this talk we review the models describing the hard diffractive production of jets or more generally high-mass states in presence of rapidity gaps in hadron-hadron and lepton-hadron collisions. By rapidity gaps we mean regions on the lego plot in (pseudo)-rapidity and azimuthal angle where no hadrons are produced, between the jet(s) and an elastically scattered hadron (single hard diffraction) or between two jets (double hard diffraction). copyright 1996 American Institute of Physics

  3. Determination of the Iodine Value in Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO)

    OpenAIRE

    Sitompul, Monica Angelina

    2015-01-01

    The iodine value was determined by titration for several samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO). The analysis gave iodine values of 0.16 g I2/100 g for Hydrogenated Palm Kernel Oil (A), 0.20 g I2/100 g for Hydrogenated Palm Kernel Oil (B), and 0.24 g I2/100 g for Hydrogenated Palm Kernel Oil (C), and of 17.51 g I2/100 g for Refined Bleached Deodorized Palm Kernel Oil (A), Refined Bleached Deodorized Palm Kernel ...

  4. Single and combined effects of phosphate, silicate, and natural organic matter on arsenic removal from soft and hard groundwater using ferric chloride

    Science.gov (United States)

    Chanpiwat, Penradee; Hanh, Hoang Thi; Bang, Sunbaek; Kim, Kyoung-Woong

    2017-06-01

    In order to assess the effects of phosphate, silicate and natural organic matter (NOM) on arsenic removal by ferric chloride, batch coprecipitation experiments were conducted over a wide pH range using synthetic hard and soft groundwaters, similar to those found in northern Vietnam. The efficiency of arsenic removal from synthetic groundwater by coprecipitation with FeCl₃ was remarkably decreased by the effects of PO₄³⁻, SiO₄⁴⁻ and NOM. The negative effects of SiO₄⁴⁻ and NOM on arsenic removal were not as strong as that of PO₄³⁻. Combining PO₄³⁻ and SiO₄⁴⁻ increased the negative effects on both arsenite (As³⁺) and arsenate (As⁵⁺) removal. The introduction of NOM into the synthetic groundwater containing both PO₄³⁻ and SiO₄⁴⁻ markedly magnified the negative effects on arsenic removal. In contrast, both Ca²⁺ and Mg²⁺ substantially increased the removal of As³⁺ at pH 8-12 and the removal of As⁵⁺ over the entire pH range. In the presence of Ca²⁺ and Mg²⁺, the interaction of NOM with Fe was either removed or the arsenic binding to Fe-NOM colloidal associations and/or dissolved complexes were flocculated. Removal of arsenic using coprecipitation by FeCl₃ could not sufficiently reduce arsenic contents in the groundwater (350 μg/L) to meet the WHO guideline for drinking water (10 μg/L), especially when the arsenic-rich groundwater also contains co-occurring solutes such as PO₄³⁻, SiO₄⁴⁻ and NOM; therefore, other remediation processes, such as membrane technology, should be introduced or additionally applied after this coprecipitation process, to ensure the safety of drinking water.

  5. Crystal growth and mechanical hardness of In₂Se₂.₇Sb₀.₃ single crystal

    Energy Technology Data Exchange (ETDEWEB)

    Patel, Piyush, E-mail: piyush-patel130@yahoo.com; Vyas, S. M., E-mail: s-m-vyas-gu@hotmail.com; Patel, Vimal; Pavagadhi, Himanshu [Department of Physics, School of Science, Gujarat University, Ahmedabad, Gujarat, India-380009 (India); Solanki, Mitesh [panditdindayal Petroleum University, Gandhinagar. Gujarat (India); Jani, Maunik P. [BITS Edu Campus, Varnama, Vadodara, Gujarat (India)

    2015-08-28

    The III-VI compound semiconductors are important for the fabrication of ionizing radiation detectors, solid-state electrodes, photosensitive heterostructures, solar cells and ionic batteries. In this paper, In₂Se₂.₇Sb₀.₃ single crystals were grown by the Bridgman method with a temperature gradient of 60 °C/cm and a growth velocity of 0.5 cm/hr. The as-grown crystals were examined under the optical microscope for surface study; various growth features observed on the top free surface of the single crystal indicate a predominantly layer growth mechanism. The lattice parameters of the as-grown crystal were determined by XRD analysis. A Vickers projection microscope was used for the study of microhardness on the as-cleaved, cold-worked and annealed samples of the crystals; the results were discussed and reported in detail.

  6. Calorimeter triggers for hard collisions

    International Nuclear Information System (INIS)

    Landshoff, P.V.; Polkinghorne, J.C.

    1978-01-01

    We discuss the use of a forward calorimeter to trigger on hard hadron-hadron collisions. We give a derivation in the covariant parton model of the Ochs-Stodolsky scaling law for single-hard-scattering processes, and investigate the conditions under which a multiple-scattering mechanism might instead dominate. With a proton beam, this mechanism results in six transverse jets, with a total average multiplicity about twice that seen in ordinary events. We estimate that its cross section is likely to be experimentally accessible at values of the beam energy in the region of 100 GeV/c

  7. Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM

    Directory of Open Access Journals (Sweden)

    Chenchao Zhao

    2018-01-01

    Full Text Available Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
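    The exact series kernel of this record is not reproduced here; the sketch below only shows the heuristic zeroth-order parametrix kernel it is compared against, K(x, y) proportional to exp(-arccos(x·y)^2 / (4t)), plugged into an SVM as a callable kernel. The square-root embedding onto the unit hypersphere, the diffusion time t and the toy count data are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def sphere_embed(X):
    """Map nonnegative count vectors to the unit hypersphere: x -> sqrt(x / sum x)."""
    return np.sqrt(X / X.sum(axis=1, keepdims=True))

def parametrix_heat_kernel(t):
    """Heuristic heat kernel on the sphere: exp(-arccos(x.y)^2 / (4t))."""
    def k(A, B):
        cos = np.clip(A @ B.T, -1.0, 1.0)
        return np.exp(-np.arccos(cos) ** 2 / (4.0 * t))
    return k

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    rates0 = np.r_[np.full(10, 4.0), np.full(10, 1.0)]   # toy class profiles
    rates1 = rates0[::-1]
    X = np.vstack([rng.poisson(rates0, (60, 20)),
                   rng.poisson(rates1, (60, 20))]) + 1.0
    y = np.array([0] * 60 + [1] * 60)
    perm = rng.permutation(120)
    Xs, y = sphere_embed(X)[perm], y[perm]
    clf = SVC(kernel=parametrix_heat_kernel(t=0.1)).fit(Xs[:90], y[:90])
    print("held-out accuracy:", clf.score(Xs[90:], y[90:]))
```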

  8. Radiation hardness tests and characterization of the CLARO-CMOS, a low power and fast single-photon counting ASIC in 0.35 micron CMOS technology

    International Nuclear Information System (INIS)

    Fiorini, M.; Andreotti, M.; Baldini, W.; Calabrese, R.; Carniti, P.; Cassina, L.; Cotta Ramusino, A.; Giachero, A.; Gotti, C.; Luppi, E.; Maino, M.; Malaguti, R.; Pessina, G.; Tomassetti, L.

    2014-01-01

    The CLARO-CMOS is a prototype ASIC that allows fast photon counting with 5 ns peaking time, a recovery time to baseline smaller than 25 ns, and a power consumption of less than 1 mW per channel. This chip is capable of single-photon counting with multi-anode photomultipliers and finds applications also in the read-out of silicon photomultipliers and microchannel plates. The prototype is realized in AMS 0.35 micron CMOS technology. In the LHCb RICH environment, assuming 10 years of operation at the nominal luminosity expected after the upgrade in Long Shutdown 2 (LS2), the ASIC must withstand a total fluence of about 6×10¹² 1 MeV n_eq/cm² and a total ionizing dose of 400 krad. A systematic evaluation of the radiation effects on the CLARO-CMOS performance is therefore crucial to ensure long term stability of the electronics front-end. The results of multi-step irradiation tests with neutrons and X-rays up to the fluence of 10¹⁴ cm⁻² and a dose of 4 Mrad, respectively, are presented, including measurement of single event effects during irradiation and chip performance evaluation before and after each irradiation step. - Highlights: • CLARO chip capable of single-photon counting with 5 ns peaking time. • Chip irradiated up to very high neutron, proton and X-rays fluences, as expected for upgraded LHCb RICH detectors. • No significant performance degradation is observed after irradiation

  9. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models

    Science.gov (United States)

    Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A.; Burgueño, Juan; Pérez-Rodríguez, Paulino; de los Campos, Gustavo

    2016-01-01

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have superior prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. PMID:27793970

  10. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models

    Directory of Open Access Journals (Sweden)

    Jaime Cuevas

    2017-01-01

    Full Text Available The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have superior prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u.

  11. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models.

    Science.gov (United States)

    Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A; Burgueño, Juan; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo

    2017-01-05

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance-covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have superior prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. Copyright © 2017 Cuevas et al.
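    A minimal sketch of the two marker-based kernels contrasted in these records is shown below: the linear GBLUP relationship matrix G = XX'/p built from a centered marker matrix, and a Gaussian kernel on squared marker distances scaled by their median. The Bayesian multi-environment model itself (the Kronecker G × E structure and the extra effect f) is not implemented here, and the bandwidth h, the median scaling and the toy SNP matrix are assumptions for illustration.

```python
import numpy as np

def gblup_kernel(M):
    """Linear (GBLUP) genomic relationship kernel G = X X' / p,
    with X the column-centered marker matrix (individuals x markers)."""
    X = M - M.mean(axis=0)
    return X @ X.T / X.shape[1]

def gaussian_kernel(M, h=1.0):
    """Gaussian kernel K_ij = exp(-h * d_ij^2 / median(d^2)) on squared
    Euclidean marker distances (a common scaling choice)."""
    sq = np.sum(M**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * M @ M.T, 0.0)
    med = np.median(d2[np.triu_indices_from(d2, k=1)])
    return np.exp(-h * d2 / med)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    M = rng.integers(0, 3, size=(100, 500)).astype(float)   # toy 0/1/2 SNP codes
    G, K = gblup_kernel(M), gaussian_kernel(M, h=1.0)
    print("GBLUP kernel diagonal mean:", round(float(np.diag(G).mean()), 3))
    print("Gaussian kernel min value: ", round(float(K.min()), 3))
```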

  12. Aflatoxin contamination of developing corn kernels.

    Science.gov (United States)

    Amer, M A

    2005-01-01

    Preharvest contamination of corn with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. The stage of growth and the location of kernels on corn ears were found to be among the important factors in the process of kernel infection with A. flavus & A. parasiticus. The results showed a positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein content were reduced in the case of both pathogens. Shoot system length, seedling fresh weight and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected. Their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease. On the other hand, total phenolic compounds increased. Histopathological studies indicated that A. flavus & A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus & A. parasiticus and aflatoxin production.

  13. OS X and iOS Kernel Programming

    CERN Document Server

    Halvorsen, Ole Henry

    2011-01-01

    OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i

  14. The Classification of Diabetes Mellitus Using Kernel k-means

    Science.gov (United States)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes Mellitus is a metabolic disorder characterized by chronically elevated blood glucose. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm. Kernel k-means uses kernel learning, which is able to handle data that are not linearly separable; this is where it differs from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means has good performance and performs much better than SOM.
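    A generic sketch of the computation behind kernel k-means is given below: cluster assignments are updated from feature-space distances that can be written entirely in terms of the Gram matrix. This is not the authors' exact pipeline or data; the RBF kernel, its width, and the two-ring toy data are illustrative assumptions.

```python
import numpy as np

def rbf_gram(X, gamma=0.5):
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-gamma * d2)

def kernel_kmeans(K, n_clusters=2, n_iter=50, seed=0):
    """Kernel k-means using only the Gram matrix K.

    Squared feature-space distance from point i to the centroid of cluster c:
      K_ii - 2 * mean_{j in c} K_ij + mean_{j,l in c} K_jl
    """
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(n_clusters, size=n)
    for _ in range(n_iter):
        dist = np.zeros((n, n_clusters))
        for c in range(n_clusters):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:                      # re-seed an empty cluster
                idx = rng.integers(n, size=1)
            dist[:, c] = (np.diag(K) - 2.0 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # two concentric rings: not linearly separable, a case plain k-means misses
    t = rng.uniform(0.0, 2.0 * np.pi, 200)
    r = np.r_[np.full(100, 1.0), np.full(100, 3.0)] + rng.normal(0.0, 0.1, 200)
    X = np.c_[r * np.cos(t), r * np.sin(t)]
    labels = kernel_kmeans(rbf_gram(X, gamma=2.0), n_clusters=2)
    print("cluster sizes:", np.bincount(labels))
```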

  15. Object classification and detection with context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

    Context information is important in object representation. By embedding context cue of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial...... consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...

  16. Option Valuation with Volatility Components, Fat Tails, and Nonlinear Pricing Kernels

    DEFF Research Database (Denmark)

    Babaoglu, Kadir Gokhan; Christoffersen, Peter; Heston, Steven

    We nest multiple volatility components, fat tails and a U-shaped pricing kernel in a single option model and compare their contribution to describing returns and option data. All three features lead to statistically significant model improvements. A second volatility factor is economically most i...

  17. Eucalyptus-Palm Kernel Oil Blends: A Complete Elimination of Diesel in a 4-Stroke VCR Diesel Engine

    Directory of Open Access Journals (Sweden)

    Srinivas Kommana

    2015-01-01

    Full Text Available Fuels derived from biomass are mostly preferred as alternative fuels for IC engines as they are abundantly available and renewable in nature. The objective of the study is to identify the parameters that influence gross indicated fuel conversion efficiency and how they are affected by the use of biodiesel relative to petroleum diesel. Important physicochemical properties of the palm kernel oil and eucalyptus blend were experimentally evaluated and found to be within the acceptable limits of the relevant standards. As most vegetable oils are edible, there is growing interest in trying nonedible and waste fats as alternatives to petrodiesel. In the present study diesel fuel is completely replaced by biofuels, namely, methyl ester of palm kernel oil and eucalyptus oil in various blends. Different blends of palm kernel oil and eucalyptus oil are prepared on a volume basis and used as the operating fuel in a single-cylinder 4-stroke variable compression ratio diesel engine. Performance and emission characteristics of these blends are studied by varying the compression ratio. In the present experiment the methyl ester extracted from palm kernel oil is considered the ignition improver and eucalyptus oil is considered the fuel. The blends taken are PKE05 (palm kernel oil 95 + eucalyptus 05), PKE10 (palm kernel oil 90 + eucalyptus 10), and PKE15 (palm kernel oil 85 + eucalyptus 15). The results obtained by operating with these fuels are compared with the results for pure diesel; finally the most preferable combination and the preferred compression ratio are identified.

  18. Evaluation of palm kernel fibers (PKFs) for production of asbestos-free automotive brake pads

    Directory of Open Access Journals (Sweden)

    K.K. Ikpambese

    2016-01-01

    Full Text Available In this study, asbestos-free automotive brake pads produced from palm kernel fibers with an epoxy-resin binder were evaluated. The resin formulations were varied, and properties such as friction coefficient, wear rate, hardness, porosity, noise level, temperature, specific gravity, stopping time, moisture effects, surface roughness, and oil and water absorption rates were investigated. Other basic engineering properties, including mechanical overload, thermal deformation, fading behaviour, shear strength, cracking resistance, over-heat recovery, effect on the rotor disc, caliper pressure, pad grip effect and pad dusting effect, were also investigated. The results obtained indicated that the wear rate, coefficient of friction, noise level, temperature, and stopping time of the produced brake pads increased as the speed increased. The results also show that porosity, hardness, moisture content, specific gravity, surface roughness, and oil and water absorption rates remained constant with increase in speed. Microstructure examination revealed that the worn surfaces were characterized by abrasive wear in which the asperities were ploughed, thereby exposing the white region of the palm kernel fibers and thus increasing the smoothness of the friction materials. Sample S6, with a composition of 40% epoxy-resin, 10% palm wastes, 6% Al2O3, 29% graphite, and 15% calcium carbonate, gave better properties. The result indicated that palm kernel fibers can be effectively used as a replacement for asbestos in brake pad production.

  19. Reproducing kernel method with Taylor expansion for linear Volterra integro-differential equations

    Directory of Open Access Journals (Sweden)

    Azizallah Alvandi

    2017-06-01

    Full Text Available This research aims to present a new, single algorithm for linear integro-differential equations (LIDEs). To apply the reproducing kernel Hilbert space method, an equivalent transformation is made by using the Taylor series for solving LIDEs. The analytical solution is shown in series form in the reproducing kernel space, and the approximate solution $u_N$ is constructed by truncating the series to $N$ terms. It is easy to prove the convergence of $u_N$ to the analytical solution. The numerical solutions from the proposed method indicate that this approach can be implemented easily and shows attractive features.

  20. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    Science.gov (United States)

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before undergoing classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. Thus, this paper introduces the KDA method and proposes a new method for Gaussian kernel parameter selection based on the fact that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for certain suitable kernel parameters. Experiments with various standard data sets of protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on the efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.

  1. Kernel abortion in maize. II. Distribution of 14C among kernel carboydrates

    International Nuclear Information System (INIS)

    Hanft, J.M.; Jones, R.J.

    1986-01-01

    This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [¹⁴C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [¹⁴C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose

  2. Fluidization calculation on nuclear fuel kernel coating

    International Nuclear Information System (INIS)

    Sukarsono; Wardaya; Indra-Suryawan

    1996-01-01

    The fluidization behaviour during nuclear fuel kernel coating was calculated. The bottom of the reactor is in the form of a cone, with a cylinder on top of it; the diameter of the fluidization cylinder is 2 cm, and the upper part of the cylinder is 3 cm in diameter. Fluidization takes place in the cone and the first cylinder. The maximum and minimum gas velocities were calculated for various kernel diameters, and the porosity and bed height were calculated for various gas stream velocities. The calculation was carried out with a BASIC program
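
    The record does not state which correlations were used; the sketch below assumes the widely used Wen-Yu correlation for the minimum fluidization velocity and a Schiller-Naumann drag iteration for the terminal (roughly maximum) gas velocity, with purely illustrative kernel and gas properties.

      import math

      def u_mf(d_p, rho_p, rho_g, mu, g=9.81):
          """Minimum fluidization velocity from the Wen-Yu correlation (assumed)."""
          ar = rho_g * (rho_p - rho_g) * g * d_p**3 / mu**2     # Archimedes number
          re_mf = math.sqrt(33.7**2 + 0.0408 * ar) - 33.7
          return re_mf * mu / (rho_g * d_p)

      def u_terminal(d_p, rho_p, rho_g, mu, g=9.81):
          """Terminal velocity (rough upper gas velocity) via Schiller-Naumann drag."""
          u = 1.0
          for _ in range(50):                                   # fixed-point iteration
              re = max(rho_g * u * d_p / mu, 1e-9)
              cd = 24.0 / re * (1.0 + 0.15 * re**0.687)         # valid for Re < ~800
              u = math.sqrt(4.0 * g * d_p * (rho_p - rho_g) / (3.0 * cd * rho_g))
          return u

      # Illustrative values only: a ~500 micrometre heavy kernel fluidized in argon
      d_p, rho_p = 500e-6, 10900.0   # particle diameter [m], particle density [kg/m3]
      rho_g, mu = 1.6, 2.2e-5        # gas density [kg/m3], gas viscosity [Pa s]
      print("u_mf  =", round(u_mf(d_p, rho_p, rho_g, mu), 3), "m/s")
      print("u_max =", round(u_terminal(d_p, rho_p, rho_g, mu), 3), "m/s")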

  3. Method for calculating anisotropic neutron transport using scattering kernel without polynomial expansion

    International Nuclear Information System (INIS)

    Takahashi, Akito; Yamamoto, Junji; Ebisuya, Mituo; Sumita, Kenji

    1979-01-01

    A new method for calculating anisotropic neutron transport is proposed for the angular spectral analysis of D-T fusion reactor neutronics. The method is based on the transport equation with a new type of anisotropic scattering kernel formulated by a single function I_i(μ', μ) instead of a polynomial expansion, for instance in Legendre polynomials. In the calculation of angular flux spectra using scattering kernels with a Legendre polynomial expansion, we often observe oscillations with negative flux; in principle, this oscillation disappears with the new method. In this work, we discussed the anisotropic scattering kernels of elastic scattering and of the inelastic scatterings which excite discrete energy levels. The other scatterings were included in isotropic scattering kernels. An approximation method, using the first collision source written in terms of the I_i(μ', μ) function, was introduced to attenuate the "oscillations" when we are obliged to use scattering kernels with the Legendre polynomial expansion. Calculated results with this approximation showed remarkable improvement for the analysis of the angular flux spectra in a slab system of lithium metal with a D-T neutron source. (author)
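
    The negative-flux oscillations mentioned above can be reproduced with a toy calculation: expand a strongly forward-peaked scattering kernel in a truncated Legendre series and inspect the minimum of the reconstruction. The exponential kernel and the truncation orders below are illustrative assumptions, not the paper's data.

      import numpy as np

      # A strongly forward-peaked (anisotropic) kernel in mu = cos(theta);
      # the exponential form is only an illustrative stand-in.
      kernel = lambda mu: np.exp(10.0 * (mu - 1.0))

      mu = np.linspace(-1.0, 1.0, 2001)
      dmu = mu[1] - mu[0]
      f = kernel(mu)

      def legendre_coeffs(f_vals, mu, order):
          """Coefficients c_l = (2l+1)/2 * integral of f(mu) P_l(mu) d mu."""
          coeffs = []
          for l in range(order + 1):
              basis = np.zeros(l + 1); basis[l] = 1.0
              p_l = np.polynomial.legendre.legval(mu, basis)
              coeffs.append((2 * l + 1) / 2.0 * np.sum(f_vals * p_l) * dmu)
          return np.array(coeffs)

      for order in (3, 7, 15):
          c = legendre_coeffs(f, mu, order)
          approx = np.polynomial.legendre.legval(mu, c)
          # Minima below zero are the spurious oscillations the abstract refers to.
          print(f"P{order} truncation: min of reconstructed kernel = {approx.min():.4f}")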

  4. QTL Mapping of Kernel Number-Related Traits and Validation of One Major QTL for Ear Length in Maize.

    Science.gov (United States)

    Huo, Dongao; Ning, Qiang; Shen, Xiaomeng; Liu, Lei; Zhang, Zuxin

    2016-01-01

    The kernel number is a grain yield component and an important maize breeding goal. Ear length, kernel number per row and ear row number are highly correlated with the kernel number per ear, which eventually determines the ear weight and grain yield. In this study, two sets of F2:3 families developed from two bi-parental crosses sharing one inbred line were used to identify quantitative trait loci (QTL) for four kernel number-related traits: ear length, kernel number per row, ear row number and ear weight. A total of 39 QTLs for the four traits were identified in the two populations. The phenotypic variance explained by a single QTL ranged from 0.4% to 29.5%. Additionally, 14 overlapping QTLs formed 5 QTL clusters on chromosomes 1, 4, 5, 7, and 10. Intriguingly, six QTLs for ear length and kernel number per row overlapped in a region on chromosome 1. This region was designated qEL1.10 and was validated as being simultaneously responsible for ear length, kernel number per row and ear weight in a near isogenic line-derived population, suggesting that qEL1.10 was a pleiotropic QTL with large effects. Furthermore, the performance of hybrids generated by crossing 6 elite inbred lines with two near isogenic lines at qEL1.10 showed the breeding value of qEL1.10 for the improvement of the kernel number and grain yield of maize hybrids. This study provides a basis for further fine mapping, molecular marker-aided breeding and functional studies of kernel number-related traits in maize.

  5. Soft Sensor of Vehicle State Estimation Based on the Kernel Principal Component and Improved Neural Network

    Directory of Open Access Journals (Sweden)

    Haorui Liu

    2016-01-01

    Full Text Available In car control systems, it is hard to measure some key vehicle states directly and accurately when running on the road, and the cost of measurement is high as well. To address these problems, a vehicle state estimation method based on kernel principal component analysis and an improved Elman neural network is proposed. Combining this with a nonlinear vehicle model of three degrees of freedom (3 DOF; longitudinal, lateral, and yaw motion), this paper applies the method to the soft sensing of the vehicle states. The simulation results of a double lane change, tested by Matlab/SIMULINK cosimulation, prove the KPCA-IENN algorithm (kernel principal component analysis and improved Elman neural network) to be quick and precise when tracking the vehicle states within the nonlinear area. This method can meet the software performance requirements of vehicle state estimation in precision, tracking speed, noise suppression, and other aspects.

  6. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    National Research Council Canada - National Science Library

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

    .... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...

  7. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...

  8. Influence of differently processed mango seed kernel meal on ...

    African Journals Online (AJOL)

    Influence of differently processed mango seed kernel meal on performance response of west African ... and TD( consisted spear grass and parboiled mango seed kernel meal with concentrate diet in a ratio of 35:30:35). ... HOW TO USE AJOL.

  9. On methods to increase the security of the Linux kernel

    International Nuclear Information System (INIS)

    Matvejchikov, I.V.

    2014-01-01

    Methods to increase the security of the Linux kernel for the implementation of imposed protection tools have been examined. The methods of incorporation into various subsystems of the kernel on the x86 architecture have been described [ru

  10. Linear and kernel methods for multi- and hypervariate change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    . Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual...... formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution......, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...
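
    A minimal sketch of the dual (Q-mode) formulation described here, restricted to kernel PCA: all quantities are computed from the centred Gram matrix, so the nonlinear mapping never appears explicitly. The Gaussian kernel, its width and the random data are assumptions for illustration; the kernel MAF/MNF variants are not shown.

      import numpy as np

      def rbf_kernel(X, Y, sigma):
          """Gaussian kernel matrix K_ij = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * sigma ** 2))

      def kernel_pca(X, n_components=2, sigma=1.0):
          """Dual (Q-mode) PCA: eigen-decompose the centred Gram matrix."""
          n = X.shape[0]
          K = rbf_kernel(X, X, sigma)
          J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
          Kc = J @ K @ J                               # centre in feature space
          w, V = np.linalg.eigh(Kc)                    # ascending eigenvalues
          idx = np.argsort(w)[::-1][:n_components]
          w, V = w[idx], V[:, idx]
          # Projections of the training samples onto the leading components
          return Kc @ (V / np.sqrt(np.maximum(w, 1e-12)))

      scores = kernel_pca(np.random.default_rng(0).normal(size=(50, 5)))
      print(scores.shape)   # (50, 2)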

  11. Kernel methods in orthogonalization of multi- and hypervariate data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis...... via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution also known as the kernel trick these inner products between the mappings...... are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...

  12. CMS results on hard diffraction

    CERN Document Server

    INSPIRE-00107098

    2013-01-01

    In these proceedings we present CMS results on hard diffraction. Diffractive dijet production in pp collisions at $\\sqrt{s}$=7 TeV is discussed. The cross section for dijet production is presented as a function of $\\tilde{\\xi}$, representing the fractional momentum loss of the scattered proton in single-diffractive events. The observation of W and Z boson production in events with a large pseudo-rapidity gap is also presented.

  13. Double hard scattering without double counting

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Markus [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Gaunt, Jonathan R. [VU Univ. Amsterdam (Netherlands). NIKHEF Theory Group; Schoenwald, Kay [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2017-02-15

    Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.

  14. Double hard scattering without double counting

    International Nuclear Information System (INIS)

    Diehl, Markus; Gaunt, Jonathan R.

    2017-02-01

    Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.

  15. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of migration kernel. The decomposition leads to an improved understanding of migration artifacts and, therefore, presents us with opportunities for improving the quality of RTM images.

  16. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on...several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  17. Relationship between attenuation coefficients and dose-spread kernels

    International Nuclear Information System (INIS)

    Boyer, A.L.

    1988-01-01

    Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods
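
    A one-dimensional toy version of the convolution described above, assuming an exponential dose-spread kernel and a flat primary fluence; normalising the kernel to unit integral is the simple energy-conservation check the record alludes to. All shapes and numbers are illustrative assumptions.

      import numpy as np

      x = np.linspace(-10.0, 10.0, 401)                  # position [cm]
      dx = x[1] - x[0]

      fluence = np.where(np.abs(x) <= 3.0, 1.0, 0.0)     # flat 6 cm primary beam
      kernel = np.exp(-np.abs(x) / 1.0)                  # dose-spread kernel (assumed)
      kernel /= kernel.sum() * dx                        # unit integral: energy conserved

      # Dose = primary fluence convolved with the dose-spread kernel
      dose = np.convolve(fluence, kernel, mode="same") * dx
      print(f"integral of fluence: {fluence.sum()*dx:.3f}, of dose: {dose.sum()*dx:.3f}")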

  18. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    International Nuclear Information System (INIS)

    Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric

    2010-01-01

    Babcock & Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% ²³⁵U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  19. Consistent Estimation of Pricing Kernels from Noisy Price Data

    OpenAIRE

    Vladislav Kargin

    2003-01-01

    If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.

  20. Quantum logic in dagger kernel categories

    NARCIS (Netherlands)

    Heunen, C.; Jacobs, B.P.F.

    2009-01-01

    This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial

  1. Quantum logic in dagger kernel categories

    NARCIS (Netherlands)

    Heunen, C.; Jacobs, B.P.F.; Coecke, B.; Panangaden, P.; Selinger, P.

    2011-01-01

    This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial

  2. Symbol recognition with kernel density matching.

    Science.gov (United States)

    Zhang, Wan; Wenyin, Liu; Zhang, Kun

    2006-12-01

    We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
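
    A small sketch of the similarity measure described: each symbol is a 2-D point set, represented by a Gaussian kernel density estimate, and similarity is scored with a grid-based Kullback-Leibler divergence. The point sets, grid and bandwidths are illustrative assumptions; the orientation search is not shown.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      sym_a = rng.normal([0.0, 0.0], 0.3, size=(400, 2))     # reference symbol points
      sym_b = rng.normal([0.1, 0.0], 0.3, size=(400, 2))     # similar symbol
      sym_c = rng.normal([1.5, 1.5], 0.3, size=(400, 2))     # different symbol

      xx, yy = np.meshgrid(np.linspace(-3, 3, 80), np.linspace(-3, 3, 80))
      grid = np.vstack([xx.ravel(), yy.ravel()])
      cell = (xx[0, 1] - xx[0, 0]) * (yy[1, 0] - yy[0, 0])

      def kl(points_p, points_q, eps=1e-12):
          """Approximate KL divergence between two 2-D kernel density estimates."""
          p = gaussian_kde(points_p.T)(grid) + eps
          q = gaussian_kde(points_q.T)(grid) + eps
          p, q = p / (p.sum() * cell), q / (q.sum() * cell)
          return np.sum(p * np.log(p / q)) * cell

      print(kl(sym_a, sym_b), kl(sym_a, sym_c))   # small vs. large divergence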

  3. Reproducing kernel Hilbert spaces of Gaussian priors

    NARCIS (Netherlands)

    Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.

    2008-01-01

    We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described

  4. A synthesis of empirical plant dispersal kernels

    Czech Academy of Sciences Publication Activity Database

    Bullock, J. M.; González, L. M.; Tamme, R.; Götzenberger, Lars; White, S. M.; Pärtel, M.; Hooftman, D. A. P.

    2017-01-01

    Roč. 105, č. 1 (2017), s. 6-19 ISSN 0022-0477 Institutional support: RVO:67985939 Keywords : dispersal kernel * dispersal mode * probability density function Subject RIV: EH - Ecology, Behaviour OBOR OECD: Ecology Impact factor: 5.813, year: 2016

  5. Analytic continuation of weighted Bergman kernels

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2010-01-01

    Roč. 94, č. 6 (2010), s. 622-650 ISSN 0021-7824 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * analytic continuation * Toeplitz operator Subject RIV: BA - General Mathematics Impact factor: 1.450, year: 2010 http://www.sciencedirect.com/science/article/pii/S0021782410000942

  6. On convergence of kernel learning estimators

    NARCIS (Netherlands)

    Norkin, V.I.; Keyzer, M.A.

    2009-01-01

    The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKHS). The objective (risk) functional depends on functions from this RKHS and takes the form of a mathematical expectation (integral) of a nonnegative integrand (loss function) over a probability

  7. Analytic properties of the Virasoro modular kernel

    Energy Technology Data Exchange (ETDEWEB)

    Nemkov, Nikita [Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Institute for Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); National University of Science and Technology MISIS, The Laboratory of Superconducting metamaterials, Moscow (Russian Federation)

    2017-06-15

    On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block. (orig.)

  8. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...

  9. Kernel Temporal Differences for Neural Decoding

    Science.gov (United States)

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504

  10. Scattering kernels and cross sections working group

    International Nuclear Information System (INIS)

    Russell, G.; MacFarlane, B.; Brun, T.

    1998-01-01

    Topics addressed by this working group are: (1) immediate needs of the cold-moderator community and how to fill them; (2) synthetic scattering kernels; (3) very simple synthetic scattering functions; (4) measurements of interest; and (5) general issues. Brief summaries are given for each of these topics

  11. Enhanced gluten properties in soft kernel durum wheat

    Science.gov (United States)

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  12. Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...

    African Journals Online (AJOL)

    Estimated errors of ±0.18 and ±0.2 are envisaged when applying the models for predicting palm kernel and sesame oil colours, respectively. Keywords: Palm kernel, Sesame, Palm kernel, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38 ...

  13. Stable Kernel Representations as Nonlinear Left Coprime Factorizations

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel

  14. 7 CFR 981.60 - Determination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  15. 21 CFR 176.350 - Tamarind seed kernel powder.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  16. Heat kernel analysis for Bessel operators on symmetric cones

    DEFF Research Database (Denmark)

    Möllers, Jan

    2014-01-01

    . The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...

  17. A Fast and Simple Graph Kernel for RDF

    NARCIS (Netherlands)

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster

  18. 7 CFR 981.61 - Redetermination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  19. Prediction of protein subcellular localization using support vector machine with the choice of proper kernel

    Directory of Open Access Journals (Sweden)

    Al Mehedi Hasan

    2017-07-01

    subcellular localization prediction to find out which kernel is the best for SVM. We have evaluated our system on a combined dataset containing 5447 single-localized proteins (originally published as part of the Höglund dataset) and 3056 multi-localized proteins (originally published as part of the DBMLoc set). This dataset was used by Briesemeister et al. in their extensive comparison of multi-localization prediction systems. The experimental results indicate that the system based on SVM with the Laplace kernel, termed LKLoc, not only achieves a higher accuracy than the systems using other kernels but also shows significantly better results than those obtained from other top systems (MDLoc, BNCs, YLoc+). The source code of this prediction system is available upon request.
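
    A hedged sketch of the kind of kernel comparison the record reports, using scikit-learn's built-in RBF and Laplacian kernels with a generic synthetic data set; the actual study used protein localization data and additional kernels, which are not reproduced here.

      from sklearn.datasets import make_classification
      from sklearn.metrics.pairwise import laplacian_kernel, rbf_kernel
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      # Toy stand-in for protein feature vectors (the Höglund/DBMLoc data are not used).
      X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                                 random_state=0)

      for name, kern in [("RBF", rbf_kernel), ("Laplace", laplacian_kernel)]:
          clf = SVC(kernel=kern, C=1.0)          # callable kernel: K = kern(X, Y)
          scores = cross_val_score(clf, X, y, cv=5)
          print(f"{name:8s} mean CV accuracy: {scores.mean():.3f}")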

  20. Learning a peptide-protein binding affinity predictor with kernel ridge regression

    Science.gov (United States)

    2013-01-01

    Background The cellular function of a vast majority of proteins is performed through physical interactions with other biomolecules, which, most of the time, are other proteins. Peptides represent templates of choice for mimicking a secondary structure in order to modulate protein-protein interaction. They are thus an interesting class of therapeutics since they also display strong activity, high selectivity, low toxicity and few drug-drug interactions. Furthermore, predicting peptides that would bind to specific MHC alleles would be of tremendous benefit to improve vaccine-based therapy and possibly generate antibodies with greater affinity. Modern computational methods have the potential to accelerate and lower the cost of drug and vaccine discovery by selecting potential compounds for testing in silico prior to biological validation. Results We propose a specialized string kernel for small bio-molecules, peptides and pseudo-sequences of binding interfaces. The kernel incorporates physico-chemical properties of amino acids and elegantly generalizes eight kernels, comprising the Oligo, the Weighted Degree, the Blended Spectrum, and the Radial Basis Function. We provide a low complexity dynamic programming algorithm for the exact computation of the kernel and a linear time algorithm for its approximation. Combined with kernel ridge regression and SupCK, a novel binding pocket kernel, the proposed kernel yields biologically relevant and good prediction accuracy on the PepX database. For the first time, a machine learning predictor is capable of predicting the binding affinity of any peptide to any protein with reasonable accuracy. The method was also applied to both single-target and pan-specific Major Histocompatibility Complex class II benchmark datasets and three Quantitative Structure Affinity Model benchmark datasets. Conclusion On all benchmarks, our method significantly (p-value ≤ 0.057) outperforms the current state-of-the-art methods at predicting
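
    A minimal kernel ridge regression sketch in the spirit of the method described; the specialized string kernel and binding-pocket kernel are replaced here by a generic Gaussian kernel, and the data are synthetic stand-ins for peptide/protein features.

      import numpy as np

      def rbf_kernel(A, B, gamma=0.5):
          """Gaussian kernel; the paper's string kernel is not reproduced."""
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def kernel_ridge_fit(X, y, lam=1.0, gamma=0.5):
          """Dual solution alpha = (K + lam*I)^{-1} y."""
          K = rbf_kernel(X, X, gamma)
          return np.linalg.solve(K + lam * np.eye(len(X)), y)

      def kernel_ridge_predict(X_train, alpha, X_new, gamma=0.5):
          return rbf_kernel(X_new, X_train, gamma) @ alpha

      rng = np.random.default_rng(1)
      X = rng.uniform(-3, 3, size=(80, 1))
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)     # toy "binding affinity"
      alpha = kernel_ridge_fit(X, y)
      print(kernel_ridge_predict(X, alpha, np.array([[0.5]])))   # roughly sin(0.5)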

  1. Quasi-Dual-Packed-Kerneled Au49 (2,4-DMBT)27 Nanoclusters and the Influence of Kernel Packing on the Electrochemical Gap.

    Science.gov (United States)

    Liao, Lingwen; Zhuang, Shengli; Wang, Pu; Xu, Yanan; Yan, Nan; Dong, Hongwei; Wang, Chengming; Zhao, Yan; Xia, Nan; Li, Jin; Deng, Haiteng; Pei, Yong; Tian, Shi-Kai; Wu, Zhikun

    2017-10-02

    Although face-centered cubic (fcc), body-centered cubic (bcc), hexagonal close-packed (hcp), and other structured gold nanoclusters have been reported, it was unclear whether gold nanoclusters with mix-packed (fcc and non-fcc) kernels exist, and the correlation between kernel packing and the properties of gold nanoclusters is unknown. A Au49(2,4-DMBT)27 nanocluster with a shell electron count of 22 has now been synthesized and structurally resolved by single-crystal X-ray crystallography, which revealed that Au49(2,4-DMBT)27 contains a unique Au34 kernel consisting of one quasi-fcc-structured Au21 unit and one non-fcc-structured Au13 unit (where 2,4-DMBTH = 2,4-dimethylbenzenethiol). Further experiments revealed that the kernel packing greatly influences the electrochemical gap (EG) and that the fcc structure has a larger EG than the investigated non-fcc structure. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba

  3. Landslide Susceptibility Mapping Based on Particle Swarm Optimization of Multiple Kernel Relevance Vector Machines: Case of a Low Hill Area in Sichuan Province, China

    Directory of Open Access Journals (Sweden)

    Yongliang Lin

    2016-10-01

    Full Text Available In this paper, we propose a multiple kernel relevance vector machine (RVM) method based on the adaptive cloud particle swarm optimization (PSO) algorithm to map landslide susceptibility in the low hill area of Sichuan Province, China. In the multi-kernel structure, the kernel selection problem can be solved by adjusting the kernel weights, which determine each single kernel's contribution to the final kernel mapping. The weights and parameters of the multi-kernel function were optimized using the PSO algorithm. In addition, the convergence speed of the PSO algorithm was increased using cloud theory. To ensure the stability of the prediction model, the result of a five-fold cross-validation method was used as the fitness of the PSO algorithm. To verify the results, receiver operating characteristic (ROC) curves and landslide dot density (LDD) were used. The results show that the model that used a heterogeneous kernel (a combination of two different kernel functions) had a larger area under the ROC curve (0.7616) and a lower prediction error ratio (0.28%) than did the other types of kernel models employed in this study. In addition, both the sum of the two high susceptibility zone LDDs (6.71/100 km2) and the sum of the two low susceptibility zone LDDs (0.82/100 km2) demonstrated that the landslide susceptibility map based on the heterogeneous kernel model was closest to the historical landslide distribution. In conclusion, the results obtained in this study can provide very useful information for disaster prevention and land-use planning in the study area.

  4. Kernel based orthogonalization for change detection in hyperspectral images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via...... analysis all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and unlike kernel PCA, kernel MNF...

  5. A laser optical method for detecting corn kernel defects

    Energy Technology Data Exchange (ETDEWEB)

    Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.

    1984-01-01

    An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy; while surface-split kernels were detected with about 80% accuracy. (author)

  6. Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws

    Directory of Open Access Journals (Sweden)

    Mohammed D. ABDULMALIK

    2008-06-01

    Full Text Available Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements in the 64-bit edition of Windows Vista. We also point out some weak areas (flaws) that can be attacked maliciously, leading to compromise of the kernel.

  7. Difference between standard and quasi-conformal BFKL kernels

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Papa, A.

    2012-01-01

    As it was recently shown, the colour singlet BFKL kernel, taken in Möbius representation in the space of impact parameters, can be written in quasi-conformal shape, which is unbelievably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculated the difference between standard and quasi-conformal BFKL kernels in momentum space and discovered that it is rather simple. Therefore we come to the conclusion that the simplicity of the quasi-conformal kernel is caused mainly by using the impact parameter space.

  8. Combined Kernel-Based BDT-SMO Classification of Hyperspectral Fused Images

    Directory of Open Access Journals (Sweden)

    Fenghua Huang

    2014-01-01

    Full Text Available To solve the poor generalization and flexibility problems that single kernel SVM classifiers have when classifying combined spectral and spatial features, this paper proposes a solution to improve the classification accuracy and efficiency of hyperspectral fused images: (1) different radial basis kernel functions (RBFs) are employed for spectral and textural features, and a new combined radial basis kernel function (CRBF) is proposed by combining them in a weighted manner; (2) the binary decision tree-based multiclass SMO (BDT-SMO) is used in the classification of hyperspectral fused images; (3) experiments are carried out in which the single radial basis function (SRBF)-based BDT-SMO classifier and the CRBF-based BDT-SMO classifier are used, respectively, to classify the land usages of hyperspectral fused images, and genetic algorithms (GA) are used to optimize the kernel parameters of the classifiers. The results show that, compared with SRBF, CRBF-based BDT-SMO classifiers display greater classification accuracy and efficiency.
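
    A sketch of the weighted kernel combination idea (not the BDT-SMO classifier itself): two RBF kernels, one per feature group, are mixed with a convex weight and passed to an SVM as a callable kernel. The feature split, gammas and weight below are assumptions rather than the paper's GA-optimised values.

      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.svm import SVC

      N_SPEC = 30     # first 30 columns treated as "spectral" features (assumed layout)

      def combined_rbf(X, Y, w=0.6, gamma_spec=0.01, gamma_text=0.1):
          """Weighted sum of two RBF kernels; a convex combination stays positive definite."""
          k_spec = rbf_kernel(X[:, :N_SPEC], Y[:, :N_SPEC], gamma=gamma_spec)
          k_text = rbf_kernel(X[:, N_SPEC:], Y[:, N_SPEC:], gamma=gamma_text)
          return w * k_spec + (1.0 - w) * k_text

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 40))
      y = (X[:, 0] + X[:, 35] > 0).astype(int)        # toy labels
      clf = SVC(kernel=combined_rbf, C=1.0).fit(X, y)
      print(clf.score(X, y))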

  9. Kernel Tuning and Nonuniform Influence on Optical and Electrochemical Gaps of Bimetal Nanoclusters.

    Science.gov (United States)

    He, Lizhong; Yuan, Jinyun; Xia, Nan; Liao, Lingwen; Liu, Xu; Gan, Zibao; Wang, Chengming; Yang, Jinlong; Wu, Zhikun

    2018-03-14

    Fine tuning nanoparticles with atomic precision is exciting and challenging and is critical for tuning the properties, understanding the structure-property correlation and determining the practical applications of nanoparticles. Some ultrasmall thiolated metal nanoparticles (metal nanoclusters) have been shown to be precisely doped, and even the protecting staple metal atom could be precisely reduced. However, the precise addition or reduction of a kernel atom while the other metal atoms in the nanocluster remain the same had not been successful until now, to the best of our knowledge. Here, by carefully selecting a protecting ligand with adequate steric hindrance, we synthesized a novel nanocluster in which the kernel can be regarded as that formed by the addition of two silver atoms to both ends of the Pt@Ag12 icosahedral kernel of the Ag24Pt(SR)18 (SR: thiolate) nanocluster, as revealed by single-crystal X-ray crystallography. Interestingly, compared with the previously reported Ag24Pt(SR)18 nanocluster, the as-obtained novel bimetal nanocluster exhibits a similar absorption but a different electrochemical gap. One possible explanation for this result is that the kernel tuning does not essentially change the electronic structure, but obviously influences the charge on the Pt@Ag12 kernel, as demonstrated by natural population analysis, thus possibly resulting in the large electrochemical gap difference between the two nanoclusters. This work not only provides a novel strategy to tune metal nanoclusters but also reveals that the kernel change does not necessarily alter the optical and electrochemical gaps in a uniform manner, which has important implications for the structure-property correlation of nanoparticles.

  10. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    2016-08-26

    Aug 26, 2016 ... Department of Computer Science and Engineering, Srinivasa Ramanujan Institute of Technology, Anantapur 515701, India; Department of Computer Science and Engineering, Rajeev Gandhi Memorial College of Engineering and Technology, Nandyal 518501, India; Department of Computer Science and ...

  11. Detection of Fusarium in single wheat kernels using spectral Imaging

    NARCIS (Netherlands)

    Polder, G.; Heijden, van der G.W.A.M.; Waalwijk, C.; Young, I.T.

    2005-01-01

    Fusarium head blight (FHB) is a harmful fungal disease that occurs in small grains. Non-destructive detection of this disease is traditionally done using spectroscopy or image processing. In this paper the combination of these two in the form of spectral imaging is evaluated. Transmission spectral

  12. Single or paired increase of total alkalinity and hardness of water for cultivation of Nile tilapia juveniles, Oreochromis niloticus - doi: 10.4025/actascitechnol.v34i2.12003

    Directory of Open Access Journals (Sweden)

    Davi de Holanda Cavalcante

    2012-03-01

    Full Text Available The present work aimed at evaluating the effects of single or paired increases of the water's total alkalinity (TA) and total hardness (TH) on the growth performance of Nile tilapia juveniles and on culture water quality. Twenty-five 25-L outdoor polyethylene aquaria were used to hold the experimental fish (0.82 ± 0.06 g; 10 fish per aquarium) for 6 weeks. There were two conditions of TA (low or high) and of TH (moderate or high) in the culture water, obtained by the application of different salts (CaCO3, Na2CO3 and CaSO4) upon previously acidified water, all at the same rate. Water quality and growth performance variables were observed in each replicate. The acidification of the supply water with HCl resulted in significantly lower final body weight (p < 0.05). Except for Na2CO3, the growth performance of tilapia improved significantly after CaCO3 liming or CaSO4 application (p < 0.05), and no significant difference was detected between these last two fish groups (p > 0.05). It was concluded that beyond a minimum level of TA (≥ 20 mg L-1 CaCO3) and TH (≥ 20 mg L-1 CaCO3), it is also important that fish culture waters have a TH/TA ratio higher than 1.

  13. Comprehensive hard materials

    CERN Document Server

    2014-01-01

    Comprehensive Hard Materials deals with the production, uses and properties of the carbides, nitrides and borides of these metals and those of titanium, as well as tools of ceramics, the superhard boron nitrides and diamond and related compounds. Articles include the technologies of powder production (including their precursor materials), milling, granulation, cold and hot compaction, sintering, hot isostatic pressing, hot-pressing, injection moulding, as well as on the coating technologies for refractory metals, hard metals and hard materials. The characterization, testing, quality assurance and applications are also covered. Comprehensive Hard Materials provides meaningful insights on materials at the leading edge of technology. It aids continued research and development of these materials and as such it is a critical information resource to academics and industry professionals facing the technological challenges of the future. Hard materials operate at the leading edge of technology, and continued res...

  14. Analytic scattering kernels for neutron thermalization studies

    International Nuclear Information System (INIS)

    Sears, V.F.

    1990-01-01

    Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H2/D2 mixtures. The total scattering cross sections are also calculated and compared with available experimental results

  15. Quantized kernel least mean square algorithm.

    Science.gov (United States)

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize, and hence compress, the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, based on a simple online vector quantization method. An analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
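
    A compact sketch of the quantization idea described above: if a new input lies within a quantization radius of an existing centre, the "redundant" sample only updates that centre's coefficient; otherwise the network grows. Step size, kernel width and quantization radius below are illustrative choices, not values from the paper.

      import numpy as np

      class QKLMS:
          """Quantized kernel least mean square (a minimal sketch of the idea)."""

          def __init__(self, eta=0.5, sigma=1.0, eps=0.3):
              self.eta, self.sigma, self.eps = eta, sigma, eps   # step, width, radius
              self.centers, self.alpha = [], []

          def _k(self, x, c):
              return np.exp(-np.sum((x - c) ** 2) / (2 * self.sigma ** 2))

          def predict(self, x):
              return sum(a * self._k(x, c) for a, c in zip(self.alpha, self.centers))

          def update(self, x, d):
              e = d - self.predict(x)                     # a-priori error
              if not self.centers:
                  self.centers.append(x); self.alpha.append(self.eta * e); return e
              dists = [np.linalg.norm(x - c) for c in self.centers]
              j = int(np.argmin(dists))
              if dists[j] <= self.eps:                    # "redundant" input: merge
                  self.alpha[j] += self.eta * e           # update the closest centre
              else:                                       # novel input: grow network
                  self.centers.append(x); self.alpha.append(self.eta * e)
              return e

      # Toy use: online identification of a static nonlinearity
      rng = np.random.default_rng(0)
      f = QKLMS()
      for _ in range(500):
          x = rng.uniform(-2, 2, size=1)
          f.update(x, np.sin(2 * x[0]))
      print(len(f.centers), f.predict(np.array([0.5])))   # compact network, ~sin(1)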

  16. Kernel-based tests for joint independence

    DEFF Research Database (Denmark)

    Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard

    2018-01-01

    if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test......We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed...... the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only...
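
    A sketch of the kernel independence statistic in the classical two-variable case (d = 2), i.e. the biased V-statistic estimate of HSIC computed from centred Gram matrices; the general d-variable dHSIC estimator and the permutation/bootstrap tests described in the record are not shown, and the Gaussian kernels and bandwidths are illustrative assumptions.

      import numpy as np

      def gram(X, sigma):
          """Gaussian Gram matrix for samples in the rows of X."""
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * sigma ** 2))

      def hsic_biased(X, Y, sigma_x=1.0, sigma_y=1.0):
          """Biased V-statistic estimate of HSIC (the d = 2 case of dHSIC)."""
          n = X.shape[0]
          K, L = gram(X, sigma_x), gram(Y, sigma_y)
          H = np.eye(n) - np.ones((n, n)) / n
          return np.trace(K @ H @ L @ H) / n ** 2

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 1))
      print(hsic_biased(X, rng.normal(size=(200, 1))))              # independent: near 0
      print(hsic_biased(X, X + 0.1 * rng.normal(size=(200, 1))))    # dependent: larger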

  17. Wilson Dslash Kernel From Lattice QCD Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Balint [Jefferson Lab, Newport News, VA; Smelyanskiy, Mikhail [Parallel Computing Lab, Intel Corporation, California, USA; Kalamkar, Dhiraj D. [Parallel Computing Lab, Intel Corporation, India; Vaidyanathan, Karthikeyan [Parallel Computing Lab, Intel Corporation, India

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we will detail our work in optimizing the Wilson-Dslash kernels for Intel Xeon Phi, however, as we will show the technique gives excellent performance on regular Xeon Architecture as well.

  18. A Kernel for Protein Secondary Structure Prediction

    OpenAIRE

    Guermeur, Yann; Lifchitz, Alain; Vert, Régis

    2004-01-01

    http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=10338&mode=toc; International audience; Multi-class support vector machines have already proved efficient in protein secondary structure prediction as ensemble methods, to combine the outputs of sets of classifiers based on different principles. In this chapter, their implementation as basic prediction methods, processing the primary structure or the profile of multiple alignments, is investigated. A kernel devoted to the task is in...

  19. Scalar contribution to the BFKL kernel

    International Nuclear Information System (INIS)

    Gerasimov, R. E.; Fadin, V. S.

    2010-01-01

    The contribution of scalar particles to the kernel of the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation is calculated. A large cancellation between the virtual and real parts of this contribution, analogous to the cancellation in the quark contribution in QCD, is observed. The reason for this cancellation is discovered; it has a common nature for particles with any spin. Understanding this reason permits one to obtain the total contribution without the complicated calculations that are necessary for finding the separate pieces.

  20. Weighted Bergman Kernels for Logarithmic Weights

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2010-01-01

    Roč. 6, č. 3 (2010), s. 781-813 ISSN 1558-8599 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * Toeplitz operator * logarithmic weight * pseudodifferential operator Subject RIV: BA - General Mathematics Impact factor: 0.462, year: 2010 http://www.intlpress.com/site/pub/pages/journals/items/pamq/content/vols/0006/0003/a008/

  1. Heat kernels and zeta functions on fractals

    International Nuclear Information System (INIS)

    Dunne, Gerald V

    2012-01-01

    On fractals, spectral functions such as heat kernels and zeta functions exhibit novel features, very different from their behaviour on regular smooth manifolds, and these can have important physical consequences for both classical and quantum physics in systems having fractal properties. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’. (paper)

  2. Exploiting graph kernels for high performance biomedical relation extraction.

    Science.gov (United States)

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task specific heuristic or rules. In comparison, the state of the art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule based system employing task specific post processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with APG kernel that attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM

  3. Kernel methods and flexible inference for complex stochastic dynamics

    Science.gov (United States)

    Capobianco, Enrico

    2008-07-01

    Approximation theory suggests that series expansions and projections represent standard tools for random process applications from both numerical and statistical standpoints. Such instruments emphasize the role of both sparsity and smoothness for compression purposes, the decorrelation power achieved in the expansion coefficients space compared to the signal space, and the reproducing kernel property when some special conditions are met. We consider these three aspects central to the discussion in this paper, and attempt to analyze the characteristics of some known approximation instruments employed in a complex application domain such as financial market time series. Volatility models are often built ad hoc, parametrically and through very sophisticated methodologies. But they can hardly deal with stochastic processes with regard to non-Gaussianity, covariance non-stationarity or complex dependence without paying a big price in terms of either model mis-specification or computational efficiency. It is thus a good idea to look at other more flexible inference tools; hence the strategy of combining greedy approximation and space dimensionality reduction techniques, which are less dependent on distributional assumptions and more targeted to achieve computationally efficient performances. Advantages and limitations of their use will be evaluated by looking at algorithmic and model building strategies, and by reporting statistical diagnostics.

  4. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, due to its subjective nature, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide a much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85 %. The shape descriptors themselves were not specific enough to distinguish individual kernels.
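
    The pipeline in this record (a few colour descriptors per kernel plus linear discriminant analysis) can be sketched in a few lines. The sketch below is illustrative only and assumes scikit-learn and matplotlib are available; the image data, background masking and the exact RGBH descriptor set used by the authors are replaced by placeholders.

```python
# Illustrative sketch: mean RGB + hue features per kernel image, classified with
# linear discriminant analysis. Image loading and masking are placeholders.
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def rgbh_features(rgb_image):
    """Mean R, G, B and hue over the pixels of one kernel image (values in [0, 1])."""
    pixels = rgb_image.reshape(-1, 3)
    hue = rgb_to_hsv(pixels)[:, 0]
    return np.concatenate([pixels.mean(axis=0), [hue.mean()]])

# Placeholder data: 40 synthetic 32x32 RGB kernel images, labels 0 = healthy, 1 = damaged.
rng = np.random.default_rng(0)
images = rng.uniform(size=(40, 32, 32, 3))
labels = rng.integers(0, 2, size=40)

X = np.vstack([rgbh_features(img) for img in images])
lda = LinearDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(lda, X, labels, cv=5).mean())
```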

  5. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental methods to implement a kernel version of the incremental methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
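
    As background to the record above, the basic (batch, non-incremental) nonlinear projection trick can be sketched as an eigendecomposition of the Gram matrix; the incremental bookkeeping that defines INPT is not reproduced here. The RBF kernel, tolerance and synthetic data are assumptions.

```python
# A simplified batch sketch of the basic nonlinear projection trick: explicit
# coordinates in the kernel-induced space from an eigendecomposition of the
# Gram matrix. The incremental (INPT) update from the paper is not shown.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def npt_fit(X, gamma=0.5, tol=1e-10):
    K = rbf_kernel(X, gamma=gamma)                # n x n Gram matrix
    w, U = np.linalg.eigh(K)                      # K = U diag(w) U^T
    keep = w > tol
    w, U = w[keep], U[:, keep]
    coords = U * np.sqrt(w)                       # training coordinates, n x r
    proj = U / np.sqrt(w)                         # maps kernel vectors to coordinates
    return coords, proj

def npt_transform(X_train, X_new, proj, gamma=0.5):
    k = rbf_kernel(X_new, X_train, gamma=gamma)   # kernel values against training set
    return k @ proj                               # coordinates of the new samples

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
coords, proj = npt_fit(X)
print(npt_transform(X, rng.normal(size=(3, 4)), proj).shape)  # (3, r)
```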

  6. Sensitivity kernels for viscoelastic loading based on adjoint methods

    Science.gov (United States)

    Al-Attar, David; Tromp, Jeroen

    2014-01-01

    Observations of glacial isostatic adjustment (GIA) allow for inferences to be made about mantle viscosity, ice sheet history and other related parameters. Typically, this inverse problem can be formulated as minimizing the misfit between the given observations and a corresponding set of synthetic data. When the number of parameters is large, solution of such optimization problems can be computationally challenging. A practical, albeit non-ideal, solution is to use gradient-based optimization. Although the gradient of the misfit required in such methods could be calculated approximately using finite differences, the necessary computation time grows linearly with the number of model parameters, and so this is often infeasible. A far better approach is to apply the `adjoint method', which allows the exact gradient to be calculated from a single solution of the forward problem, along with one solution of the associated adjoint problem. As a first step towards applying the adjoint method to the GIA inverse problem, we consider its application to a simpler viscoelastic loading problem in which gravitationally self-consistent ocean loading is neglected. The earth model considered is non-rotating, self-gravitating, compressible, hydrostatically pre-stressed, laterally heterogeneous and possesses a Maxwell solid rheology. We determine adjoint equations and Fréchet kernels for this problem based on a Lagrange multiplier method. Given an objective functional J defined in terms of the surface deformation fields, we show that its first-order perturbation can be written $\delta J = \int_{M_S} K_{\eta}\,\delta\ln\eta\,\mathrm{d}V + \int_{t_0}^{t_1}\!\int_{\partial M} K_{\dot{\sigma}}\,\delta\dot{\sigma}\,\mathrm{d}S\,\mathrm{d}t$, where $\delta\ln\eta = \delta\eta/\eta$ denotes relative viscosity variations in solid regions $M_S$, $\mathrm{d}V$ is the volume element, $\delta\dot{\sigma}$ is the perturbation to the time derivative of the surface load, which is defined on the earth model's surface $\partial M$ for times $[t_0, t_1]$, and $\mathrm{d}S$ is the surface element on $\partial M$. The `viscosity

  7. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    Science.gov (United States)

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
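
    The record's key computational idea, replacing a full nuisance-effect kernel matrix by a low-rank factor, can be illustrated with a generic Nyström approximation. This is illustrative only: the exact low-rank construction used by fastKM may differ, and the kernel choice, landmark count and data below are assumptions.

```python
# Illustration only: a Nystrom-style low-rank approximation of a kernel matrix,
# the generic idea behind speeding up multikernel variance-component models.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def nystrom(X, m=50, gamma=0.1, jitter=1e-8, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)     # m landmark samples
    K_nm = rbf_kernel(X, X[idx], gamma=gamma)           # n x m cross kernel
    K_mm = rbf_kernel(X[idx], X[idx], gamma=gamma)      # m x m landmark kernel
    w, U = np.linalg.eigh(K_mm + jitter * np.eye(m))
    return K_nm @ U / np.sqrt(w)                        # n x m factor L, with K ~ L L^T

X = np.random.default_rng(2).normal(size=(500, 20))
L = nystrom(X)
K_approx = L @ L.T                                      # low-rank surrogate for the full kernel
print(K_approx.shape)
```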

  8. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction.

    Science.gov (United States)

    Bandeira E Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose

    2017-06-07

    Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel, the Genomic Best Linear Unbiased Predictor (GBLUP, GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved for GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. Copyright © 2017 Bandeira e Sousa et al.
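
    A hedged sketch of how the two kernels compared above (GB and GK) can be built from a marker matrix is given below; the scaling of the genomic relationship matrix and the Gaussian bandwidth heuristic are assumptions, not the exact formulas used in the study.

```python
# Sketch: a GBLUP-style linear kernel and a Gaussian kernel built from a
# genotype matrix (individuals x SNPs). Scaling conventions and the bandwidth
# heuristic are assumptions.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)
M = rng.integers(0, 3, size=(100, 1000)).astype(float)   # genotypes coded 0/1/2
Z = M - M.mean(axis=0)                                    # center each marker

G_linear = Z @ Z.T / Z.shape[1]                           # GBLUP-style linear kernel (GB)

D = squareform(pdist(M, metric="sqeuclidean"))            # squared marker distances
h = np.median(D[np.triu_indices_from(D, k=1)])            # bandwidth heuristic (assumption)
G_gauss = np.exp(-D / h)                                  # Gaussian kernel (GK)

print(G_linear.shape, G_gauss.shape)
```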

  9. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction

    Directory of Open Access Journals (Sweden)

    Massaine Bandeira e Sousa

    2017-06-01

    Full Text Available Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel, the Genomic Best Linear Unbiased Predictor (GBLUP, GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved for GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied.

  10. Convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation and rate constants: Case study of the spin-boson model

    Science.gov (United States)

    Xu, Meng; Yan, Yaming; Liu, Yanying; Shi, Qiang

    2018-04-01

    The Nakajima-Zwanzig generalized master equation provides a formally exact framework to simulate quantum dynamics in condensed phases. Yet, the exact memory kernel is hard to obtain, and calculations based on perturbative expansions are often employed. By using the spin-boson model as an example, we assess the convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation. The exact memory kernels are calculated by combining the hierarchical equation of motion approach and the Dyson expansion of the exact memory kernel. High order expansions of the memory kernels are obtained by extending our previous work to calculate perturbative expansions of open system quantum dynamics [M. Xu et al., J. Chem. Phys. 146, 064102 (2017)]. It is found that the high order expansions do not necessarily converge in certain parameter regimes where the exact kernel shows a long memory time, especially in cases of a slow bath, weak system-bath coupling, and low temperature. The effectiveness of the Padé and Landau-Zener resummation approaches is tested, and the convergence of higher order rate constants beyond Fermi's golden rule is investigated.
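
    For orientation, the Nakajima-Zwanzig generalized master equation discussed above has the schematic form below, with the memory kernel expanded perturbatively in even orders of the system-bath coupling; sign and prefactor conventions vary between references, so this is a schematic, not the paper's exact formulation.

```latex
% Schematic Nakajima-Zwanzig generalized master equation for the reduced
% density matrix \rho_S(t); conventions for signs and the inhomogeneous
% (initial-correlation) term vary between references.
\frac{\mathrm{d}\rho_S(t)}{\mathrm{d}t}
  = -\mathrm{i}\,\mathcal{L}_{\mathrm{eff}}\,\rho_S(t)
  - \int_0^{t} \mathcal{K}(t-\tau)\,\rho_S(\tau)\,\mathrm{d}\tau,
\qquad
\mathcal{K}(t) \approx \mathcal{K}^{(2)}(t) + \mathcal{K}^{(4)}(t) + \cdots
```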

  11. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyperspectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability of the data. Therefore we propose to use kernel versions of these transformations, and find that the kernel maximum autocorrelation factor transform outperforms the linear methods as well as kernel principal components in producing interesting projections of the data.
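
    A minimal sketch of the kind of comparison described, assuming scikit-learn is available; the hyperspectral cube is replaced by random pixel spectra, and the maximum autocorrelation factor variants are omitted.

```python
# Sketch: linear PCA versus kernel PCA projections of hyperspectral pixel
# spectra. Data loading and the MAF/kernel-MAF transforms are not shown.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(4)
spectra = rng.normal(size=(2000, 120))        # pixels x spectral bands (placeholder)

lin_scores = PCA(n_components=3).fit_transform(spectra)
kpca_scores = KernelPCA(n_components=3, kernel="rbf", gamma=1e-3).fit_transform(spectra)
print(lin_scores.shape, kpca_scores.shape)
```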

  12. Antioxidant properties and UPLC-MS/MS profiling of phenolics in jacquemont's hazelnut kernels (Corylus jacquemontii) and its byproducts from western Himalaya.

    Science.gov (United States)

    Kumar, Ashish; Kumar, Pawan; Koundal, Rajkesh; Agnihotri, Vijai K

    2016-09-01

    A rapid and selective analytical method was developed to simultaneously quantify seven polyphenolic compounds (gallic acid, catechin, epicatechin, quercetin, kaempferol, syringic acid and p-coumaric acid). Fifteen phenolics of diverse groups were identified in 80 % ethanolic extracts of jacquemont's hazelnut (Corylus jacquemontii) kernels and its byproducts from the western Himalaya using ultra-performance liquid chromatography coupled with tandem mass spectrometry (UPLC-MS/MS). The developed analytical method showed excellent linearity, repeatability and accuracy. Total phenol concentrations were found to be 4446, 1199 and 105 mg gallic acid equivalent (GAE)/kg of dried extract for jacquemont's hazelnut skin, hard shell and kernels, respectively. The antioxidant potential of defatted, raw jacquemont's hazelnut skin, hard shell and kernel extracts, assessed by the 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt (ABTS) and 2,2'-diphenyl-1-picrylhydrazyl (DPPH) methods, increased in a dose-dependent manner. The IC50 values were 23.12, 51.32, 136.46 and 45.73, 63.65, 169.30 μg/ml for jacquemont's hazelnut skin, hard shell and kernels by the DPPH and ABTS assays, respectively. The high phenolic content of jacquemont's hazelnut skin contributed towards its free radical scavenging capacity.

  13. A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control

    Science.gov (United States)

    Li, Lin; Brockmeier, Austin J.; Choi, John S.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2014-01-01

    Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings including spike trains and local field potentials (LFPs) brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity on data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates the model integration of multiscale neural activity. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569
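
    The tensor-product construction itself is simple to illustrate: for paired multiscale observations, the joint Gram matrix is the elementwise product of the per-modality Gram matrices. The feature representations and kernel widths below are placeholders, not the paper's spike-train or LFP kernels.

```python
# Sketch of the tensor-product-kernel idea: a joint kernel over heterogeneous
# inputs is the product of the per-modality kernels evaluated on paired samples.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(5)
spikes = rng.normal(size=(60, 30))      # modality 1 features (placeholder for spike trains)
lfp    = rng.normal(size=(60, 10))      # modality 2 features (placeholder for LFPs)

K_spike = rbf_kernel(spikes, gamma=0.05)
K_lfp   = rbf_kernel(lfp, gamma=0.1)
K_joint = K_spike * K_lfp               # tensor-product kernel on the paired samples
print(K_joint.shape)
```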

  14. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Science.gov (United States)

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method, it is always symmetric, is positive, always provides 1.0 for self-similarity and it can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly plausible for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several magnitudes faster than SW or LAK in our tests. LZW-Kernel is implemented as a standalone C code and is a free open-source program distributed under GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
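
    To make the code-word idea concrete, the sketch below builds an LZW-style phrase dictionary for each sequence and scores similarity by shared phrases. This is an illustration only; the published LZW-Kernel is defined differently in detail, and the normalization used here is an assumption.

```python
# Illustration only: extract LZW-style code words from sequences and score
# similarity by shared code words, normalized so that k(s, s) = 1.
import numpy as np

def lzw_codewords(seq):
    """One-pass LZW dictionary construction; returns the set of phrases learned."""
    dictionary = {c for c in seq}
    phrases = set()
    w = ""
    for c in seq:
        wc = w + c
        if wc in dictionary or wc in phrases:
            w = wc
        else:
            phrases.add(wc)
            w = c
    return dictionary | phrases

def codeword_kernel(s1, s2):
    a, b = lzw_codewords(s1), lzw_codewords(s2)
    return len(a & b) / np.sqrt(len(a) * len(b))

print(codeword_kernel("MKVLAAGLLVL", "MKVLAAGILVL"))
print(codeword_kernel("MKVLAAGLLVL", "MKVLAAGLLVL"))  # self-similarity = 1.0
```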

  15. PENGARUH PERLAKUAN PENDAHULUAN SEBELUM PENGERINGAN TERHADAP INSTAN JAGUNG MUDA [Effects of Pre-treatments Prior to Drying on Young Corn Kernel Instant (YCKI)]

    Directory of Open Access Journals (Sweden)

    Marleni Limonu

    2008-12-01

    Full Text Available The objective of this research was to study the effects of pre-gelatinization and freezing processes on the physico-chemical characteristics of young corn kernel instant. The results showed that pre-gelatinization and slow freezing processes significantly affected the bulk density, rehydration capacity, hardness and cooking time of young corn kernel instant. The water sorption isotherm study showed that the product had a sigmoid curve, from which the shelf life of the product was calculated. The YCKI Waxy, YCKI Flint, and YCKI Sweet products packed in alufo had shelf lives of 7.2, 12.1 and 13.8 months, respectively.

  16. Induced spherococcoid hard wheat

    International Nuclear Information System (INIS)

    Yanev, Sh.

    1981-01-01

    A mutant - a spherococcoid line - has been obtained through irradiation of hard wheat seed with fast neutrons. It is distinguished by semispherical glumes and smaller grain; the plants have a low stem with erect leaves but shorter spikes and fewer spikelets than the initial cultivar. Good productive tillering and resistance to lodging contributed to a 23.5% higher yield. The line was superior to the standard and the initial cultivars by 14.2% in protein content, and by up to 22.8% in flour gluten. It has been successfully used in hybridization, producing high-yielding hard wheat lines resistant to lodging, with good technological and other indicators. This demonstrates the possibility of obtaining a spherococcoid mutant in tetraploid (hard) wheat outside the D-genome, as well as its suitability for hard wheat breeding to enhance protein content, resistance to lodging, etc. (author)

  17. Hard probes 2006 Asilomar

    CERN Multimedia

    2006-01-01

    "The second international conference on hard and electromagnetic probes of high-energy nuclear collisions was held June 9 to 16, 2006 at the Asilomar Conference grounds in Pacific Grove, California" (photo and 1/2 page)

  18. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    ... methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformations. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not yet described in the literature. The traditional methods only have two factors that are useful for segmentation, and none of them can be used to segment the two types of meat. The kernel based methods have many useful factors and are able to capture the subtle differences in the images. This is illustrated in Figure 1, and a comparison of the most useful factor of PCA and kernel based PCA, respectively, is shown in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat, and in general that factor is much more distinct compared to the traditional factor. After the orthogonal transformation a simple thresholding...

  19. Sina and Sinb genes in triticale do not determine grain hardness contrary to their orthologs Pina and Pinb in wheat.

    Science.gov (United States)

    Gasparis, Sebastian; Orczyk, Waclaw; Nadolska-Orczyk, Anna

    2013-11-26

    Secaloindoline a (Sina) and secaloindoline b (Sinb) genes of hexaploid triticale (x Triticosecale Wittmack) are orthologs of puroindoline a (Pina) and puroindoline b (Pinb) in hexaploid wheat (Triticum aestivum L.). It has already been proven that RNA interference (RNAi)-based silencing of Pina and Pinb genes significantly decreased the puroindoline a and puroindoline b proteins in wheat and essentially increased grain hardness (J Exp Bot 62:4025-4036, 2011). The function of Sina and Sinb in triticale was tested by means of RNAi silencing and compared to wheat. Novel Sina and Sinb alleles in wild-type plants of cv. Wanad were identified and their expression profiles characterized. Alignment with wheat Pina-D1a and Pinb-D1a alleles showed 95% and 93.3% homology with Sina and Sinb coding sequences. Twenty transgenic lines transformed with two hpRNA silencing cassettes directed to silence Sina or Sinb were obtained by the Agrobacterium-mediated method. A significant decrease of expression of both Sin genes in segregating progeny of tested T1 lines was observed independent of the silencing cassette used. The silencing was transmitted to the T4 kernel generation. The relative transcript level was reduced by up to 99% in T3 progeny with the mean for the sublines being around 90%. Silencing of the Sin genes resulted in a substantial decrease of secaloindoline a and secaloindoline b content. The identity of SIN peptides was confirmed by mass spectrometry. The hardness index, measured by the SKCS (Single Kernel Characterization System) method, ranged from 22 to 56 in silent lines and from 37 to 49 in the control, and the mean values were insignificantly lower in the silent ones, proving increased softness. Additionally, the mean total seed protein content of silenced lines was about 6% lower compared with control lines. Correlation coefficients between hardness and transcript level were weakly positive. We documented that RNAi-based silencing of Sin genes resulted in

  20. Hard coal; Steinkohle

    Energy Technology Data Exchange (ETDEWEB)

    Loo, Kai van de; Sitte, Andreas-Peter [Gesamtverband Steinkohle e.V., Herne (Germany)]

    2013-04-01

    The year 2012 saw growth in the consumption of hard coal at both the national and the international level. Worldwide, hard coal is still the number one energy source for power generation, which leads to an increasing demand for power plant coal. The conversion of hard coal into electricity also increased in this year. In contrast, the demand for coking coal and for coke from the steel industry continued to decline in line with market conditions. The increased use of coal for domestic power generation is due to the reduction of nuclear power, a relatively poor year for wind power, as well as reduced import prices and low CO2 prices. The latter two justify a significant price advantage for coal in comparison to the use of natural gas in power plants. This was mainly due to the price erosion of inexpensive US coal, which was partly displaced on its domestic market by the expansion of shale gas and therefore sought an outlet for sales in Europe. Domestic hard coal mining has continued the process of adaptation and phase-out as scheduled; two further hard coal mines were decommissioned in 2012. RAG Aktiengesellschaft (Herne, Germany), which operates hard coal mining in this country, has begun preparations for the activities after the end of mining.

  1. Ideal gas scattering kernel for energy dependent cross-sections

    International Nuclear Information System (INIS)

    Rothenstein, W.; Dagan, R.

    1998-01-01

    A third, and final, paper on the calculation of the joint kernel for neutron scattering by an ideal gas in thermal agitation is presented, when the scattering cross-section is energy dependent. The kernel is a function of the neutron energy after scattering, and of the cosine of the scattering angle, as in the case of the ideal gas kernel for a constant bound atom scattering cross-section. The final expression is suitable for numerical calculations

  2. Analysis and Implementation of Particle-to-Particle (P2P) Graphics Processor Unit (GPU) Kernel for Black-Box Adaptive Fast Multipole Method

    Science.gov (United States)

    2015-06-01

    ... implementation of the direct interaction, called the particle-to-particle (P2P) kernel, for a shared-memory single GPU device using the Compute Unified Device Architecture (CUDA). ... the GPU-defined P2P kernel we developed using CUDA. A brief outline of the rest of this work follows. ... The computing environment used for this work is a 64-node heterogeneous cluster consisting of 48 IBM dx360M4 nodes, each with one Intel Phi

  3. Embedded real-time operating system micro kernel design

    Science.gov (United States)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require a real-time character. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: a critical section process, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has a quick response while operating in an application system.

  4. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    Science.gov (United States)

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM, for its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention. But how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (Hadamard Kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard Kernel outperforms the classical kernels and the correlation kernel in terms of Area under the ROC Curve (AUC) values, where a number of real-world data sets are adopted to test the performance of different methods. Hadamard Kernel SVM is effective for breast cancer predictions, either in terms of prognosis or diagnosis. It may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.
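
    Independently of how the Hadamard kernel itself is constructed (it is not reproduced here), plugging a precomputed Gram matrix into an SVM and scoring by AUC follows the pattern below; the RBF kernel and synthetic expression data are stand-ins, assuming scikit-learn.

```python
# How a custom (precomputed) kernel plugs into an SVM and is scored by AUC.
# The RBF Gram matrix is only a stand-in for the paper's Hadamard kernel.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 500))                 # samples x genes (placeholder expression data)
y = rng.integers(0, 2, size=200)                # outcome labels (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
K_tr = rbf_kernel(X_tr, gamma=1e-3)             # train x train Gram matrix
K_te = rbf_kernel(X_te, X_tr, gamma=1e-3)       # test x train Gram matrix

clf = SVC(kernel="precomputed", probability=True).fit(K_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(K_te)[:, 1]))
```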

  5. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by we here give a simple method for finding optimal parameters in a regularized version of kernel MNF...... analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given....
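
    The grid-refinement idea can be sketched as follows; the model_snr function is only a placeholder for the regularized kernel MNF signal-to-noise criterion, which is not reproduced here.

```python
# Sketch of increasingly refined grid searches that maximize a model SNR over
# the kernel parameter (gamma) and the regularization parameter (mu).
import numpy as np

def model_snr(gamma, mu, data):
    # Placeholder criterion; in practice this would run the regularized kernel
    # MNF transform on `data` and return its signal-to-noise ratio.
    return -((np.log10(gamma) + 2) ** 2 + (np.log10(mu) + 4) ** 2)

def refine_search(data, gammas, mus, steps=3):
    for _ in range(steps):
        scores = [(model_snr(g, m, data), g, m) for g in gammas for m in mus]
        best, g_best, m_best = max(scores)
        gammas = np.geomspace(g_best / 3, g_best * 3, 5)   # zoom in around the optimum
        mus = np.geomspace(m_best / 3, m_best * 3, 5)
    return g_best, m_best, best

print(refine_search(None, np.geomspace(1e-4, 1e2, 7), np.geomspace(1e-8, 1.0, 9)))
```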

  6. Analysis of Advanced Fuel Kernel Technology

    International Nuclear Information System (INIS)

    Oh, Seung Chul; Jeong, Kyung Chai; Kim, Yeon Ku; Kim, Young Min; Kim, Woong Ki; Lee, Young Woo; Cho, Moon Sung

    2010-03-01

    The reference fuel for prismatic reactor concepts is based on the use of an LEU UCO TRISO fissile particle. This fuel form was selected in the early 1980s for large high-temperature gas-cooled reactor (HTGR) concepts using LEU, and the selection was reconfirmed for modular designs in the mid-1980s. Limited existing irradiation data on LEU UCO TRISO fuel indicate the need for a substantial improvement in performance with regard to in-pile gaseous fission product release. Existing accident testing data on LEU UCO TRISO fuel are extremely limited, but it is generally expected that performance would be similar to that of LEU UO2 TRISO fuel if performance under irradiation were successfully improved. Initial HTGR fuel technology was based on carbide fuel forms. In the early 1980s, as HTGR technology was transitioning from high-enriched uranium (HEU) fuel to LEU fuel, an initial effort focused on an LEU prismatic design for large HTGRs resulted in the selection of UCO kernels for the fissile particles and thorium oxide (ThO2) for the fertile particles. The primary reason for selection of the UCO kernel over UO2 was reduced CO pressure, allowing higher burnup for equivalent coating thicknesses and reduced potential for kernel migration, an important failure mechanism in earlier fuels. A subsequent assessment in the mid-1980s considering modular HTGR concepts again reached agreement on UCO for the fissile particle for a prismatic design. In the early 1990s, plant cost-reduction studies led to a decision to change the fertile material from thorium to natural uranium, primarily because of a lower long-term decay heat level for the natural uranium fissile particles. Ongoing economic optimization in combination with anticipated capabilities of the UCO particles resulted in a peak fissile particle burnup projection of 26% FIMA in steam cycle and gas turbine concepts

  7. Learning Rotation for Kernel Correlation Filter

    KAUST Repository

    Hamdi, Abdullah

    2017-08-11

    Kernel Correlation Filters have shown a very promising scheme for visual tracking in terms of speed and accuracy on several benchmarks. However, the approach suffers from problems that affect its performance, such as occlusion, rotation and scale change. This paper tries to tackle the problem of rotation by reformulating the optimization problem for learning the correlation filter. This modification (RKCF) includes learning a rotation filter that utilizes the circulant structure of HOG features to estimate rotation from one frame to another and enhance the detection of KCF. Hence it gains a boost in overall accuracy on many of the OTB50 test videos with minimal additional computation.

  8. Research of Performance Linux Kernel File Systems

    Directory of Open Access Journals (Sweden)

    Andrey Vladimirovich Ostroukh

    2015-10-01

    Full Text Available The article describes the most common Linux kernel file systems. The research was carried out on a personal computer, a typical workstation running GNU/Linux, whose characteristics are given in the article. The software necessary for measuring file system performance was installed on this machine. Based on the results, conclusions are drawn and recommendations are proposed for the use of the file systems, identifying the best ways to store data.

  9. Fixed kernel regression for voltammogram feature extraction

    International Nuclear Information System (INIS)

    Acevedo Rodriguez, F J; López-Sastre, R J; Gil-Jiménez, P; Maldonado Bascón, S; Ruiz-Reyes, N

    2009-01-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals

  10. Reciprocity relation for multichannel coupling kernels

    International Nuclear Information System (INIS)

    Cotanch, S.R.; Satchler, G.R.

    1981-01-01

    Assuming time-reversal invariance of the many-body Hamiltonian, it is proven that the kernels in a general coupled-channels formulation are symmetric, to within a specified spin-dependent phase, under the interchange of channel labels and coordinates. The theorem is valid for both Hermitian and suitably chosen non-Hermitian Hamiltonians which contain complex effective interactions. While of direct practical consequence for nuclear rearrangement reactions, the reciprocity relation is also appropriate for other areas of physics which involve coupled-channels analysis

  11. Wheat kernel dimensions: how do they contribute to kernel weight at ...

    Indian Academy of Sciences (India)

    2011-12-02

    Dec 2, 2011 ... yield components, is greatly influenced by kernel dimensions (KD), such as ... six linkage gaps, and it covered 3010.70 cM of the whole genome with an ... Ersoz E. et al. 2009 The genetic architecture of maize flowering.

  12. Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods

    DEFF Research Database (Denmark)

    Arenas-Garcia, J.; Petersen, K.; Camps-Valls, G.

    2013-01-01

    correlation analysis (CCA), and orthonormalized PLS (OPLS), as well as their nonlinear extensions derived by means of the theory of reproducing kernel Hilbert spaces (RKHSs). We also review their connections to other methods for classification and statistical dependence estimation and introduce some recent...

  13. Kernel learning at the first level of inference.

    Science.gov (United States)

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Sensory and instrumental texture assessment of roasted pistachio nut/kernel by partial least square (PLS) regression analysis: effect of roasting conditions.

    Science.gov (United States)

    Mohammadi Moghaddam, Toktam; Razavi, Seyed M A; Taghizadeh, Masoud; Sazgarnia, Ameneh

    2016-01-01

    Roasting is an important step in the processing of pistachio nuts. The effects of hot air roasting temperature (90, 120 and 150 °C), time (20, 35 and 50 min) and air velocity (0.5, 1.5 and 2.5 m/s) on the textural and sensory characteristics of pistachio nuts and kernels were investigated. The results showed that increasing the roasting temperature decreased the fracture force (82-25.54 N), instrumental hardness (82.76-37.59 N), apparent modulus of elasticity (47-21.22 N/s) and compressive energy (280.73-101.18 N.s), and increased the bitterness (1-2.5) and the hardness score (6-8.40) of pistachio kernels. Longer roasting times improved the flavor of the samples. The results of the consumer test showed that the roasted pistachio kernels have good acceptability for flavor (score 5.83-8.40), color (score 7.20-8.40) and hardness (score 6-8.40). Moreover, Partial Least Squares (PLS) analysis of the instrumental and sensory data provided important information on the correlation of objective and subjective properties. The univariate analysis showed that over 93.87 % of the variation in sensory hardness and almost 87 % of the variation in sensory acceptability could be explained by instrumental texture properties.

  15. The Kernel Estimation in Biosystems Engineering

    Directory of Open Access Journals (Sweden)

    Esperanza Ayuga Téllez

    2008-04-01

    Full Text Available In many fields of biosystems engineering, it is common to find works in which the statistical information analysed violates the basic hypotheses required by conventional forecasting methods. For those situations, it is necessary to find alternative methods that allow statistical analysis despite those violations. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are used instead of an "a priori" assumption about the global shape of the target function (e.g., linear or quadratic). In this paper a few basic decision rules are enunciated for the application of non-parametric estimation methods. These statistical rules are the first step towards building a user-method interface for the consistent application of kernel estimation by non-expert users. To reach this aim, univariate and multivariate estimation methods and density functions were analysed, as well as regression estimators. In some cases the models to be applied in different situations, based on simulations, were defined. Different biosystems engineering applications of kernel estimation are also analysed in this review.
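
    Two of the basic non-parametric tools discussed, kernel density estimation and kernel (Nadaraya-Watson) regression, can be illustrated on synthetic data; the bandwidths below are arbitrary choices, not recommendations.

```python
# Minimal examples of kernel density estimation and Nadaraya-Watson regression.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
x = rng.normal(loc=2.0, scale=0.5, size=300)
density = gaussian_kde(x)                       # kernel density estimate
print(density(np.array([1.5, 2.0, 2.5])))

def nadaraya_watson(x_train, y_train, x_query, h=0.3):
    # Gaussian kernel weights between query points and training points.
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

y = np.sin(x) + 0.1 * rng.normal(size=x.size)
print(nadaraya_watson(x, y, np.array([1.5, 2.0, 2.5])))
```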

  16. Aligning Biomolecular Networks Using Modular Graph Kernels

    Science.gov (United States)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

    Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offer a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
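
    A toy example of using a graph kernel as a scoring function for subnetworks is given below. It uses a simple node-label histogram kernel that ignores edges, which is far simpler than the kernels used in BiNA, but it shows how a kernel maps two graphs to a single similarity score; the label scheme is an assumption for illustration.

```python
# Illustration: score two labelled (sub)networks with a node-label histogram
# kernel, normalized to [0, 1]. Edge structure is ignored in this toy version.
import numpy as np
from collections import Counter

def label_histogram_kernel(graph_a, graph_b):
    """Graphs are dicts mapping node -> label (e.g., a protein family)."""
    ca, cb = Counter(graph_a.values()), Counter(graph_b.values())
    labels = sorted(set(ca) | set(cb))
    va = np.array([ca.get(l, 0) for l in labels], dtype=float)
    vb = np.array([cb.get(l, 0) for l in labels], dtype=float)
    return float(va @ vb) / (np.linalg.norm(va) * np.linalg.norm(vb))

g1 = {"p1": "kinase", "p2": "kinase", "p3": "phosphatase"}
g2 = {"q1": "kinase", "q2": "phosphatase", "q3": "phosphatase"}
print(label_histogram_kernel(g1, g2))
```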

  17. Formal truncations of connected kernel equations

    International Nuclear Information System (INIS)

    Dixon, R.M.

    1977-01-01

    The Connected Kernel Equations (CKE) of Alt, Grassberger and Sandhas (AGS); Kouri, Levin and Tobocman (KLT); and Bencze, Redish and Sloan (BRS) are compared against reaction theory criteria after formal channel space and/or operator truncations have been introduced. The Channel Coupling Class concept is used to study the structure of these CKE's. The related wave function formalism of Sandhas, of L'Huillier, Redish and Tandy and of Kouri, Krueger and Levin are also presented. New N-body connected kernel equations which are generalizations of the Lovelace three-body equations are derived. A method for systematically constructing fewer body models from the N-body BRS and generalized Lovelace (GL) equations is developed. The formally truncated AGS, BRS, KLT and GL equations are analyzed by employing the criteria of reciprocity and two-cluster unitarity. Reciprocity considerations suggest that formal truncations of BRS, KLT and GL equations can lead to reciprocity-violating results. This study suggests that atomic problems should employ three-cluster connected truncations and that the two-cluster connected truncations should be a useful starting point for nuclear systems

  18. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  19. Delimiting areas of endemism through kernel interpolation.

    Science.gov (United States)

    Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The new approach is based on estimating the overlap between the distributions of species through a kernel interpolation of the centroids of species distributions and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.

  20. Delimiting areas of endemism through kernel interpolation.

    Directory of Open Access Journals (Sweden)

    Ubirajara Oliveira

    Full Text Available We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The new approach is based on estimating the overlap between the distributions of species through a kernel interpolation of the centroids of species distributions and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.

  1. Hard scattering in γp interactions

    International Nuclear Information System (INIS)

    Ahmed, T.; Andreev, V.; Andrieu, B.

    1992-10-01

    We report on the investigation of the final state in interactions of quasi-real photons with protons. The data were taken with the H1 detector at the HERA ep collider. Evidence for hard interactions is seen in both single particle spectra and jet formation. The data can best be described by inclusion of resolved photon processes as predicted by QCD. (orig.)

  2. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    Science.gov (United States)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel for three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to a linear kernel function, and by up to 1% in comparison to a 3rd degree polynomial kernel function.
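
    The kernel comparison can be sketched as a cross-validated SVM run with linear, polynomial and RBF kernels; the polarimetric features, labels and hyperparameters below are placeholders, assuming scikit-learn.

```python
# Sketch: cross-validated SVM accuracy with linear, polynomial and RBF kernels.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = rng.normal(size=(600, 9))                 # e.g., H/A/alpha features from 3 dates (placeholder)
y = rng.integers(0, 5, size=600)              # crop class labels (placeholder)

for kernel, params in [("linear", {}), ("poly", {"degree": 3}), ("rbf", {"gamma": "scale"})]:
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=10, **params))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel:>6}: OA = {acc:.3f}")
```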

  3. Soft and hard pomerons

    International Nuclear Information System (INIS)

    Maor, Uri; Tel Aviv Univ.

    1995-09-01

    The role of s-channel unitarity screening corrections, calculated in the eikonal approximation, is investigated for soft Pomeron exchange responsible for elastic and diffractive hadron scattering in the high energy limit. We examine the differences between our results and those obtained from the supercritical Pomeron-Regge model with no such corrections. It is shown that screening saturation is attained at different scales for different channels. We then proceed to discuss the new HERA data on hard (PQCD) Pomeron diffractive channels and discuss the relationship between the soft and hard Pomerons and the relevance of our analysis to this problem. (author). 18 refs, 9 figs, 1 tab

  4. Hard exclusive QCD processes

    Energy Technology Data Exchange (ETDEWEB)

    Kugler, W.

    2007-01-15

    Hard exclusive processes in high energy electron proton scattering offer the opportunity to gain access to a new generation of parton distributions, the so-called generalized parton distributions (GPDs). These functions provide more detailed information about the structure of the nucleon than the usual PDFs obtained from DIS. In this work we present a detailed analysis of exclusive processes, especially of hard exclusive meson production. We investigated the influence of exclusively produced mesons on the semi-inclusive production of mesons at fixed-target experiments like HERMES. Further, we give a detailed analysis of higher order corrections (NLO) for the exclusive production of mesons over a very broad range of kinematics. (orig.)

  5. Hard-hat day

    CERN Multimedia

    2003-01-01

    CERN will be organizing a special information day on Friday, 27th June, designed to promote the wearing of hard hats and ensure that they are worn correctly. A new prevention campaign will also be launched.The event will take place in the hall of the Main Building from 11.30 a.m. to 2.00 p.m., when you will be able to come and try on various models of hard hat, including some of the very latest innovative designs, ask questions and pass on any comments and suggestions.

  6. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of a large-scale evolving feature model, and yet the details of its evolution are not known. We present here a classification of feature changes occurring in the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically

  7. Replacement Value of Palm Kernel Meal for Maize on Carcass ...

    African Journals Online (AJOL)

    This study was conducted to evaluate the effect of replacing maize with palm kernel meal on nutrient composition, fatty acid profile and sensory qualities of the meat of turkeys fed the dietary treatments. Six dietary treatments were formulated using palm kernel meal to replace maize at 0, 20, 40, 60, 80 and 100 percent.

  8. Effect of Palm Kernel Cake Replacement and Enzyme ...

    African Journals Online (AJOL)

    A feeding trial which lasted for twelve weeks was conducted to study the performance of finisher pigs fed five different levels of palm kernel cake replacement for maize (0%, 40%, 40%, 60%, 60%) in a maize-palm kernel cake based ration with or without enzyme supplementation. It was a completely randomized design ...

  9. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  10. Nonlinear Forecasting With Many Predictors Using Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter; Groenen, Patrick J.F.; Heij, Christiaan

    This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predi...
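
    A minimal sketch of kernel ridge regression in this spirit, assuming scikit-learn; the data, kernel and hyperparameters are placeholders.

```python
# Sketch: kernel ridge regression with many predictors mapped nonlinearly via
# an RBF kernel, with a ridge penalty controlling overfitting.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(9)
X = rng.normal(size=(300, 40))                                # many predictors
y = np.tanh(X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=300)   # nonlinear target

model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.05).fit(X[:250], y[:250])
print("out-of-sample R^2:", model.score(X[250:], y[250:]))
```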

  11. Commutators of Integral Operators with Variable Kernels on Hardy ...

    Indian Academy of Sciences (India)

    Commutators of Integral Operators with Variable Kernels on Hardy Spaces. Pu Zhang, Kai Zhao. Volume 115, Issue 4, November 2005, pp 399-410 ... Keywords: singular and fractional integrals; variable kernel; commutator; Hardy space.

  12. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently, only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. The discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of the sensitivity indices is also presented with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  13. Geodesic exponential kernels: When Curvature and Linearity Conflict

    DEFF Research Database (Denmark)

    Feragen, Aase; Lauze, François; Hauberg, Søren

    2015-01-01

    manifold, the geodesic Gaussian kernel is only positive definite if the Riemannian manifold is Euclidean. This implies that any attempt to design geodesic Gaussian kernels on curved Riemannian manifolds is futile. However, we show that for spaces with conditionally negative definite distances the geodesic...

  14. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-imag...

  15. Design and construction of palm kernel cracking and separation ...

    African Journals Online (AJOL)

    Design and construction of palm kernel cracking and separation machines. JO Nordiana, K ...

  16. Kernel Methods for Machine Learning with Life Science Applications

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie

    Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick straightforward extensions of classical linear algorithms are enabled as long as the data only appear a...

  17. Genetic relationship between plant growth, shoot and kernel sizes in ...

    African Journals Online (AJOL)

    Maize (Zea mays L.) ear vascular tissue transports nutrients that contribute to grain yield. To assess kernel heritabilities that govern ear development and plant growth, field studies were conducted to determine the combining abilities of parents that differed for kernel-size, grain-filling rates and shoot-size. Thirty two hybrids ...

  18. A relationship between Gel'fand-Levitan and Marchenko kernels

    International Nuclear Information System (INIS)

    Kirst, T.; Von Geramb, H.V.; Amos, K.A.

    1989-01-01

    An integral equation which relates the output kernels of the Gel'fand-Levitan and Marchenko inverse scattering equations is specified. Structural details of this integral equation are studied when the S-matrix is a rational function, and the output kernels are separable in terms of Bessel, Hankel and Jost solutions. 4 refs

  19. Boundary singularity of Poisson and harmonic Bergman kernels

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2015-01-01

    Vol. 429, No. 1 (2015), pp. 233-272. ISSN 0022-247X. R&D Projects: GA AV ČR IAA100190802. Institutional support: RVO:67985840. Keywords: harmonic Bergman kernel; Poisson kernel; pseudodifferential boundary operators. Subject RIV: BA - General Mathematics. Impact factor: 1.014, year: 2015. http://www.sciencedirect.com/science/article/pii/S0022247X15003170

  20. Oven-drying reduces ruminal starch degradation in maize kernels

    NARCIS (Netherlands)

    Ali, M.; Cone, J.W.; Hendriks, W.H.; Struik, P.C.

    2014-01-01

    The degradation of starch largely determines the feeding value of maize (Zea mays L.) for dairy cows. Normally, maize kernels are dried and ground before chemical analysis and determining degradation characteristics, whereas cows eat and digest fresh material. Drying the moist maize kernels

  1. Real time kernel performance monitoring with SystemTap

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    SystemTap is a dynamic method of monitoring and tracing the operation of a running Linux kernel. In this talk I will present a few practical use cases where SystemTap allowed me to turn otherwise complex userland monitoring tasks into simple kernel probes.

  2. Resolvent kernel for the Kohn Laplacian on Heisenberg groups

    Directory of Open Access Journals (Sweden)

    Neur Eddine Askour

    2002-07-01

    Full Text Available We present a formula that relates the Kohn Laplacian on Heisenberg groups and the magnetic Laplacian. Then we obtain the resolvent kernel for the Kohn Laplacian and find its spectral density. We conclude by obtaining the Green kernel for fractional powers of the Kohn Laplacian.

  3. Reproducing Kernels and Coherent States on Julia Sets

    Energy Technology Data Exchange (ETDEWEB)

    Thirulogasanthar, K., E-mail: santhar@cs.concordia.ca; Krzyzak, A. [Concordia University, Department of Computer Science and Software Engineering (Canada)], E-mail: krzyzak@cs.concordia.ca; Honnouvo, G. [Concordia University, Department of Mathematics and Statistics (Canada)], E-mail: g_honnouvo@yahoo.fr

    2007-11-15

    We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems.

  4. Reproducing Kernels and Coherent States on Julia Sets

    International Nuclear Information System (INIS)

    Thirulogasanthar, K.; Krzyzak, A.; Honnouvo, G.

    2007-01-01

    We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems

  5. A multi-scale kernel bundle for LDDMM

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard

    2011-01-01

    The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations...

  6. Comparison of Kernel Equating and Item Response Theory Equating Methods

    Science.gov (United States)

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  7. An analysis of 1-D smoothed particle hydrodynamics kernels

    International Nuclear Information System (INIS)

    Fulk, D.A.; Quinn, D.W.

    1996-01-01

    In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measure of merit demonstrating the general usefulness of the measure of merit and the individual kernels. In general, it was decided that bell-shaped kernels perform better than other shapes. 12 refs., 16 figs., 7 tabs

  8. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    Science.gov (United States)

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  9. Computing an element in the lexicographic kernel of a game

    NARCIS (Netherlands)

    Faigle, U.; Kern, Walter; Kuipers, Jeroen

    The lexicographic kernel of a game lexicographically maximizes the surplusses $s_{ij}$ (rather than the excesses as would the nucleolus). We show that an element in the lexicographic kernel can be computed efficiently, provided we can efficiently compute the surplusses $s_{ij}(x)$ corresponding to a

  10. Computing an element in the lexicographic kernel of a game

    NARCIS (Netherlands)

    Faigle, U.; Kern, Walter; Kuipers, J.

    2002-01-01

    The lexicographic kernel of a game lexicographically maximizes the surplusses $s_{ij}$ (rather than the excesses as would the nucleolus). We show that an element in the lexicographic kernel can be computed efficiently, provided we can efficiently compute the surplusses $s_{ij}(x)$ corresponding to a

  11. Ultrafast convolution/superposition using tabulated and exponential kernels on GPU

    Energy Technology Data Exchange (ETDEWEB)

    Chen Quan; Chen Mingli; Lu Weiguo [TomoTherapy Inc., 1240 Deming Way, Madison, Wisconsin 53717 (United States)

    2011-03-15

    Purpose: Collapsed-cone convolution/superposition (CCCS) dose calculation is the workhorse for IMRT dose calculation. The authors present a novel algorithm for computing CCCS dose on the modern graphics processing unit (GPU). Methods: The GPU algorithm includes a novel TERMA calculation that has no write-conflicts and has linear computation complexity. The CCCS algorithm uses either tabulated or exponential cumulative-cumulative kernels (CCKs) as reported in the literature. The authors have demonstrated that the use of exponential kernels can reduce the computation complexity by the order of one dimension and achieve excellent accuracy. Special attention is paid to the unique architecture of the GPU, especially the memory accessing pattern, which increases performance by more than tenfold. Results: As a result, the tabulated kernel implementation on GPU is two to three times faster than other GPU implementations reported in the literature. The implementation of CCCS showed significant speedup on GPU over a single-core CPU. On tabulated CCKs, speedups as high as 70 are observed; on exponential CCKs, speedups as high as 90 are observed. Conclusions: Overall, the GPU algorithm using exponential CCKs is 1000-3000 times faster than a highly optimized single-threaded CPU implementation using tabulated CCKs, while the dose differences are within 0.5% and 0.5 mm. This ultrafast CCCS algorithm will allow many time-sensitive applications to use accurate dose calculation.

  12. 3-D waveform tomography sensitivity kernels for anisotropic media

    KAUST Repository

    Djebbi, Ramzi

    2014-01-01

    The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to efficiently handle the expensive computational cost for 3-D anisotropic models. Ray tracing provides also the ray direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed a maximum sensitivity for diving waves which results in a relevant choice of those parameters in wave equation tomography. The δ parameter kernel showed zero sensitivity; therefore it can serve as a secondary parameter to fit the amplitude in the acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration velocity analysis based kernels are introduced to fix the depth ambiguity with reflections and compute sensitivity maps in the deeper parts of the model.

  13. Anatomically-aided PET reconstruction using the kernel method.

    Science.gov (United States)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
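
    As a rough illustration of the kernelized maximum-likelihood formulation sketched in this abstract, the toy code below represents the image as K @ alpha, with K built from anatomical features, and runs an ordinary EM update on the coefficients. The system matrix, feature vectors and problem sizes are invented placeholders; this is not the authors' reconstruction code.

```python
# Schematic sketch of the kernelized ML-EM idea: represent the PET image as
# x = K @ alpha, where K is built from anatomical-image features, and run the
# usual EM update on the coefficients alpha. Everything here (system matrix P,
# features, sizes) is a toy placeholder, not the paper's implementation.
import numpy as np

def anatomical_kernel(features, sigma=1.0):
    # Gaussian kernel matrix between per-voxel anatomical feature vectors.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_em(y, P, K, n_iter=50):
    alpha = np.ones(K.shape[1])
    sens = K.T @ (P.T @ np.ones(len(y)))      # sensitivity term K^T P^T 1
    for _ in range(n_iter):
        ybar = P @ (K @ alpha) + 1e-12        # expected counts
        alpha *= (K.T @ (P.T @ (y / ybar))) / (sens + 1e-12)
    return K @ alpha                           # reconstructed image

rng = np.random.default_rng(1)
n_vox, n_bins = 64, 96
P = rng.uniform(size=(n_bins, n_vox))          # toy system matrix
feats = rng.normal(size=(n_vox, 3))            # toy anatomical features
K = anatomical_kernel(feats, sigma=1.5)
x_true = rng.uniform(size=n_vox)
y = rng.poisson(P @ x_true).astype(float)      # simulated sinogram counts
x_hat = kernel_em(y, P, K)
```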

  14. Open Problem: Kernel methods on manifolds and metric spaces

    DEFF Research Database (Denmark)

    Feragen, Aasa; Hauberg, Søren

    2016-01-01

    Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong linear properties. This negative result hints that radial kernels are perhaps not suitable over geodesic metric spaces after all. Here, however, we present evidence that large intervals of bandwidths exist where geodesic exponential kernels have high probability of being positive definite over finite datasets, while still having significant predictive power. From this we formulate conjectures on the probability of a positive definite kernel matrix for a finite random sample, depending on the geometry of the data space and the spread of the sample...

  15. Compactly Supported Basis Functions as Support Vector Kernels for Classification.

    Science.gov (United States)

    Wittek, Peter; Tan, Chew Lim

    2011-10-01

    Wavelet kernels have been introduced for both support vector regression and classification. Most of these wavelet kernels do not use the inner product of the embedding space, but use wavelets in a similar fashion to radial basis function kernels. Wavelet analysis is typically carried out on data with a temporal or spatial relation between consecutive data points. We argue that it is possible to order the features of a general data set so that consecutive features are statistically related to each other, thus enabling us to interpret the vector representation of an object as a series of equally or randomly spaced observations of a hypothetical continuous signal. By approximating the signal with compactly supported basis functions and employing the inner product of the embedding L2 space, we gain a new family of wavelet kernels. Empirical results show a clear advantage in favor of these kernels.

  16. ADSORPTION OF COPPER FROM AQUEOUS SOLUTION BY ELAIS GUINEENSIS KERNEL ACTIVATED CARBON

    Directory of Open Access Journals (Sweden)

    NAJUA DELAILA TUMIN

    2008-08-01

    Full Text Available In this study, a series of batch laboratory experiments were conducted in order to investigate the feasibility of Elais Guineensis kernel, also known as palm kernel shell (PKS)-based activated carbon, for the removal of copper from aqueous solution by the adsorption process. The investigation was carried out by studying the influence of initial solution pH, adsorbent dosage and initial concentration of copper. The particle size of PKS used was categorized as PKS–M. All batch experiments were carried out at a constant temperature of 30°C (±2°C) using a mechanical shaker operated at 100 rpm. The single-component equilibrium data were analyzed using the Langmuir, Freundlich, Redlich-Peterson, Temkin and Toth adsorption isotherms.
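
    For readers unfamiliar with the isotherm models named at the end of the abstract, the following sketch fits two of them (Langmuir and Freundlich) to equilibrium data with nonlinear least squares. The concentration and uptake values are invented for illustration and are not the paper's measurements.

```python
# Generic sketch of fitting two of the isotherm models named in the abstract
# (Langmuir and Freundlich) to equilibrium adsorption data; the data points
# below are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    # qe = qmax * KL * Ce / (1 + KL * Ce)
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    # qe = KF * Ce^(1/n)
    return KF * Ce ** (1.0 / n)

Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # equilibrium conc. (mg/L), illustrative
qe = np.array([2.1, 3.6, 5.5, 7.2, 8.4])       # copper uptake (mg/g), illustrative

lang_params, _ = curve_fit(langmuir, Ce, qe, p0=[10.0, 0.05])
freu_params, _ = curve_fit(freundlich, Ce, qe, p0=[1.0, 2.0])
print("Langmuir qmax, KL:", lang_params)
print("Freundlich KF, n:", freu_params)
```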

  17. Higher-order predictions for splitting functions and coefficient functions from physical evolution kernels

    International Nuclear Information System (INIS)

    Vogt, A; Soar, G.; Vermaseren, J.A.M.

    2010-01-01

    We have studied the physical evolution kernels for nine non-singlet observables in deep-inelastic scattering (DIS), semi-inclusive e⁺e⁻ annihilation and the Drell-Yan (DY) process, and for the flavour-singlet case of the photon- and heavy-top Higgs-exchange structure functions (F_2, F_φ) in DIS. All known contributions to these kernels show an only single-logarithmic large-x enhancement at all powers of (1-x). Conjecturing that this behaviour persists to (all) higher orders, we have predicted the highest three (DY: two) double logarithms of the higher-order non-singlet coefficient functions and of the four-loop singlet splitting functions. The coefficient-function predictions can be written as exponentiations of 1/N-suppressed contributions in Mellin-N space which, however, are less predictive than the well-known exponentiation of the ln^k N terms. (orig.)

  18. Performance and Emission of VCR-CI Engine with palm kernel and eucalyptus blends

    Directory of Open Access Journals (Sweden)

    Srinivas kommana

    2016-09-01

    Full Text Available This study aims at the complete replacement of conventional diesel fuel by biodiesel. In order to achieve that, a palm kernel oil and eucalyptus oil blend was chosen. Eucalyptus oil was blended with methyl ester of palm kernel oil at 5%, 10% and 15% by volume. Tests were conducted with diesel fuel and the blends on a 4-stroke VCR diesel engine for comparative analysis with 220 bar injection pressure and a 19:1 compression ratio. All the test fuels were used in a computerized 4-stroke single-cylinder variable compression ratio engine at five different loads (0, 6, 12, 18 and 24 N m). The present investigation shows improved combustion and reduced emissions for the PKO85% + EuO15% blend when compared to diesel at full load conditions.

  19. Hard times; Schwere Zeiten

    Energy Technology Data Exchange (ETDEWEB)

    Grunwald, Markus

    2012-10-02

    The prices of silicon and solar wafers keep dropping. According to market research specialist IMS Research, this is the result of weak traditional solar markets and global overcapacities. While many manufacturers are facing hard times, big producers of silicon are continuing to expand.

  20. Hardness of Clustering

    Indian Academy of Sciences (India)

    Hardness of Clustering. Both k-means and k-medians are intractable (when n and d are both inputs, even for k = 2). The best known deterministic algorithms are based on Voronoi partitioning that takes about … time. Need for approximation – “close” to optimal.

  1. Rock-hard coatings

    OpenAIRE

    Muller, M.

    2007-01-01

    Aircraft jet engines have to be able to withstand infernal conditions. Extreme heat and bitter cold tax coatings to the limit. Materials expert Dr Ir. Wim Sloof fits atoms together to develop rock-hard coatings. The latest invention in this field is known as ceramic matrix composites. Sloof has signed an agreement with a number of parties to investigate this material further.

  2. Rock-hard coatings

    NARCIS (Netherlands)

    Muller, M.

    2007-01-01

    Aircraft jet engines have to be able to withstand infernal conditions. Extreme heat and bitter cold tax coatings to the limit. Materials expert Dr Ir. Wim Sloof fits atoms together to develop rock-hard coatings. The latest invention in this field is known as ceramic matrix composites. Sloof has

  3. Hardness and excitation energy

    Indian Academy of Sciences (India)

    It is shown that the first excitation energy can be given by the Kohn-Sham hardness (i.e. the energy difference of the ground-state lowest unoccupied and highest occupied levels) plus an extra term coming from the partial derivative of the ensemble exchange-correlation energy with respect to the weighting factor in the ...

  4. Improved modeling of clinical data with kernel methods.

    Science.gov (United States)

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems.
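
    A plausible sketch of the range-normalized similarity idea described above: each continuous variable contributes (r - |x - z|)/r, nominal variables contribute an exact-match indicator, and the per-variable similarities are averaged so that every variable carries equal weight. The exact definition in the paper may differ, and the toy variables below are assumptions.

```python
# Sketch of a range-normalized "clinical" kernel in the spirit of the abstract:
# continuous variables contribute (r - |x - z|) / r, where r is that variable's
# observed range, nominal variables contribute 1 on a match and 0 otherwise,
# and the per-variable similarities are averaged so every variable has equal
# influence. Details may differ from the authors' exact definition.
import numpy as np

def clinical_kernel(X, Z, ranges, nominal):
    # X: (n, p), Z: (m, p); ranges[j] = max - min of variable j on training data;
    # nominal: boolean array marking categorical variables.
    p = X.shape[1]
    K = np.zeros((X.shape[0], Z.shape[0]))
    for j in range(p):
        xj, zj = X[:, j][:, None], Z[:, j][None, :]
        if nominal[j]:
            K += (xj == zj).astype(float)
        else:
            K += (ranges[j] - np.abs(xj - zj)) / ranges[j]
    return K / p

# Toy data: age (years), parity (count), and a binary history flag.
X = np.array([[34, 2, 1], [58, 0, 0], [45, 1, 1]], dtype=float)
ranges = X.max(axis=0) - X.min(axis=0)
ranges[ranges == 0] = 1.0                     # guard against constant columns
K = clinical_kernel(X, X, ranges, nominal=np.array([False, False, True]))
print(K)
```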

  5. A method for manufacturing kernels of metallic oxides and the thus obtained kernels

    International Nuclear Information System (INIS)

    Lelievre Bernard; Feugier, Andre.

    1973-01-01

    A method is described for manufacturing fissile or fertile metal oxide kernels, consisting of adding at least one chemical compound capable of releasing ammonia to an aqueous solution of actinide nitrates, dispersing the thus obtained solution dropwise in a hot organic phase so as to gelify the drops and transform them into solid particles, and washing, drying and treating said particles so as to transform them into oxide kernels. Such a method is characterized in that the organic phase used in the gel-forming reactions comprises a mixture of two organic liquids, one of which acts as a solvent, whereas the other is a product capable of extracting the metal-salt anions from the drops while the gel-forming reaction is taking place. This can be applied to the so-called high temperature nuclear reactors [fr]

  6. Learning molecular energies using localized graph kernels

    Science.gov (United States)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
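
    To make the graph-kernel ingredient concrete, here is a generic geometric random-walk kernel between two adjacency matrices, computed on their direct (Kronecker) product graph. The GRAPE method additionally encodes local atomic environments and edge weights, none of which is modeled in this small illustration.

```python
# Sketch of a geometric random-walk graph kernel on two adjacency matrices,
# the basic ingredient the GRAPE abstract builds on: walks are counted on the
# direct (Kronecker) product graph and down-weighted geometrically. Atomic
# environments, weights and labels are omitted; this is illustration only.
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    # Direct-product adjacency matrix of the two graphs.
    W = np.kron(A1, A2)
    n = W.shape[0]
    one = np.ones(n)
    # Geometric series over walk lengths: one^T (I - lam*W)^-1 one,
    # valid when lam is below 1 over the spectral radius of W.
    return one @ np.linalg.solve(np.eye(n) - lam * W, one)

# Two tiny example graphs (a triangle and a 3-node path).
A_tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(random_walk_kernel(A_tri, A_tri), random_walk_kernel(A_tri, A_path))
```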

  7. Novel Aspects of Hard Diffraction in QCD

    International Nuclear Information System (INIS)

    Brodsky, Stanley J.

    2005-01-01

    Initial- and final-state interactions from gluon exchange, normally neglected in the parton model, have a profound effect in QCD hard-scattering reactions, leading to leading-twist single-spin asymmetries, diffractive deep inelastic scattering, diffractive hard hadronic reactions, and nuclear shadowing and antishadowing: leading-twist physics not incorporated in the light-front wavefunctions of the target computed in isolation. I also discuss the use of diffraction to materialize the Fock states of a hadronic projectile and test QCD color transparency.

  8. Failure analysis and shock protection of external hard disk drive ...

    African Journals Online (AJOL)

    Technology for processing and storage of data in portable external hard disks has improved steadily over the years. Currently, terabytes of data can be stored in one portable external hard disk drive. Storing such an amount of data on a single disk is in itself a risk. Several instances of data lost by big ...

  9. Hard Copy Market Overview

    Science.gov (United States)

    Testan, Peter R.

    1987-04-01

    A number of Color Hard Copy (CHC) market drivers are currently indicating strong growth in the use of CHC technologies for the business graphics marketplace. These market drivers relate to products, software, color monitors and color copiers. The use of color in business graphics allows more information to be relayed than is normally the case in a monochrome format. The communicative powers of full-color computer-generated output in the business graphics application area will continue to induce end users to desire and require color in their future applications. A number of color hard copy technologies will be utilized in the presentation graphics arena. Thermal transfer, ink jet, photographic and electrophotographic technologies are all expected to be utilized in the business graphics presentation application area in the future. Since the end of 1984, the availability of color application software packages has grown significantly. Sales revenue generated by business graphics software is expected to grow at a compound annual growth rate of just over 40 percent through 1990. Increased availability of packages to allow the integration of text and graphics is expected. Currently, the latest versions of page description languages such as Postscript, Interpress and DDL all support color output. The use of color monitors will also drive the demand for color hard copy in the business graphics marketplace. The availability of higher resolution screens is allowing color monitors to be easily used for both text and graphics applications in the office environment. During 1987, the sales of color monitors are expected to surpass the sales of monochrome monitors. Another major color hard copy market driver will be the color copier. In order to take advantage of the communications power of computer-generated color output, multiple copies are required for distribution. Product introductions of a new generation of color copiers are now underway, with additional introductions expected.

  10. Stochastic subset selection for learning with kernel machines.

    Science.gov (United States)

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales super linearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  11. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.

  12. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.

  13. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  14. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    Science.gov (United States)

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to their deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels, with a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  15. Training Lp norm multiple kernel learning in the primal.

    Science.gov (United States)

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by utilizing the alternating optimization method, where one alternately solves SVMs in the dual and updates kernel weights. Since the dual and primal optimization can achieve the same aim, it is valuable to explore how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal, where we resort to the alternating optimization method: one cycle solves the SVMs in the primal using the preconditioned conjugate gradient method, and the other cycle learns the kernel weights. It is interesting to note that the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited for the manifold regularization framework in the primal, since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we also carry out theoretical analysis for multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity may yield a particular type of kernel weights. Experiments on several datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Hard Electromagnetic Processes

    International Nuclear Information System (INIS)

    Richard, F.

    1987-09-01

    Among hard electromagnetic processes, I will use the most recent data and focus on quantitative tests of QCD. More specifically, I will retain two items: hadroproduction of direct photons, and Drell-Yan. In addition, I will briefly discuss a recent analysis of ISR data obtained with the AFS (Axial Field Spectrometer) which sheds new light on the e/π puzzle at low p_T.

  17. On weights which admit the reproducing kernel of Bergman type

    Directory of Open Access Journals (Sweden)

    Zbigniew Pasternak-Winiarski

    1992-01-01

    Full Text Available In this paper we consider (1) the weights of integration for which the reproducing kernel of Bergman type can be defined, i.e., the admissible weights, and (2) the kernels defined by such weights. It is verified that the weighted Bergman kernel has properties analogous to the classical one. We prove several sufficient conditions and necessary and sufficient conditions for a weight to be an admissible weight. We also give an example of a weight which is not of this class. As a positive example we consider the weight μ(z) = (Im z)² defined on the unit disk in ℂ.

  18. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We...
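
    A minimal sketch of the sensitivity-map idea for a nonlinear kernel model: average the squared gradient of the decision function over the data, giving one saliency value per input dimension (per voxel in the fMRI setting). The RBF expansion, labels and coefficients below are illustrative assumptions rather than the authors' trained model.

```python
# Sketch of a sensitivity map for a nonlinear (RBF) kernel model: the map is
# the squared gradient of the decision function with respect to the input,
# averaged over the data, giving one saliency value per feature. A generic
# illustration, not the paper's code.
import numpy as np

def rbf(X, Z, sigma):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def sensitivity_map(X, support, alpha, sigma):
    # f(x) = sum_i alpha_i * exp(-||x - s_i||^2 / (2 sigma^2))
    K = rbf(X, support, sigma)                          # (n, m)
    diff = X[:, None, :] - support[None, :, :]          # (n, m, p)
    grads = -(K[:, :, None] * diff / sigma ** 2 * alpha[None, :, None]).sum(axis=1)
    return (grads ** 2).mean(axis=0)                    # (p,) map over features

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))                          # 100 scans, 20 "voxels"
y = np.sign(X[:, 0] + 0.5 * X[:, 1])                    # labels driven by 2 voxels
support, alpha = X, y / len(y)                          # crude kernel expansion
print(sensitivity_map(X, support, alpha, sigma=3.0).round(4))
```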

  19. Deep kernel learning method for SAR image target recognition

    Science.gov (United States)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  20. Explicit signal to noise ratio in reproducing kernel Hilbert spaces

    DEFF Research Database (Denmark)

    Gomez-Chova, Luis; Nielsen, Allan Aasbjerg; Camps-Valls, Gustavo

    2011-01-01

    This paper introduces a nonlinear feature extraction method based on kernels for remote sensing data analysis. The proposed approach is based on the minimum noise fraction (MNF) transform, which maximizes the signal variance while also minimizing the estimated noise variance. We here propose an alternative kernel MNF (KMNF) in which the noise is explicitly estimated in the reproducing kernel Hilbert space. This enables KMNF to deal with non-linear relations between the noise and the signal features jointly. Results show that the proposed KMNF provides the most noise-free features when confronted...

  1. Development of method for evaluating cell hardness and correlation between bacterial spore hardness and durability

    Directory of Open Access Journals (Sweden)

    Nakanishi Koichi

    2012-06-01

    Full Text Available Background: Despite the availability of conventional devices for making single-cell manipulations, determining the hardness of a single cell remains difficult. Here, we consider the cell to be a linear elastic body and apply Young's modulus (modulus of elasticity), which is defined as the ratio of the repulsive force (stress) in response to the applied strain. In this new method, a scanning probe microscope (SPM) is operated with a cantilever in the “contact-and-push” mode, and the cantilever is applied to the cell surface over a set distance (applied strain). Results: We determined the hardness of the following bacterial cells: Escherichia coli, Staphylococcus aureus, Pseudomonas aeruginosa, and five Bacillus spp. In log phase, these strains had a similar Young's modulus, but Bacillus spp. spores were significantly harder than the corresponding vegetative cells. There was a positive, linear correlation between the hardness of bacterial spores and heat or ultraviolet (UV) resistance. Conclusions: Using this technique, the hardness of a single vegetative bacterial cell or spore could be determined based on Young's modulus. As an application of this technique, we demonstrated that the hardness of individual bacterial spores was directly proportional to heat and UV resistance, which are the conventional measures of physical durability. This technique allows the rapid and direct determination of spore durability and provides a valuable and innovative method for the evaluation of physical properties in the field of microbiology.
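
    As a back-of-the-envelope illustration of the definition used above (Young's modulus as stress over strain), the numbers and the simplified contact geometry below are invented; a real SPM analysis would derive force from the cantilever spring constant and deflection.

```python
# Illustrative use of the definition in the abstract: Young's modulus as the
# ratio of stress (repulsive force per contact area) to the applied strain
# (indentation depth relative to cell height). All numbers and the simplified
# contact geometry are invented for illustration.
force = 2.0e-9          # cantilever repulsive force, N (assumed)
contact_area = 5.0e-14  # contact area, m^2 (assumed)
indentation = 50e-9     # applied indentation, m (assumed)
cell_height = 1.0e-6    # cell height, m (assumed)

stress = force / contact_area          # Pa
strain = indentation / cell_height     # dimensionless
youngs_modulus = stress / strain       # Pa
print(f"E ≈ {youngs_modulus / 1e3:.1f} kPa")
```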

  2. Analysis of heterosis and quantitative trait loci for kernel shape related traits using triple testcross population in maize.

    Directory of Open Access Journals (Sweden)

    Lu Jiang

    Full Text Available Kernel shape related traits (KSRTs) have been shown to have important influences on grain yield. Previous studies that emphasize kernel length (KL) and kernel width (KW) lack a comprehensive evaluation of the characters affecting kernel shape. In this study, materials of the basic generations (B73, Mo17, and B73 × Mo17), 82 intermated B73 × Mo17 (IBM) individuals, and the corresponding triple testcross (TTC) populations were used to evaluate heterosis, investigate correlations, and characterize the quantitative trait loci (QTL) for six KSRTs: KL, KW, length to width ratio (LWR), perimeter length (PL), kernel area (KA), and circularity (CS). The results showed that the mid-parent heterosis (MPH) for most of the KSRTs was moderate. The performance of KL, KW, PL, and KA exhibited significant positive correlation with heterozygosity, but the Pearson's R values were low. Among KSRTs, the strongest significant correlation was found between PL and KA, with an R value of up to 0.964. In addition, KW, PL, KA, and CS showed significant positive correlation with 100-kernel weight (HKW). 28 QTLs were detected for KSRTs, of which nine were augmented additive, 13 were augmented dominant, and six were dominance × additive epistatic. The contribution of a single QTL to total phenotypic variation ranged from 2.1% to 32.9%. Furthermore, 19 additive × additive digenic epistatic interactions were detected for all KSRTs, with the highest total R² for KW (78.8%), and nine dominance × dominance digenic epistatic interactions were detected for KL, LWR, and CS, with the highest total R² (55.3%). Among significant digenic interactions, most occurred between genomic regions not mapped with main-effect QTLs. These findings display the complexity of the genetic basis for KSRTs and enhance our understanding of the heterosis of KSRTs from the quantitative genetic perspective.

  3. Single photon emission in the electron-positron colliding beam reaction e⁺e⁻ → μ⁺μ⁻ [Total cross section, current conservation, hard-photon correction]

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, M [Saitama Medical College (Japan)

    1974-01-01

    We evaluate the energy spectrum of the photons emitted in the reaction e⁺e⁻ → μ⁺μ⁻γ, and the hard photon correction to the total cross-section of the reaction e⁺e⁻ → μ⁺μ⁻. We develop a simple technique based on the analytical QED formulae, in particular, on the current conservation.

  4. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    Science.gov (United States)

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
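
    The continuization step discussed here can be illustrated generically: a discrete score distribution is smeared into a continuous density by placing a kernel at each score point, and the compact support of the Epanechnikov kernel is what limits leakage beyond the score range. The sketch below is a plain kernel-smoothing illustration, not the operational KE machinery or the authors' simulation design.

```python
# Sketch of the continuization step discussed in the abstract: a discrete score
# distribution is smoothed into a continuous density, here with either a
# Gaussian or an Epanechnikov kernel. A generic kernel-smoothing illustration,
# not the kernel equating formulas themselves.
import numpy as np

def gaussian(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def epanechnikov(u):
    # Compactly supported on [-1, 1], which limits leakage past the score range.
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def continuize(scores, probs, grid, h, kernel):
    # Mixture of kernels centered at the discrete score points.
    u = (grid[:, None] - scores[None, :]) / h
    return (probs[None, :] * kernel(u)).sum(axis=1) / h

scores = np.arange(0, 21)                          # a 0-20 point test
probs = np.random.default_rng(3).dirichlet(np.ones(21))
grid = np.linspace(-2, 22, 400)
f_gauss = continuize(scores, probs, grid, h=0.8, kernel=gaussian)
f_epan = continuize(scores, probs, grid, h=1.5, kernel=epanechnikov)
```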

  5. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    Science.gov (United States)

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT) which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses power when compared to the kernel for a particular scenario but has much greater power than poor choices.

  6. Validation of a dose-point kernel convolution technique for internal dosimetry

    International Nuclear Information System (INIS)

    Giap, H.B.; Macey, D.J.; Bayouth, J.E.; Boyer, A.L.

    1995-01-01

    The objective of this study was to validate a dose-point kernel convolution technique that provides a three-dimensional (3D) distribution of absorbed dose from a 3D distribution of the radionuclide ¹³¹I. A dose-point kernel for the penetrating radiations was calculated by a Monte Carlo simulation and cast in a 3D rectangular matrix. This matrix was convolved with the 3D activity map furnished by quantitative single-photon-emission computed tomography (SPECT) to provide a 3D distribution of absorbed dose. The convolution calculation was performed using a 3D fast Fourier transform (FFT) technique, which takes less than 40 s for a 128 x 128 x 16 matrix on an Intel 486 DX2 (66 MHz) personal computer. The calculated photon absorbed dose was compared with values measured by thermoluminescent dosimeters (TLDs) inserted along the diameter of a 22 cm diameter annular source of ¹³¹I. The mean and standard deviation of the percentage difference between the measurements and the calculations were equal to -1% and 3.6% respectively. This convolution method was also used to calculate the 3D dose distribution in an Alderson abdominal phantom containing a liver, a spleen, and a spherical tumour volume loaded with various concentrations of ¹³¹I. By averaging the dose calculated throughout the liver, spleen, and tumour, the dose-point kernel approach was compared with values derived using the MIRD formalism, and found to agree to better than 15%. (author)
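
    The core computation described above is a 3-D convolution of an activity map with a dose-point kernel, carried out via FFT. The sketch below uses an invented isotropic kernel and arbitrary activity values purely to show the mechanics; it is not a validated ¹³¹I kernel or the authors' implementation.

```python
# Sketch of the dose calculation idea in the abstract: a 3-D activity map is
# convolved with a radially symmetric dose-point kernel via FFT. The kernel
# shape and voxel values below are placeholders, not a validated 131I kernel.
import numpy as np
from scipy.signal import fftconvolve

shape = (128, 128, 16)                     # matrix size quoted in the abstract
activity = np.zeros(shape)
activity[60:68, 60:68, 6:10] = 1.0         # toy uptake region (arbitrary units)

# Toy isotropic point kernel: dose falling off roughly as 1/r^2 within a box.
z, y, x = np.indices((15, 15, 15)) - 7
r2 = x**2 + y**2 + z**2 + 1.0
kernel = 1.0 / r2
kernel /= kernel.sum()                     # normalize deposited energy

dose = fftconvolve(activity, kernel, mode="same")
print(dose.shape, float(dose.max()))
```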

  7. Ridge Regression and Other Kernels for Genomic Selection with R Package rrBLUP

    Directory of Open Access Journals (Sweden)

    Jeffrey B. Endelman

    2011-11-01

    Full Text Available Many important traits in plant breeding are polygenic and therefore recalcitrant to traditional marker-assisted selection. Genomic selection addresses this complexity by including all markers in the prediction model. A key method for the genomic prediction of breeding values is ridge regression (RR), which is equivalent to best linear unbiased prediction (BLUP) when the genetic covariance between lines is proportional to their similarity in genotype space. This additive model can be broadened to include epistatic effects by using other kernels, such as the Gaussian, which represent inner products in a complex feature space. To facilitate the use of RR and nonadditive kernels in plant breeding, a new software package for R called rrBLUP has been developed. At its core is a fast maximum-likelihood algorithm for mixed models with a single variance component besides the residual error, which allows for efficient prediction with unreplicated training data. Use of the rrBLUP software is demonstrated through several examples, including the identification of optimal crosses based on superior progeny value. In cross-validation tests, the prediction accuracy with nonadditive kernels was significantly higher than RR for wheat (Triticum aestivum L.) grain yield but equivalent for several maize (Zea mays L.) traits.

  8. Forecasting Crude Oil Price Using EEMD and RVM with Adaptive PSO-Based Kernels

    Directory of Open Access Journals (Sweden)

    Taiyong Li

    2016-12-01

    Full Text Available Crude oil, as one of the most important energy sources in the world, plays a crucial role in global economic events. An accurate prediction for crude oil price is an interesting and challenging task for enterprises, governments, investors, and researchers. To cope with this issue, in this paper, we proposed a method integrating ensemble empirical mode decomposition (EEMD), adaptive particle swarm optimization (APSO), and relevance vector machine (RVM), namely EEMD-APSO-RVM, to predict crude oil price based on the “decomposition and ensemble” framework. Specifically, the raw time series of crude oil price were firstly decomposed into several intrinsic mode functions (IMFs) and one residue by EEMD. Then, RVM with combined kernels was applied to predict target value for the residue and each IMF individually. To improve the prediction performance of each component, an extended particle swarm optimization (PSO) was utilized to simultaneously optimize the weights and parameters of single kernels for the combined kernel of RVM. Finally, simple addition was used to aggregate all the predicted results of components into an ensemble result as the final result. Extensive experiments were conducted on the crude oil spot price of the West Texas Intermediate (WTI) to illustrate and evaluate the proposed method. The experimental results are superior to those by several state-of-the-art benchmark methods in terms of root mean squared error (RMSE), mean absolute percent error (MAPE), and directional statistic (Dstat), showing that the proposed EEMD-APSO-RVM is promising for forecasting crude oil price.

  9. Verification of helical tomotherapy delivery using autoassociative kernel regression

    International Nuclear Information System (INIS)

    Seibert, Rebecca M.; Ramsey, Chester R.; Garvey, Dustin R.; Wesley Hines, J.; Robison, Ben H.; Outten, Samuel S.

    2007-01-01

    Quality assurance (QA) is a topic of major concern in the field of intensity modulated radiation therapy (IMRT). The standard of practice for IMRT is to perform QA testing for individual patients to verify that the dose distribution will be delivered to the patient. The purpose of this study was to develop a new technique that could eventually be used to automatically evaluate helical tomotherapy treatments during delivery using exit detector data. This technique uses an autoassociative kernel regression (AAKR) model to detect errors in tomotherapy delivery. AAKR is a novel nonparametric model that is known to predict a group of correct sensor values when supplied a group of sensor values that is usually corrupted or contains faults such as machine failure. This modeling scheme is especially suited for the problem of monitoring the fluence values found in the exit detector data because it is able to learn the complex detector data relationships. This scheme still applies when detector data are summed over many frames with a low temporal resolution and a variable beam attenuation resulting from patient movement. Delivery sequences from three archived patients (prostate, lung, and head and neck) were used in this study. Each delivery sequence was modified by reducing the opening time for random individual multileaf collimator (MLC) leaves by random amounts. The error and error-free treatments were delivered with different phantoms in the path of the beam. Multiple autoassociative kernel regression (AAKR) models were developed and tested by the investigators using combinations of the stored exit detector data sets from each delivery. The models proved robust and were able to predict the correct or error-free values for a projection, which had a single MLC leaf decrease its opening time by less than 10 msec. The model also was able to determine machine output errors. The average uncertainty value for the unfaulted projections ranged from 0.4% to 1.8% of the detector
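
    The AAKR step itself is compact enough to sketch: a query vector of detector readings is compared with a memory of fault-free historical vectors, and the corrected prediction is a kernel-weighted average of those memory vectors, so a large residual on one channel flags a possible leaf error. Sizes, bandwidth and the simulated fault below are illustrative assumptions.

```python
# Sketch of autoassociative kernel regression (AAKR) as described: a query
# vector of detector readings is compared with a memory of fault-free
# historical vectors, and the "corrected" prediction is a kernel-weighted
# average of those memory vectors. Sizes and bandwidth are illustrative.
import numpy as np

def aakr_predict(memory, query, h=1.0):
    # memory: (n_mem, n_detectors) of fault-free exit-detector snapshots
    # query:  (n_detectors,) possibly faulted observation
    d = np.linalg.norm(memory - query, axis=1)           # distances to memory
    w = np.exp(-d**2 / (2.0 * h**2))
    w /= w.sum() + 1e-12
    return w @ memory                                    # predicted fault-free vector

rng = np.random.default_rng(4)
memory = rng.uniform(0.8, 1.2, size=(500, 64))           # toy detector fluences
faulted = memory[0].copy()
faulted[10] *= 0.5                                       # simulate a leaf with reduced opening time
predicted = aakr_predict(memory, faulted, h=0.5)
residual = faulted - predicted                           # large residual flags the error
print(int(np.argmax(np.abs(residual))))
```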

  10. Efficient Online Subspace Learning With an Indefinite Kernel for Visual Tracking and Recognition

    NARCIS (Netherlands)

    Liwicki, Stephan; Zafeiriou, Stefanos; Tzimiropoulos, Georgios; Pantic, Maja

    2012-01-01

    We propose an exact framework for online learning with a family of indefinite (not positive) kernels. As we study the case of nonpositive kernels, we first show how to extend kernel principal component analysis (KPCA) from a reproducing kernel Hilbert space to Krein space. We then formulate an

  11. Systematic approach in optimizing numerical memory-bound kernels on GPU

    KAUST Repository

    Abdelfattah, Ahmad

    2013-01-01

    The use of GPUs has been very beneficial in accelerating dense linear algebra computational kernels (DLA). Many high performance numerical libraries like CUBLAS, MAGMA, and CULA provide BLAS and LAPACK implementations on GPUs as well as hybrid computations involving both CPUs and GPUs. GPUs usually score better performance than CPUs for compute-bound operations, especially those characterized by a regular data access pattern. This paper highlights a systematic approach for efficiently implementing memory-bound DLA kernels on GPUs, by taking advantage of the underlying device's architecture (e.g., high throughput). This methodology proved to outperform existing state-of-the-art GPU implementations for the symmetric matrix-vector multiplication (SYMV), characterized by an irregular data access pattern, in a recent work (Abdelfattah et al., VECPAR 2012). We propose to extend this methodology to the general matrix-vector multiplication (GEMV) kernel. The performance results show that our GEMV implementation achieves better performance for relatively small to medium matrix sizes, making it very influential in calculating the Hessenberg and bidiagonal reductions of general matrices (radar applications), which are the first step toward computing eigenvalues and singular values, respectively. Considering small and medium size matrices (≤4500), our GEMV kernel achieves an average 60% improvement in single precision (SP) and an average 25% improvement in double precision (DP) over existing open-source and commercial software solutions. These results improve reduction algorithms for both small and large matrices. The improved GEMV performance yields an average 30% (SP) and 15% (DP) improvement in the Hessenberg reduction and up to 25% (SP) and 14% (DP) improvement in the bidiagonal reduction over the implementation provided by CUBLAS 5.0. © 2013 Springer-Verlag.

  12. Calculation of the thermal neutron scattering kernel using the synthetic model. Pt. 2. Zero-order energy transfer kernel

    International Nuclear Information System (INIS)

    Drozdowicz, K.

    1995-01-01

    A comprehensive unified description of the application of Granada's Synthetic Model to slow-neutron scattering by molecular systems is continued. Detailed formulae for the zero-order energy transfer kernel are presented, based on the general formalism of the model. An explicit analytical formula for the total scattering cross section as a function of the incident neutron energy is also obtained. Expressions of the free gas model for the zero-order scattering kernel and for the total scattering kernel are considered as a sub-case of the Synthetic Model. (author). 10 refs

  13. A kernel adaptive algorithm for quaternion-valued inputs.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.

  14. Bioconversion of palm kernel meal for aquaculture: Experiences ...

    African Journals Online (AJOL)

    SERVER

    2008-04-17

    Apr 17, 2008 ... es as well as food supplies have existed traditionally with coastal regions of Liberia and ..... Contamination of palm kernel meal with Aspergillus ... Sciences, Universiti Sains Malaysia, Penang 11800, Malaysia. Aquacult. Res.

  15. The effect of apricot kernel flour incorporation on the ...

    African Journals Online (AJOL)

    STORAGESEVER

    2009-01-05

    Jan 5, 2009 ... 2Department of Food Engineering, Erciyes University 38039, Kayseri, Turkey. Accepted 27 ... Key words: Noodle; apricot kernel, flour, cooking, sensory properties. ... their simple preparation requirement, desirable sensory.

  16. 3-D waveform tomography sensitivity kernels for anisotropic media

    KAUST Repository

    Djebbi, Ramzi; Alkhalifah, Tariq Ali

    2014-01-01

    The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate

  17. Kernel-based noise filtering of neutron detector signals

    International Nuclear Information System (INIS)

    Park, Moon Ghu; Shin, Ho Cheol; Lee, Eun Ki

    2007-01-01

    This paper describes recently developed techniques for effective filtering of neutron detector signal noise. In this paper, three kinds of noise filters are proposed and their performance is demonstrated for the estimation of reactivity. The tested filters are based on the unilateral kernel filter, unilateral kernel filter with adaptive bandwidth and bilateral filter to show their effectiveness in edge preservation. Filtering performance is compared with conventional low-pass and wavelet filters. The bilateral filter shows a remarkable improvement compared with unilateral kernel and wavelet filters. The effectiveness and simplicity of the unilateral kernel filter with adaptive bandwidth is also demonstrated by applying it to the reactivity measurement performed during reactor start-up physics tests
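
    To make the edge-preservation point concrete, here is a small illustrative 1-D bilateral filter of the kind contrasted above: each output sample is a kernel-weighted average whose weights combine a Gaussian on the time index (the unilateral part) with a Gaussian on the amplitude difference, so step-like transients are smoothed far less than broadband noise. The bandwidths and the test signal are arbitrary choices, not values from the paper.

```python
# Sketch of a 1-D bilateral filter for detector-signal denoising.
import numpy as np

def bilateral_1d(x, sigma_t=5.0, sigma_r=0.2, half_width=15):
    y = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        lo, hi = max(0, i - half_width), min(len(x), i + half_width + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-0.5 * ((idx - i) / sigma_t) ** 2)            # unilateral (time) kernel
             * np.exp(-0.5 * ((x[idx] - x[i]) / sigma_r) ** 2))    # range kernel -> bilateral
        y[i] = np.sum(w * x[idx]) / np.sum(w)
    return y

rng = np.random.default_rng(2)
signal = np.concatenate([np.ones(200), 0.2 * np.ones(200)])        # step change in the signal
noisy = signal + rng.normal(0, 0.05, signal.size)
filtered = bilateral_1d(noisy)
print("values on each side of the step:", filtered[199], filtered[200])
```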

  18. Linear and kernel methods for multivariate change detection

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

    ), as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric...... normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available...... that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed...

  19. Resummed memory kernels in generalized system-bath master equations

    International Nuclear Information System (INIS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-01-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics
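
    As a purely qualitative companion to the discussion above, the toy calculation below compares a [1/1] Pade resummation of a kernel expanded to fourth order in the coupling with an exponential resummation built from the same two coefficients. The coefficients and coupling values are invented; the point is only that the Pade form develops a spurious singularity where its denominator vanishes, while the exponential form stays finite.

```python
# Toy comparison of two resummations of K(l) ~ K2*l**2 + K4*l**4:
#   Pade [1/1]:   K2*l**2 / (1 - (K4/K2)*l**2)    (blows up when the denominator -> 0)
#   exponential:  K2*l**2 * exp((K4/K2)*l**2)      (stays finite)
# All numbers are made up and only illustrate the qualitative behaviour.
import numpy as np

K2, K4 = 1.0, 0.8                      # assumed second- and fourth-order coefficients
lam = np.linspace(0.0, 1.5, 7)         # "electronic coupling" strength

pade = K2 * lam**2 / (1.0 - (K4 / K2) * lam**2)
expo = K2 * lam**2 * np.exp((K4 / K2) * lam**2)

for l, p, e in zip(lam, pade, expo):
    print(f"lambda={l:4.2f}   Pade={p:9.3f}   exponential={e:8.3f}")
```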

  20. On Improving Convergence Rates for Nonnegative Kernel Density Estimators

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1980-01-01

    To improve the rate of decrease of integrated mean square error for nonparametric kernel density estimators beyond $O(n^{-4/5})$, we must relax the constraint that the density estimate be a bona fide density function, that is, be nonnegative and integrate to one. All current methods for kernel (and orthogonal series) estimators relax the nonnegativity constraint. In this paper we show how to achieve similar improvement by relaxing the integral constraint only. This is important in appl...

  1. Improved Variable Window Kernel Estimates of Probability Densities

    OpenAIRE

    Hall, Peter; Hu, Tien Chung; Marron, J. S.

    1995-01-01

    Variable window width kernel density estimators, with the width varying proportionally to the square root of the density, have been thought to have superior asymptotic properties. The rate of convergence has been claimed to be as good as those typical for higher-order kernels, which makes the variable width estimators more attractive because no adjustment is needed to handle the negativity usually entailed by the latter. However, in a recent paper, Terrell and Scott show that these results ca...
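
    For readers who want to see what a variable window width estimator looks like in code, below is a generic sample-point sketch using the Abramson-style square-root law, in which each data point's bandwidth is scaled by the inverse square root of a fixed-bandwidth pilot estimate. This is an illustration under those assumptions, not the exact estimator analysed in the entry above.

```python
# Sample-point variable-bandwidth kernel density estimate (Abramson-type rule).
import numpy as np

def kde_fixed(x, data, h):
    """Fixed-bandwidth Gaussian KDE evaluated at points x."""
    k = np.exp(-0.5 * ((x[:, None] - data[None, :]) / h) ** 2)
    return np.mean(k, axis=1) / (h * np.sqrt(2 * np.pi))

def kde_variable(x, data, h0=0.4):
    pilot = kde_fixed(data, data, h0)                         # pilot density at the samples
    scale = np.sqrt(np.exp(np.mean(np.log(pilot))))           # geometric-mean normalisation
    hi = h0 * scale / np.sqrt(pilot)                          # h_i proportional to f(x_i)^(-1/2)
    k = np.exp(-0.5 * ((x[:, None] - data[None, :]) / hi[None, :]) ** 2)
    return np.mean(k / (hi[None, :] * np.sqrt(2 * np.pi)), axis=1)

rng = np.random.default_rng(3)
data = rng.normal(0, 1, 500)
grid = np.linspace(-4, 4, 9)
print(np.round(kde_variable(grid, data), 3))
```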

  2. Graphical analyses of connected-kernel scattering equations

    International Nuclear Information System (INIS)

    Picklesimer, A.

    1982-10-01

    Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The graphical method also leads to a new, simplified form for some members of the class and elucidates the general structural features of the entire class

  3. MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX

    International Nuclear Information System (INIS)

    Brooks, E.D. III

    1988-01-01

    1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every cpu instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel

  4. The Flux OSKit: A Substrate for Kernel and Language Research

    Science.gov (United States)

    1997-10-01

    Our own microkernel-based OS, Fluke [17], puts almost all of the OSKit to use...kernels distance the language from the hardware; even microkernels and other extensible kernels enforce some default policy which often conflicts with a...be particularly useful in these research projects. 6.1.1 The Fluke OS In 1996 we developed an entirely new microkernel-based system called Fluke

  5. A Histological Study of Aspergillus flavus Colonization of Wound Inoculated Maize Kernels of Resistant and Susceptible Maize Hybrids in the Field

    Directory of Open Access Journals (Sweden)

    Gary L. Windham

    2018-04-01

    Full Text Available Aspergillus flavus colonization in developing kernels of maize single-cross hybrids resistant (Mp313E × Mp717) and susceptible (GA209 × T173) to aflatoxin accumulation was determined in the field over three growing seasons (2012–2014). Plants were hand pollinated, and individual kernels were inoculated with a needle dipped in a suspension of A. flavus conidia 21 days after pollination. Kernels were harvested at 1- to 2-day intervals from 1 to 21 days after inoculation (DAI). Kernels were placed in FAA fixative, dehydrated, embedded in paraffin, sectioned, and stained with toluidine blue. Additional kernels were also collected for aflatoxin analyses in 2013 and 2014. At 2 DAI, A. flavus hyphae were observed among endosperm cells in the susceptible hybrid, but colonization of the endosperm in the resistant hybrid was limited to the wound site. Sections of the scutellum of the susceptible hybrid were colonized by A. flavus by 5 DAI. Fungal growth was slower in the resistant hybrid compared to the susceptible hybrid. By 10 DAI, A. flavus had colonized a large section of the embryo in the susceptible hybrid; whereas in the resistant hybrid, approximately half of the endosperm had been colonized and very few cells in the embryo were colonized. Fungal colonization in some of the kernels of the resistant hybrid was slowed in the aleurone layer or at the endosperm-scutellum interface. In wounded kernels with intact aleurone layers, the fungus spread around the kernel between the pericarp and aleurone layer with minimal colonization of the endosperm. Aflatoxin B1 was first detected in susceptible kernel tissues 8 DAI in 2013 (14 μg/kg) and 2014 (18 μg/kg). The resistant hybrid had significantly lower levels of aflatoxin accumulation compared to the susceptible hybrid at harvests 10, 21, and 28 DAI in 2013, and 20 and 24 DAI in 2014. Our study found differential A. flavus colonization of susceptible and resistant kernel

  6. Salus: Kernel Support for Secure Process Compartments

    Directory of Open Access Journals (Sweden)

    Raoul Strackx

    2015-01-01

    Full Text Available Consumer devices are increasingly being used to perform security and privacy critical tasks. The software used to perform these tasks is often vulnerable to attacks, due to bugs in the application itself or in included software libraries. Recent work proposes the isolation of security-sensitive parts of applications into protected modules, each of which can be accessed only through a predefined public interface. But most parts of an application can be considered security-sensitive at some level, and an attacker who is able to gain in-application level access may be able to abuse services from protected modules. We propose Salus, a Linux kernel modification that provides a novel approach for partitioning processes into isolated compartments sharing the same address space. Salus significantly reduces the impact of insecure interfaces and vulnerable compartments by enabling compartments (1) to restrict the system calls they are allowed to perform, (2) to authenticate their callers and callees and (3) to enforce that they can only be accessed via unforgeable references. We describe the design of Salus, report on a prototype implementation and evaluate it in terms of security and performance. We show that Salus provides a significant security improvement with a low performance overhead, without relying on any non-standard hardware support.

  7. Local Kernel for Brains Classification in Schizophrenia

    Science.gov (United States)

    Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.

    In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning by example method is introduced by combining local measurements with a nonlinear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale Invariant Feature Transform (SIFT). Then, matching is obtained by introducing the local kernel for which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed including a set of 54 patients with schizophrenia and 54 normal controls on which regions of interest (ROI) have been manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising, since a successful classification rate of up to 75% has been obtained with this technique, and the performance has improved up to 85% when the subjects have been stratified by sex.

  8. KERNEL MAD ALGORITHM FOR RELATIVE RADIOMETRIC NORMALIZATION

    Directory of Open Access Journals (Sweden)

    Y. Bai

    2016-06-01

    Full Text Available The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments on both the linear CCA and KCCA versions of the MAD algorithm with the use of Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data derived from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization, as it can describe well the nonlinear relationship between multi-temporal images. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.

  9. Grain characterization and milling behaviour of near-isogenic lines differing by hardness.

    Science.gov (United States)

    Greffeuille, V; Abecassis, J; Rousset, M; Oury, F-X; Faye, A; L'Helgouac'h, C Bar; Lullien-Pellerin, V

    2006-12-01

    Wheat grain hardness is a major factor affecting the milling behaviour and end-product quality although its exact structural and biochemical basis is still not understood. This study describes the development of new near-isogenic lines selected on hardness. Hard and soft sister lines were characterised by near infrared reflectance (NIR) and particle size index (PSI) hardness index, grain protein content, thousand kernel weight and vitreousness. The milling behaviour of these wheat lines was evaluated on an instrumented micromill which also measures the grinding energy and flour particle size distribution was investigated by laser diffraction. Endosperm mechanical properties were measured using compression tests. Results pointed out the respective effect of hardness and vitreousness on those characteristics. Hardness was shown to influence both the mode of fracture and the mechanical properties of the whole grain and endosperm. Thus, this parameter also acts on milling behaviour. On the other hand, vitreousness was found to mainly play a role on the energy required to break the grain. This study allows us to distinguish between consequences of hardness and vitreousness. Hardness is suggested to influence the adhesion forces between starch granules and protein matrix whereas vitreousness would rather be related to the endosperm microstructure.

  10. An Ensemble Approach to Building Mercer Kernels with Prior Information

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using pre-defined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.

  11. A new discrete dipole kernel for quantitative susceptibility mapping.

    Science.gov (United States)

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation both with synthetic phantoms and in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed less over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI - a topic for future investigations. The proposed dipole kernel has a straightforward implementation into existing QSM routines. Copyright © 2018 Elsevier Inc. All rights reserved.
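
    To make the contrast concrete, the sketch below constructs the usual continuous-Fourier dipole kernel, 1/3 - kz^2/|k|^2, together with a discrete variant in which each squared wavenumber is replaced by the spectrum of a discrete second-difference operator. This follows the general idea summarised above; the exact discrete kernel proposed in the paper may differ, and the grid size and voxel spacing are arbitrary.

```python
# Continuous vs. discretized dipole kernel on a regular FFT grid (illustrative only).
import numpy as np

def dipole_kernels(shape, voxel=(1.0, 1.0, 1.0)):
    axes = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel)]
    kx, ky, kz = np.meshgrid(*axes, indexing="ij")

    # continuous-Fourier kernel: 1/3 - kz^2 / |k|^2 (k = 0 term left at 1/3 here)
    k2_cont = kx**2 + ky**2 + kz**2
    d_cont = 1.0 / 3.0 - np.divide(kz**2, k2_cont,
                                   out=np.zeros_like(k2_cont), where=k2_cont > 0)

    # discrete second-difference spectrum: (2 - 2*cos(2*pi*f*dx)) / dx**2 ~ (2*pi*f)**2 at low f
    def disc(f, d):
        return (2.0 - 2.0 * np.cos(2.0 * np.pi * f * d)) / d**2

    kx2, ky2, kz2 = disc(kx, voxel[0]), disc(ky, voxel[1]), disc(kz, voxel[2])
    k2_disc = kx2 + ky2 + kz2
    d_disc = 1.0 / 3.0 - np.divide(kz2, k2_disc,
                                   out=np.zeros_like(k2_disc), where=k2_disc > 0)
    return d_cont, d_disc

dc, dd = dipole_kernels((64, 64, 64))
print("max |continuous - discrete|:", np.abs(dc - dd).max())   # largest at high frequencies
```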

  12. Omnibus risk assessment via accelerated failure time kernel machine modeling.

    Science.gov (United States)

    Sinnott, Jennifer A; Cai, Tianxi

    2013-12-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.

  13. Ideal Gas Resonance Scattering Kernel Routine for the NJOY Code

    International Nuclear Information System (INIS)

    Rothenstein, W.

    1999-01-01

    In a recent publication an expression for the temperature-dependent double-differential ideal gas scattering kernel is derived for the case of scattering cross sections that are energy dependent. Some tabulations and graphical representations of the characteristics of these kernels are presented in Ref. 2. They demonstrate the increased probability that neutron scattering by a heavy nuclide near one of its pronounced resonances will bring the neutron energy nearer to the resonance peak. This enhances upscattering, when a neutron with energy just below that of the resonance peak collides with such a nuclide. A routine for using the new kernel has now been introduced into the NJOY code. Here, its principal features are described, followed by comparisons between scattering data obtained by the new kernel, and the standard ideal gas kernel, when such comparisons are meaningful (i.e., for constant values of the scattering cross section at 0 K). The new ideal gas kernel for a variable σ_s^0(E) at 0 K leads to the correct Doppler-broadened σ_s^T(E) at temperature T

  14. Spatial and temporal structures of impulsive bursts from solar flares observed in UV and hard X-rays

    Science.gov (United States)

    Cheng, C.-C.; Tandberg-Hanssen, E.; Bruner, E. C.; Orwig, L.; Frost, K. J.; Kenny, P. J.; Woodgate, B. E.; Shine, R. A.

    1981-01-01

    New observations are presented of impulsive UV and hard X-ray bursts in two solar flares obtained with instruments on the Solar Maximum Mission. The UV bursts were observed in the Si IV and O IV emission lines, whose intensity ratio is density-sensitive. By comparing the spatially resolved Si IV/O IV observations with the corresponding hard X-ray observations, it is possible to study their spatial and temporal relationships. For one flare, the individual component spikes in the multiply peaked hard X-ray burst can be identified with different discrete Si IV/O IV flaring kernels of size 4 arcsec x 4 arcsec or smaller, which brighten up sequentially in time. For the other, many Si IV/O IV kernels, widely distributed over a large area, show impulsive bursts at the same time, which correlate with the main peak of the impulsive hard X-ray burst. The density of the flaring Si IV/O IV kernels is in the range of 5 x 10^(12-13)/cu cm.

  15. Calculation of the time resolution of the J-PET tomograph using kernel density estimation

    Science.gov (United States)

    Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    2017-06-01

    In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.

  16. Asymptotics for the Fredholm Determinant of the Sine Kernel on a Union of Intervals

    OpenAIRE

    Widom, Harold

    1994-01-01

    In the bulk scaling limit of the Gaussian Unitary Ensemble of Hermitian matrices the probability that an interval of length $s$ contains no eigenvalues is the Fredholm determinant of the sine kernel $\frac{\sin(x-y)}{\pi(x-y)}$ over this interval. A formal asymptotic expansion for the determinant as $s$ tends to infinity was obtained by Dyson. In this paper we replace a single interval of length $s$ by $sJ$ where $J$ is a union of $m$ intervals and present a proof of the asymptotics up to second ...
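
    As a small numerical companion to the statement above, the sketch below approximates the single-interval gap probability $E(0;s)=\det(I-K_{\mathrm{sine}})$ on $[0,s]$ by a Gauss-Legendre Nystrom discretization (a standard quadrature approach, not taken from the paper); the multi-interval case would simply place quadrature nodes on a union of intervals. The number of nodes is an arbitrary choice.

```python
# Quadrature approximation of the Fredholm determinant of the sine kernel on [0, s].
import numpy as np

def sine_kernel(x, y):
    """S(x, y) = sin(x - y) / (pi * (x - y)), with the diagonal value 1/pi."""
    d = np.subtract.outer(x, y)
    return np.sinc(d / np.pi) / np.pi           # np.sinc(z) = sin(pi z)/(pi z)

def gap_probability(s, n=60):
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * s * (nodes + 1.0)                 # map [-1, 1] -> [0, s]
    w = 0.5 * s * weights
    K = np.sqrt(w)[:, None] * sine_kernel(x, x) * np.sqrt(w)[None, :]
    return np.linalg.det(np.eye(n) - K)         # det(I - K) on the discretized interval

for s in (0.5, 1.0, 2.0):
    print(f"E(0; s={s}) ~ {gap_probability(s):.6f}")
```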

  17. Evaluating the Application of Tissue-Specific Dose Kernels Instead of Water Dose Kernels in Internal Dosimetry : A Monte Carlo Study

    NARCIS (Netherlands)

    Moghadam, Maryam Khazaee; Asl, Alireza Kamali; Geramifar, Parham; Zaidi, Habib

    2016-01-01

    Purpose: The aim of this work is to evaluate the application of tissue-specific dose kernels instead of water dose kernels to improve the accuracy of patient-specific dosimetry by taking tissue heterogeneities into consideration. Materials and Methods: Tissue-specific dose point kernels (DPKs) and

  18. Scientific opinion on the acute health risks related to the presence of cyanogenic glycosides in raw apricot kernels and products derived from raw apricot kernels

    DEFF Research Database (Denmark)

    Petersen, Annette

    of kernels promoted (10 and 60 kernels/day for the general population and cancer patients, respectively), exposures exceeded the ARfD 17–413 and 3–71 times in toddlers and adults, respectively. The estimated maximum quantity of apricot kernels (or raw apricot material) that can be consumed without exceeding...

  19. Micro-hardness of non-irradiated uranium dioxide

    International Nuclear Information System (INIS)

    Kim, Sung-Sik; Takagi, Osamu; Obata, Naomi; Kirihara, Tomoo.

    1983-01-01

    In order to obtain the optimum conditions for micro-hardness measurements of sintered UO2, two kinds of hardness tests (Vickers and Knoop) were examined with non-irradiated UO2 of 2.5 and 5 μm in grain size. The hardness values were obtained as a function of the applied load in the load range of 25–1,000 g. In the Vickers test, cracks were generated around the periphery of an indentation even at the lower load of 50 g, which means the Vickers hardness is not suitable for UO2 specimens. In the Knoop test, three stages of load dependence were observed for the sintered pellets as well as for a single crystal by Bates. The load dependence of Knoop hardness and crack formation were discussed. In the range of applied load around 70–100 g there was a plateau region where the hardness values were nearly unchanged and the indentations did not contain any cracks. The plateau region represents the hardness of a specimen. From a comparison between the hardness values of 2.5 μm and those of 5 μm UO2, it was confirmed that the degree of sintering controls the hardness in the plateau region. (author)

  20. Hard and Soft Governance

    DEFF Research Database (Denmark)

    Moos, Lejf

    2009-01-01

    of Denmark, and finally the third layer: the leadership used in Danish schools. The use of 'soft governance' is shifting the focus of governance and leadership from decisions towards influence and power and thus shifting the focus of the processes from the decision-making itself towards more focus......The governance and leadership at transnational, national and school level seem to be converging into a number of isomorphic forms as we see a tendency towards substituting 'hard' forms of governance, that are legally binding, with 'soft' forms based on persuasion and advice. This article analyses...... and discusses governance forms at several levels. The first layer is the global: the methods of 'soft governance' that are being utilised by transnational agencies. The second layer is the national and local: the shift in national and local governance seen in many countries, but here demonstrated in the case...

  1. Zirconium nitride hard coatings

    International Nuclear Information System (INIS)

    Roman, Daiane; Amorim, Cintia Lugnani Gomes de; Soares, Gabriel Vieira; Figueroa, Carlos Alejandro; Baumvol, Israel Jacob Rabin; Basso, Rodrigo Leonardo de Oliveira

    2010-01-01

    Zirconium nitride (ZrN) nanometric films were deposited onto different substrates in order to study the surface crystalline microstructure and to investigate the electrochemical behavior, with the aim of obtaining a composition that minimizes corrosion reactions. The coatings were produced by physical vapor deposition (PVD). The influence of the nitrogen partial pressure, deposition time and temperature on the surface properties was studied. Rutherford backscattering spectrometry (RBS), X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), scanning electron microscopy (SEM) and corrosion experiments were performed to characterize the ZrN hard coatings. The ZrN film properties and microstructure change according to the deposition parameters. The corrosion resistance increases with the temperature used during film deposition. Corrosion tests show that a ZrN coating deposited by PVD onto a titanium substrate can improve the corrosion resistance. (author)

  2. The Modularized Software Package ASKI - Full Waveform Inversion Based on Waveform Sensitivity Kernels Utilizing External Seismic Wave Propagation Codes

    Science.gov (United States)

    Schumacher, F.; Friederich, W.

    2015-12-01

    We present the modularized software package ASKI which is a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on the particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model - one for solving the forward problem and one representing the inverted model updates. Thereby we account for the independent needs of spatial model resolution of the forward and inverse problems, respectively. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools must communicate via file output/input, thus large storage capacities need to be accessible in a convenient way. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full

  3. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  4. Protein fold recognition using geometric kernel data fusion.

    Science.gov (United States)

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
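
    One concrete instance of a geometry inspired matrix mean is the log-Euclidean mean sketched below for two toy kernel matrices; the paper considers more involved means, so this is only an illustration of the idea, with a small diagonal jitter added as an assumption to keep the matrix logarithm well defined.

```python
# Log-Euclidean geometric mean of PSD kernel matrices (illustrative fusion rule).
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_mean(kernels, eps=1e-8):
    logs = [logm(K + eps * np.eye(K.shape[0])) for K in kernels]   # jitter keeps logm defined
    return expm(sum(logs) / len(logs)).real

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 5))
K_lin = X @ X.T                                                    # linear kernel
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K_rbf = np.exp(-d2 / (2 * d2.mean()))                              # RBF kernel
K_fused = log_euclidean_mean([K_lin, K_rbf])
print("fused kernel still PSD?", bool(np.all(np.linalg.eigvalsh(K_fused) > -1e-8)))
```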

  5. Generalized synthetic kernel approximation for elastic moderation of fast neutrons

    International Nuclear Information System (INIS)

    Yamamoto, Koji; Sekiya, Tamotsu; Yamamura, Yasunori.

    1975-01-01

    A method of synthetic kernel approximation is examined in some detail with a view to simplifying the treatment of the elastic moderation of fast neutrons. A sequence of unified kernels (f_N) is introduced, which is then divided into two subsequences (W_n) and (G_n) according to whether N is odd (W_n = f_{2n-1}, n = 1, 2, ...) or even (G_n = f_{2n}, n = 0, 1, ...). The W_1 and G_1 kernels correspond to the usual Wigner and GG kernels, respectively, and the W_n and G_n kernels for n ≥ 2 represent generalizations thereof. It is shown that the W_n kernel solution with a relatively small n (≥ 2) is superior on the whole to the G_n kernel solution for the same index n, while both converge to the exact values with increasing n. To evaluate the collision density numerically and rapidly, a simple recurrence formula is derived. In the asymptotic region (except near resonances), this recurrence formula allows calculation with a relatively coarse mesh width whenever h_a ≤ 0.05 at least. For calculations in the transient lethargy region, a mesh width of order ε/10 is small enough to evaluate the approximate collision density ψ_N with an accuracy comparable to that obtained analytically. It is shown that, with the present method, an order of approximation of about n=7 should yield a practically correct solution deviating by not more than 1% in collision density. (auth.)

  6. Impact of Triticum mosaic virus infection on hard winter wheat milling and bread baking quality.

    Science.gov (United States)

    Miller, Rebecca A; Martin, T Joe; Seifers, Dallas L

    2012-03-15

    Triticum mosaic virus (TriMV) is a newly discovered wheat virus. Information regarding the effect of wheat viruses on milling and baking quality is limited. The objective of this study was to determine the impact of TriMV infection on the kernel characteristics, milling yield and bread baking quality of wheat. Commercial hard winter varieties evaluated included RonL, Danby and Jagalene. The TriMV resistance of RonL is low, while that of Danby and Jagalene is unknown. KS96HW10-3, a germplasm with high TriMV resistance, was included as a control. Plots of each variety were inoculated with TriMV at the two- to three-leaf stage. Trials were conducted at two locations in two crop years. TriMV infection had no effect on the kernel characteristics, flour yield or baking properties of KS96HW10-3. The effect of TriMV on the kernel characteristics of RonL, Danby and Jagalene was not consistent between crop years and presumably an environmental effect. The flour milling and bread baking properties of these three varieties were not significantly affected by TriMV infection. TriMV infection of wheat plants did not affect harvested wheat kernel characteristics, flour milling properties or white pan bread baking quality. Copyright © 2011 Society of Chemical Industry.

  7. Near infrared hyperspectral imaging of blends of conventional and waxy hard wheats

    Directory of Open Access Journals (Sweden)

    Stephen R. Delwiche

    2018-02-01

    Full Text Available Recent development of hard winter waxy (amylose-free) wheat adapted to the North American climate has prompted the quest to find a rapid method that will determine mixture levels of conventional wheat in lots of identity preserved waxy wheat. Previous work documented the use of conventional near infrared (NIR) reflectance spectroscopy to determine the mixture level of conventional wheat in waxy wheat, with an examined range, through binary sample mixture preparation, of 0–100% (weight conventional / weight total). The current study examines the ability of NIR hyperspectral imaging of intact kernels to determine mixture levels. Twenty-nine mixtures (0, 1, 2, 3, 4, 5, 10, 15, …, 95, 96, 97, 98, 99, 100%) were formed from known genotypes of waxy and conventional wheat. Two-class partial least squares discriminant analysis (PLSDA) and statistical pattern recognition classifier models were developed for identifying each kernel in the images as conventional or waxy. Along with these approaches, conventional PLS1 regression modelling was performed on means of kernel spectra within each mixture test sample. Results indicated close agreement between all three approaches, with standard errors of prediction for the better preprocess transformations (PLSDA models) or better classifiers (pattern recognition models) of approximately 9 percentage units. Although such error rates were slightly greater than ones previously published using non-imaging NIR analysis of bulk whole kernel wheat and wheat meal, the HSI technique offers an advantage of its potential use in sorting operations.
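
    The two-class PLS-DA step can be illustrated with a short sketch: a PLS regression is fitted to 0/1 class labels, each kernel spectrum in a test sample is called waxy or conventional by thresholding the predicted score, and the mixture level is estimated as the fraction of kernels called conventional. The spectra below are simulated stand-ins, not NIR data, and the component count and threshold are arbitrary choices.

```python
# Two-class PLS-DA for per-kernel classification and mixture-level estimation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
wavelengths = np.linspace(1000, 1600, 150)

def spectra(n, shift):
    """Toy absorbance curves differing only by a small band shift, plus noise."""
    base = np.exp(-((wavelengths - (1200 + shift)) / 60.0) ** 2)
    return base + 0.02 * rng.normal(size=(n, wavelengths.size))

X_train = np.vstack([spectra(100, 0.0), spectra(100, 15.0)])   # waxy, conventional
y_train = np.r_[np.zeros(100), np.ones(100)]

pls = PLSRegression(n_components=5).fit(X_train, y_train)

sample = np.vstack([spectra(80, 0.0), spectra(20, 15.0)])      # a 20% conventional mixture
calls = (pls.predict(sample).ravel() > 0.5).astype(int)        # 1 = conventional
print("estimated conventional fraction: %.2f" % calls.mean())
```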

  8. Janka hardness using nonstandard specimens

    Science.gov (United States)

    David W. Green; Marshall Begel; William Nelson

    2006-01-01

    Janka hardness determined on 1.5- by 3.5-in. specimens (2×4s) was found to be equivalent to that determined using the 2- by 2-in. specimen specified in ASTM D 143. Data are presented on the relationship between Janka hardness and the strength of clear wood. Analysis of historical data determined using standard specimens indicated no difference between side hardness...

  9. 2TB hard disk drive

    CERN Multimedia

    This particular object was used up until 2012 in the Data Centre. It slots into one of the Disk Server trays. Hard disks were invented in the 1950s. They started as large disks up to 20 inches in diameter holding just a few megabytes. They were originally called "fixed disks" or "Winchesters" (a code name used for a popular IBM product). They later became known as "hard disks" to distinguish them from "floppy disks." Hard disks have a hard platter that holds the magnetic medium, as opposed to the flexible plastic film found in tapes and floppies.

  10. Bivariate discrete beta Kernel graduation of mortality data.

    Science.gov (United States)

    Mazza, Angelo; Punzo, Antonio

    2015-07-01

    Various parametric/nonparametric techniques have been proposed in literature to graduate mortality data as a function of age. Nonparametric approaches, as for example kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidths selection. Using simulations studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make simulations realistic, a bivariate dataset, based on probabilities of dying recorded for the US males, is used. Simulations have confirmed the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.

  11. Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.

    Science.gov (United States)

    Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao

    2017-06-21

    In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space, where a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes, and restricting the coefficient vectors to be transformed into a feature space, where the features are highly correlated within a class and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known; only its inner product, with the kernel matrix embedded, is required, which is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.

  12. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide range of sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
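
    A minimal sketch of an SVR surrogate with a mixed kernel is given below, using an ordinary polynomial kernel as a stand-in for the orthogonal-polynomial kernel and a Gaussian RBF kernel for the local part, combined with a fixed weight instead of an optimized one. The Ishigami test function and all parameter values are illustrative; the Sobol-index post-processing of the surrogate coefficients is not reproduced here.

```python
# Support vector regression with a mixed (polynomial + RBF) kernel passed as a callable.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

def mixed_kernel(X, Y, w=0.5, degree=3, gamma=0.5):
    """Weighted sum of a polynomial kernel (global) and an RBF kernel (local)."""
    return w * polynomial_kernel(X, Y, degree=degree) + (1 - w) * rbf_kernel(X, Y, gamma=gamma)

rng = np.random.default_rng(5)
X = rng.uniform(-np.pi, np.pi, size=(200, 3))
y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])  # Ishigami

model = SVR(kernel=mixed_kernel, C=10.0, epsilon=0.01).fit(X[:150], y[:150])
print("surrogate R^2 on held-out points:", round(model.score(X[150:], y[150:]), 3))
```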

  13. On flame kernel formation and propagation in premixed gases

    Energy Technology Data Exchange (ETDEWEB)

    Eisazadeh-Far, Kian; Metghalchi, Hameed [Northeastern University, Mechanical and Industrial Engineering Department, Boston, MA 02115 (United States); Parsinejad, Farzan [Chevron Oronite Company LLC, Richmond, CA 94801 (United States); Keck, James C. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2010-12-15

    Flame kernel formation and propagation in premixed gases have been studied experimentally and theoretically. The experiments have been carried out at constant pressure and temperature in a constant volume vessel located in a high speed shadowgraph system. The formation and propagation of the hot plasma kernel has been simulated for inert gas mixtures using a thermodynamic model. The effects of various parameters including the discharge energy, radiation losses, initial temperature and initial volume of the plasma have been studied in detail. The experiments have been extended to flame kernel formation and propagation of methane/air mixtures. The effect of energy terms including spark energy, chemical energy and energy losses on flame kernel formation and propagation has been investigated. The inputs for this model are the initial conditions of the mixture and experimental data for flame radii. It is concluded that these are the most important parameters affecting plasma kernel growth. The results of laminar burning speeds have been compared with previously published results and are in good agreement. (author)

  14. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    Science.gov (United States)

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/(Accessed 2012 Jun 25). PMID:22936970
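
    For reference, the unweighted sum-kernel SVM baseline mentioned above can be written in a few lines with precomputed kernel matrices; the toy features and the particular base kernels below are assumptions made only to keep the sketch self-contained.

```python
# Unweighted sum-kernel SVM baseline with precomputed kernel matrices.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, chi2_kernel

rng = np.random.default_rng(6)
X = np.abs(rng.normal(size=(120, 20)))            # non-negative toy features (chi2 needs >= 0)
y = (X[:, 0] + X[:, 1] > X[:, 2] + X[:, 3]).astype(int)

K = 0.5 * rbf_kernel(X, X, gamma=0.05) + 0.5 * chi2_kernel(X, X)   # unweighted average of kernels
train, test = np.arange(90), np.arange(90, 120)

clf = SVC(kernel="precomputed", C=1.0).fit(K[np.ix_(train, train)], y[train])
print("sum-kernel accuracy:", clf.score(K[np.ix_(test, train)], y[test]))
```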

  15. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    Science.gov (United States)

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Kernel Methods for Mining Instance Data in Ontologies

    Science.gov (United States)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and meta data available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data show promising results and demonstrate the usefulness of our approach.
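
    A toy sketch of the decomposition idea under stated assumptions (it is not the authors' framework): a kernel on ontology instances assembled from specialized kernels on selected characteristics, here class membership, one numeric data property, and one set-valued object property. The instance encoding, sub-kernels, and weights are hypothetical; the construction remains valid because nonnegative combinations of positive semidefinite kernels are positive semidefinite.

```python
# Toy decomposition kernel on ontology instances (hypothetical dict encoding).
import math

def class_kernel(a, b):                      # matching kernel on asserted classes
    return 1.0 if a["class"] == b["class"] else 0.0

def property_kernel(a, b, width=10.0):       # Gaussian kernel on a numeric data property
    return math.exp(-((a["age"] - b["age"]) ** 2) / (2 * width ** 2))

def relation_kernel(a, b):                   # set-intersection kernel on related instances
    return float(len(a["knows"] & b["knows"]))

def instance_kernel(a, b, weights=(1.0, 0.5, 0.2)):
    parts = (class_kernel(a, b), property_kernel(a, b), relation_kernel(a, b))
    return sum(w * k for w, k in zip(weights, parts))

alice = {"class": "Researcher", "age": 34, "knows": {"bob", "carol"}}
bob   = {"class": "Researcher", "age": 29, "knows": {"alice"}}
print(instance_kernel(alice, bob))
```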

  17. Dissection of Genetic Factors underlying Wheat Kernel Shape and Size in an Elite × Nonadapted Cross using a High Density SNP Linkage Map

    Directory of Open Access Journals (Sweden)

    Ajay Kumar

    2016-03-01

    Full Text Available Wheat kernel shape and size has been under selection since early domestication. Kernel morphology is a major consideration in wheat breeding, as it impacts grain yield and quality. A population of 160 recombinant inbred lines (RIL), developed using an elite (ND 705) and a nonadapted genotype (PI 414566), was extensively phenotyped in replicated field trials and genotyped using Infinium iSelect 90K assay to gain insight into the genetic architecture of kernel shape and size. A high density genetic map consisting of 10,172 single nucleotide polymorphism (SNP) markers, with an average marker density of 0.39 cM/marker, identified a total of 29 genomic regions associated with six grain shape and size traits; ∼80% of these regions were associated with multiple traits. The analyses showed that kernel length (KL) and width (KW) are genetically independent, while a large number (∼59%) of the quantitative trait loci (QTL) for kernel shape traits were in common with genomic regions associated with kernel size traits. The most significant QTL was identified on chromosome 4B, and could be an ortholog of major rice grain size and shape gene or . Major and stable loci also were identified on the homeologous regions of Group 5 chromosomes, and in the regions of (6A) and (7A) genes. Both parental genotypes contributed equivalent positive QTL alleles, suggesting that the nonadapted germplasm has a great potential for enhancing the gene pool for grain shape and size. This study provides new knowledge on the genetic dissection of kernel morphology, with a much higher resolution, which may aid further improvement in wheat yield and quality using genomic tools.

  18. Dissection of Genetic Factors underlying Wheat Kernel Shape and Size in an Elite × Nonadapted Cross using a High Density SNP Linkage Map.

    Science.gov (United States)

    Kumar, Ajay; Mantovani, E E; Seetan, R; Soltani, A; Echeverry-Solarte, M; Jain, S; Simsek, S; Doehlert, D; Alamri, M S; Elias, E M; Kianian, S F; Mergoum, M

    2016-03-01

    Wheat kernel shape and size has been under selection since early domestication. Kernel morphology is a major consideration in wheat breeding, as it impacts grain yield and quality. A population of 160 recombinant inbred lines (RIL), developed using an elite (ND 705) and a nonadapted genotype (PI 414566), was extensively phenotyped in replicated field trials and genotyped using Infinium iSelect 90K assay to gain insight into the genetic architecture of kernel shape and size. A high density genetic map consisting of 10,172 single nucleotide polymorphism (SNP) markers, with an average marker density of 0.39 cM/marker, identified a total of 29 genomic regions associated with six grain shape and size traits; ∼80% of these regions were associated with multiple traits. The analyses showed that kernel length (KL) and width (KW) are genetically independent, while a large number (∼59%) of the quantitative trait loci (QTL) for kernel shape traits were in common with genomic regions associated with kernel size traits. The most significant QTL was identified on chromosome 4B, and could be an ortholog of major rice grain size and shape gene or . Major and stable loci also were identified on the homeologous regions of Group 5 chromosomes, and in the regions of (6A) and (7A) genes. Both parental genotypes contributed equivalent positive QTL alleles, suggesting that the nonadapted germplasm has a great potential for enhancing the gene pool for grain shape and size. This study provides new knowledge on the genetic dissection of kernel morphology, with a much higher resolution, which may aid further improvement in wheat yield and quality using genomic tools. Copyright © 2016 Crop Science Society of America.

  19. Hard processes. Vol. 1

    International Nuclear Information System (INIS)

    Ioffe, B.L.; Khoze, V.A.; Lipatov, L.N.

    1984-01-01

    Deep inelastic (hard) processes are now at the epicenter of modern high-energy physics. These processes are governed by short-distance dynamics, which reveals the intrinsic structure of elementary particles. The theory of deep inelastic processes is now sufficiently well settled. The authors' aim was to give an effective tool to theoreticians and experimentalists who are engaged in high-energy physics. This book is intended primarily for physicists who are only beginning to study the field. To read the book, one should be acquainted with the Feynman diagram technique and with some particular topics from elementary particle theory (symmetries, dispersion relations, Regge pole theory, etc.). Theoretical consideration of deep inelastic processes is now based on quantum chromodynamics (QCD). At the same time, analysis of relevant physical phenomena demands a synthesis of QCD notions (quarks, gluons) with certain empirical characteristics. Therefore, the phenomenological approaches presented are a necessary stage in a study of this range of phenomena which should undoubtedly be followed by a detailed description based on QCD and electroweak theory. The authors were naturally unable to dwell on experimental data accumulated during the past decade of intensive investigations. Priority was given to results which allow a direct comparison with theoretical predictions. (Auth.)

  20. Hard coal; Steinkohle

    Energy Technology Data Exchange (ETDEWEB)

    Loo, Kai van de; Sitte, Andreas-Peter [Gesamtverband Steinkohle e.V. (GVSt), Herne (Germany)

    2015-07-01

    Internationally, the hard coal market was in a period of stagnation in 2014 for the first time in a long while. In Germany, hard coal consumption even declined significantly, mainly due to the decrease in power generation; the national energy transition has now had a noticeably negative effect on hard coal use. The political course that has been set points to a further substantial downward movement in the future. In the ongoing phase-out of the German hard coal industry, with three mines still active, there were no closures in 2014. The next closure is due at the end of 2015, and planning for the period after mining has continued.

  1. Semisupervised kernel marginal Fisher analysis for face recognition.

    Science.gov (United States)

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabelled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.

  2. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U-shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.

  3. Heat Kernel Asymptotics of Zaremba Boundary Value Problem

    Energy Technology Data Exchange (ETDEWEB)

    Avramidi, Ivan G. [Department of Mathematics, New Mexico Institute of Mining and Technology (United States)], E-mail: iavramid@nmt.edu

    2004-03-15

    The Zaremba boundary value problem is a boundary value problem for Laplace-type second-order partial differential operators acting on smooth sections of a vector bundle over a smooth compact Riemannian manifold with smooth boundary, but with discontinuous boundary conditions, which include Dirichlet boundary conditions on one part of the boundary and Neumann boundary conditions on another part of the boundary. We study the heat kernel asymptotics of the Zaremba boundary value problem. The construction of the asymptotic solution of the heat equation is described in detail and the heat kernel is computed explicitly in the leading approximation. Some of the first nontrivial coefficients of the heat kernel asymptotic expansion are computed explicitly.
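
    For orientation, the general form of the small-t heat trace expansion for a Laplace-type operator on a compact n-dimensional manifold with boundary is recalled below; the content of the paper lies in computing the coefficients for the Zaremba (mixed Dirichlet/Neumann) case, which is not reproduced here.

```latex
% Half-integer powers of t arise from boundary contributions; the coefficients A_k
% encode geometric invariants of the manifold, its boundary, and the boundary conditions.
\operatorname{Tr} e^{-tF} \;\sim\; (4\pi t)^{-n/2} \sum_{k \ge 0} A_k\, t^{k/2},
\qquad t \to 0^{+}.
```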

  4. Weighted Feature Gaussian Kernel SVM for Emotion Recognition.

    Science.gov (United States)

    Wei, Wei; Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expressions is a challenging research topic that has attracted great attention in the past few years. This paper presents a novel method that utilizes the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight of each. Then, we build a weighted feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function achieves a good recognition rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods.
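
    A hedged sketch of the weighting idea, not the paper's exact formulation: per-subregion weights (stand-ins for subregion recognition rates) scale the squared distances of the corresponding feature blocks inside a Gaussian kernel, which is then passed to an SVM as a precomputed kernel. The data, block layout, weights, and kernel width are assumptions.

```python
# Weighted-feature Gaussian kernel: each feature block (subregion) contributes to the
# squared distance in proportion to its assumed recognition-rate weight.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, n_regions, dim = 60, 4, 8                      # hypothetical: 4 subregions, 8 features each
X = rng.normal(size=(n, n_regions * dim))
y = rng.integers(0, 2, size=n)
region_weights = np.array([0.9, 0.6, 0.8, 0.4])   # e.g. normalized subregion recognition rates

def weighted_gaussian_kernel(A, B, weights, sigma=2.0):
    K = np.zeros((A.shape[0], B.shape[0]))
    for r, w in enumerate(weights):
        Ar = A[:, r * dim:(r + 1) * dim]
        Br = B[:, r * dim:(r + 1) * dim]
        K += w * ((Ar[:, None, :] - Br[None, :, :]) ** 2).sum(-1)
    return np.exp(-K / (2 * sigma ** 2))

clf = SVC(kernel="precomputed").fit(weighted_gaussian_kernel(X, X, region_weights), y)
print(clf.score(weighted_gaussian_kernel(X, X, region_weights), y))
```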

  5. Rational kernels for Arabic Root Extraction and Text Classification

    Directory of Open Access Journals (Sweden)

    Attia Nehar

    2016-04-01

    Full Text Available In this paper, we address the problems of Arabic Text Classification and root extraction using transducers and rational kernels. We introduce a new root extraction approach based on the use of Arabic patterns (Pattern Based Stemmer). Transducers are used to model these patterns, and root extraction is done without relying on any dictionary. Using transducers for extracting roots, documents are transformed into finite state transducers. This document representation allows us to use and explore rational kernels as a framework for Arabic Text Classification. Root extraction experiments are conducted on three word collections and yield an accuracy of 75.6%. Classification experiments are conducted on the Saudi Press Agency dataset, and N-gram kernels are tested with different values of N. Accuracy and F1 reach 90.79% and 62.93%, respectively. These results show that our approach, when compared with other approaches, is promising, especially in terms of accuracy and F1.
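
    The paper's kernels are rational kernels computed over finite-state transducers; as a simplified stand-in, the sketch below evaluates a plain character N-gram count kernel between two hypothetical Arabic headlines, without the transducer machinery or the pattern-based stemming.

```python
# Character n-gram count kernel: inner product of n-gram count vectors of two texts.
from collections import Counter

def ngrams(text, n):
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def ngram_kernel(a, b, n=3):
    ca, cb = ngrams(a, n), ngrams(b, n)
    return sum(ca[g] * cb[g] for g in ca.keys() & cb.keys())

doc1 = "وزارة التعليم تعلن نتائج القبول"   # hypothetical headline
doc2 = "وزارة الصحة تعلن حملة تطعيم"       # hypothetical headline
print(ngram_kernel(doc1, doc2, n=3))
```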

  6. Kernel integration scatter model for parallel beam gamma camera and SPECT point source response

    International Nuclear Information System (INIS)

    Marinkovic, P.M.

    2001-01-01

    Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper a kernel integration scatter model for the parallel beam gamma camera and SPECT point source response, based on the Klein-Nishina formula, is proposed. This method models the primary photon distribution as well as first-order Compton scattering. It also includes a correction for multiple scattering by applying a point isotropic single medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma-ray attenuation in the imaged object, based on a known μ-map distribution, is considered as well. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply using acceptance angles derived from its physical dimensions; any gamma ray within this acceptance angle was passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with a Monte Carlo MCNP-4a numerical phantom simulation, and excellent results were obtained. Physical phantom experiments to confirm this method are planned. (author)
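
    As supporting detail rather than the author's code, the sketch below evaluates the Klein-Nishina differential cross-section that such a first-order Compton scatter kernel is built on; the example energy corresponds to the 140.5 keV Tc-99m photon commonly used in SPECT, and the chosen scattering angle is arbitrary.

```python
# Klein-Nishina differential cross-section for Compton scattering.
import numpy as np

R_E = 2.8179403262e-13   # classical electron radius [cm]
MEC2 = 0.51099895        # electron rest energy [MeV]

def klein_nishina(E_mev, theta):
    """dsigma/dOmega [cm^2/sr] for incident photon energy E and scattering angle theta."""
    ratio = 1.0 / (1.0 + (E_mev / MEC2) * (1.0 - np.cos(theta)))   # E'/E
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta)**2)

print(klein_nishina(0.1405, np.radians(60)))   # Tc-99m photon scattered through 60 degrees
```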

  7. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVMs with different kernels. Considering the trade-off between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
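
    A simplified stand-in for the selection step (not the paper's meta-learning recommender): cross-validate several candidate SVM kernels and keep every kernel whose mean accuracy lies within a small tolerance of the best, reflecting the observation that several kernels may perform equally well on the same problem. The dataset, candidate list, and tolerance are assumptions.

```python
# Cross-validated baseline kernel selection with a tolerance band around the best score.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
scores = {}
for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    scores[kernel] = cross_val_score(clf, X, y, cv=5).mean()

best = max(scores.values())
applicable = [k for k, s in scores.items() if best - s <= 0.01]   # tolerance is a design choice
print(scores, applicable)
```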

  8. Measurement of Weight of Kernels in a Simulated Cylindrical Fuel Compact for HTGR

    International Nuclear Information System (INIS)

    Kim, Woong Ki; Lee, Young Woo; Kim, Young Min; Kim, Yeon Ku; Eom, Sung Ho; Jeong, Kyung Chai; Cho, Moon Sung; Cho, Hyo Jin; Kim, Joo Hee

    2011-01-01

    The TRISO-coated fuel particle for the high temperature gas-cooled reactor (HTGR) is composed of a nuclear fuel kernel and outer coating layers. The coated particles are mixed with a graphite matrix to make the HTGR fuel element. The weight of fuel kernels in an element is generally measured by chemical analysis or with a gamma-ray spectrometer. Although chemical analysis measures the weight of kernels accurately, the samples used in the analysis cannot be returned to the fabrication process, and radioactive wastes are generated during the inspection procedure. The gamma-ray spectrometer requires an elaborate reference sample to reduce the measurement errors induced by differences between the geometric shape of the test sample and that of the reference sample. X-ray computed tomography (CT) is an alternative for measuring the weight of kernels in a compact nondestructively. In this study, X-ray CT is applied to measure the weight of kernels in a cylindrical compact containing simulated TRISO-coated particles with ZrO2 kernels. The volume of kernels as well as the number of kernels in the simulated compact is measured from the 3-D density information. The weight of kernels was calculated from the volume of kernels or from the number of kernels. The weight of kernels was also measured by extracting the kernels from a compact in order to verify the result of the X-ray CT application.
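
    A back-of-the-envelope illustration of the final conversion step, with all numerical values assumed rather than taken from the record: once the CT reconstruction yields the segmented kernel volume (or the kernel count), the kernel weight follows from the nominal ZrO2 density.

```python
# Kernel weight from CT-derived volume; all inputs below are assumed example values.
ZRO2_DENSITY_G_CM3 = 5.68          # nominal density of zirconia
kernel_volume_cm3 = 0.215          # total segmented kernel volume from CT (assumed)
kernel_count = 1850                # number of kernels counted in the 3-D image (assumed)

weight_from_volume = kernel_volume_cm3 * ZRO2_DENSITY_G_CM3
mean_single_kernel_mass = weight_from_volume / kernel_count
print(round(weight_from_volume, 3), "g total;",
      round(mean_single_kernel_mass * 1e3, 3), "mg per kernel")
```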

  9. Melting of polydisperse hard disks

    NARCIS (Netherlands)

    Pronk, S.; Frenkel, D.

    2004-01-01

    The melting of a polydisperse hard-disk system is investigated by Monte Carlo simulations in the semigrand canonical ensemble. This is done in the context of possible continuous melting by a dislocation-unbinding mechanism, as an extension of the two-dimensional hard-disk melting problem. We find

  10. Theoretical developments for interpreting kernel spectral clustering from alternative viewpoints

    Directory of Open Access Journals (Sweden)

    Diego Peluffo-Ordóñez

    2017-08-01

    Full Text Available To perform an exploration process over complex structured data within unsupervised settings, the so-called kernel spectral clustering (KSC) is one of the most recommended and appealing approaches, given its versatility and elegant formulation. In this work, we explore the relationship between KSC and other well-known approaches, namely normalized cut clustering and kernel k-means. To do so, we first deduce a generic KSC model from a primal-dual formulation based on least-squares support-vector machines (LS-SVM). In the experiments, KSC as well as the other considered methods are assessed on image segmentation tasks to prove their usability.
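
    As a compact illustration of one of the related methods (kernel k-means, not the KSC primal-dual formulation itself), the sketch below assigns points to clusters using distances computed purely from the kernel matrix; the dataset and kernel parameter are assumptions.

```python
# Kernel k-means: cluster assignments from kernel-matrix entries only.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel

def kernel_kmeans(K, n_clusters=2, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, n_clusters, size=K.shape[0])
    for _ in range(n_iter):
        dist = np.full((K.shape[0], n_clusters), np.inf)
        for c in range(n_clusters):
            mask = labels == c
            m = mask.sum()
            if m == 0:
                continue
            # ||phi(x) - mu_c||^2 up to the constant k(x, x), from kernel entries only
            dist[:, c] = (-2.0 * K[:, mask].sum(axis=1) / m
                          + K[np.ix_(mask, mask)].sum() / m**2)
        labels = dist.argmin(axis=1)
    return labels

X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)
labels = kernel_kmeans(rbf_kernel(X, gamma=15.0), n_clusters=2)
print(np.bincount(labels))
```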

  11. Modelling microwave heating of discrete samples of oil palm kernels

    International Nuclear Information System (INIS)

    Law, M.C.; Liew, E.L.; Chang, S.L.; Chan, Y.S.; Leo, C.P.

    2016-01-01

    Highlights: • Microwave (MW) drying of oil palm kernels is experimentally determined and modelled. • MW heating of discrete samples of oil palm kernels (OPKs) is simulated. • OPK heating is due to contact effects, MW interference and heat transfer mechanisms. • Electric field vectors circulate within the OPK sample. • A loosely-packed arrangement improves the temperature uniformity of OPKs. - Abstract: Recently, microwave (MW) pre-treatment of fresh palm fruits has been shown to be more environmentally friendly than the existing oil palm milling process, as it eliminates the production of palm oil mill effluent (POME) condensate in the sterilization process. Moreover, MW-treated oil palm fruits (OPF) also possess better oil quality. In this work, the MW drying kinetics of oil palm kernels (OPK) were determined experimentally. Microwave heating/drying of oil palm kernels was modelled and validated. The simulation results show that the temperature of an OPK is not uniform over its surface due to constructive and destructive interference of the MW irradiance. The volume-averaged temperature of an OPK is higher than its surface temperature by 3–7 °C, depending on the MW input power. This implies that a point measurement of temperature is inadequate to determine the temperature history of the OPK during the microwave heating process. The simulation results also show that the arrangement of OPKs in a MW cavity affects the kernel temperature profile. The heating of OPKs was identified as being affected by factors such as local electric field intensity due to MW absorption, refraction and interference, the contact effect between kernels, and also heat transfer mechanisms. The thermal gradient patterns of OPKs change as the heating continues. The cracking of OPKs is expected to occur first in the core of the kernel and then to propagate to the kernel surface. The model indicates that drying of OPKs is a much slower process compared to its MW heating. The model is useful

  12. Graphical analyses of connected-kernel scattering equations

    International Nuclear Information System (INIS)

    Picklesimer, A.

    1983-01-01

    Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The basic result is the application of graphical methods to the derivation of interaction-set equations. This yields a new, simplified form for some members of the class and elucidates the general structural features of the entire class

  13. Reproducing Kernel Method for Solving Nonlinear Differential-Difference Equations

    Directory of Open Access Journals (Sweden)

    Reza Mokhtari

    2012-01-01

    Full Text Available On the basis of reproducing kernel Hilbert space theory, an iterative algorithm for solving some nonlinear differential-difference equations (NDDEs) is presented. The analytical solution is represented in series form in a reproducing kernel space, and the approximate solution is constructed by truncating the series after finitely many terms. The convergence of the approximate solution to the analytical solution is also proved. Results obtained by the proposed method imply that it can be considered a simple and accurate method for solving such differential-difference problems.

  14. Kernel and divergence techniques in high energy physics separations

    Science.gov (United States)

    Bouř, Petr; Kůs, Václav; Franc, Jiří

    2017-10-01

    Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We demonstrate the great potential of adaptive kernel density estimation as the nested separation method of the supervised binary divergence decision tree. We also provide a proof of an alternative computational approach for the kernel estimates that utilizes the Fourier transform. Further, we apply our method to a Monte Carlo data set from the DØ experiment at the Tevatron particle accelerator at Fermilab and provide final top-antitop signal separation results. We achieved up to 82% AUC while using a restricted feature selection entering the signal separation procedure.
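
    A minimal stand-in for the nested separation step (not the authors' adaptive estimator or their Fourier-based computation): per-class kernel density estimates combined with Bayes' rule to separate signal from background in a toy two-dimensional feature space with equal priors. The toy distributions are assumptions.

```python
# Per-class kernel density estimates used as a simple Bayes separator.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
signal = rng.normal(loc=[1.0, 1.0], scale=0.8, size=(500, 2))
background = rng.normal(loc=[-0.5, -0.5], scale=1.2, size=(500, 2))

kde_s = gaussian_kde(signal.T)        # gaussian_kde expects shape (n_dims, n_points)
kde_b = gaussian_kde(background.T)

X_test = np.vstack([rng.normal([1.0, 1.0], 0.8, (200, 2)),
                    rng.normal([-0.5, -0.5], 1.2, (200, 2))])
y_test = np.r_[np.ones(200), np.zeros(200)]

score = kde_s(X_test.T) / (kde_s(X_test.T) + kde_b(X_test.T))   # posterior for equal priors
print("AUC:", round(roc_auc_score(y_test, score), 3))
```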

  15. Rebootless Linux Kernel Patching with Ksplice Uptrack at BNL

    International Nuclear Information System (INIS)

    Hollowell, Christopher; Pryor, James; Smith, Jason

    2012-01-01

    Ksplice/Oracle Uptrack is a software tool and update subscription service which allows system administrators to apply security and bug-fix patches to the Linux kernel running on servers/workstations without rebooting them. The RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has deployed Uptrack on nearly 2,000 hosts running Scientific Linux and Red Hat Enterprise Linux. The use of this software has minimized downtime and improved our security posture. In this paper, we provide an overview of Ksplice's rebootless kernel patch creation/insertion mechanism, and our experiences with Uptrack.

  16. Employment of kernel methods on wind turbine power performance assessment

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Sweeney, Christian Walsted; Marhadi, Kun S.

    2015-01-01

    A power performance assessment technique is developed for the detection of power production discrepancies in wind turbines. The method employs a widely used nonparametric pattern recognition technique, the kernel methods. The evaluation is based on the trending of a feature extracted from the kernel matrix, called the similarity index, which is introduced by the authors for the first time. The operation of the turbine, and consequently the computation of the similarity indexes, is classified into five power bins, offering better resolution and thus more consistent root cause analysis. The accurate
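
    The similarity index is the authors' construction and its exact definition is not reproduced here; the sketch below only illustrates the general idea of summarising a kernel matrix between new operating data and a healthy reference set, binned by power. All names, bin edges, and the max-similarity summary are assumptions.

```python
# Illustrative kernel-matrix summary per power bin (not the authors' similarity index).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 3))              # healthy-period features (assumed)
new_data = rng.normal(size=(200, 3))               # new operating data (assumed)
power = rng.uniform(0, 2000, size=200)             # kW, used only for binning (assumed)

bins = np.digitize(power, [400, 800, 1200, 1600])  # five power bins (assumed edges)
K = rbf_kernel(new_data, reference, gamma=0.5)
index = K.max(axis=1)                              # similarity to the closest reference point

for b in range(5):
    if np.any(bins == b):
        print("bin", b, "mean similarity:", round(index[bins == b].mean(), 3))
```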

  17. Sparse kernel orthonormalized PLS for feature extraction in large datasets

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai

    2006-01-01

    In this paper we present a novel multivariate analysis method for large scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constraints in the solution to improve scalability. The algorithm is tested on a benchmark of UCI data sets, and on the analysis of integrated short-time music features for genre prediction. The upshot is that the method has strong expressive power even with rather few features, is clearly outperforming the ordinary kernel PLS, and therefore is an appealing method...

  18. Fully-Automated High-Throughput NMR System for Screening of Haploid Kernels of Maize (Corn) by Measurement of Oil Content.

    Directory of Open Access Journals (Sweden)

    Hongzhi Wang

    Full Text Available One of the modern crop breeding techniques uses doubled haploid plants that contain an identical pair of chromosomes in order to accelerate the breeding process. Rapid haploid identification method is critical for large-scale selections of double haploids. The conventional methods based on the color of the endosperm and embryo seeds are slow, manual and prone to error. On the other hand, there exists a significant difference between diploid and haploid seeds generated by high oil inducer, which makes it possible to use oil content to identify the haploid. This paper describes a fully-automated high-throughput NMR screening system for maize haploid kernel identification. The system is comprised of a sampler unit to select a single kernel to feed for measurement of NMR and weight, and a kernel sorter to distribute the kernel according to the measurement result. Tests of the system show a consistent accuracy of 94% with an average screening time of 4 seconds per kernel. Field test result is described and the directions for future improvement are discussed.

  19. Fully-Automated High-Throughput NMR System for Screening of Haploid Kernels of Maize (Corn) by Measurement of Oil Content

    Science.gov (United States)

    Xu, Xiaoping; Huang, Qingming; Chen, Shanshan; Yang, Peiqiang; Chen, Shaojiang; Song, Yiqiao

    2016-01-01

    One of the modern crop breeding techniques uses doubled haploid plants that contain an identical pair of chromosomes in order to accelerate the breeding process. Rapid haploid identification method is critical for large-scale selections of double haploids. The conventional methods based on the color of the endosperm and embryo seeds are slow, manual and prone to error. On the other hand, there exists a significant difference between diploid and haploid seeds generated by high oil inducer, which makes it possible to use oil content to identify the haploid. This paper describes a fully-automated high-throughput NMR screening system for maize haploid kernel identification. The system is comprised of a sampler unit to select a single kernel to feed for measurement of NMR and weight, and a kernel sorter to distribute the kernel according to the measurement result. Tests of the system show a consistent accuracy of 94% with an average screening time of 4 seconds per kernel. Field test result is described and the directions for future improvement are discussed. PMID:27454427

  20. Supervised Kernel Optimized Locality Preserving Projection with Its Application to Face Recognition and Palm Biometrics

    Directory of Open Access Journals (Sweden)

    Chuang Lin

    2015-01-01

    Full Text Available The Kernel Locality Preserving Projection (KLPP) algorithm can effectively preserve the neighborhood structure of the database using the kernel trick. It is known that supervised KLPP (SKLPP) can preserve within-class geometric structures by using label information. However, the conventional SKLPP algorithm suffers from the kernel selection problem, which has a significant impact on its performance. In order to overcome this limitation, a method named supervised kernel optimized LPP (SKOLPP) is proposed in this paper, which can maximize the class separability in kernel learning. The proposed method maps the data from the original space to a higher dimensional kernel space using a data-dependent kernel. The adaptive parameters of the data-dependent kernel are automatically calculated by optimizing an objective function. Consequently, the nonlinear features extracted by SKOLPP have larger discriminative ability compared with SKLPP and are more adaptive to the input data. Experimental results on the ORL, Yale, AR, and Palmprint databases showed the effectiveness of the proposed method.
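
    A sketch of the data-dependent kernel family such methods build on, namely a conformal transformation k(x, y) = q(x) q(y) k0(x, y) of a base kernel; in SKOLPP the adaptive parameters of q are obtained by optimising a class-separability objective, which is not reproduced here, so the coefficients below are arbitrary placeholders.

```python
# Data-dependent (conformally transformed) kernel with placeholder adaptive parameters.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
expansion_vectors = X[:5]                       # a few "expansion" points (assumed)
alpha = np.array([0.3, 0.2, 0.1, 0.25, 0.15])   # placeholder adaptive parameters
a0 = 1.0

def factor(Z, gamma=0.5):
    # q(z) = a0 + sum_i alpha_i k1(z, x_i)
    return a0 + rbf_kernel(Z, expansion_vectors, gamma=gamma) @ alpha

def data_dependent_kernel(A, B, gamma=0.5):
    # k(a, b) = q(a) q(b) k0(a, b)
    return np.outer(factor(A, gamma), factor(B, gamma)) * rbf_kernel(A, B, gamma=gamma)

K = data_dependent_kernel(X, X)
print(K.shape, np.allclose(K, K.T))
```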