WorldWideScience

Sample records for response kernel model

  1. Linear Kernel Support Vector Machines for Modeling Pore-Water Pressure Responses

    KHAMARUZAMAN W. YUSOF

    2017-08-01

    Pore-water pressure responses are vital in many aspects of slope management, design and monitoring. Their measurement, however, is difficult, expensive and time consuming, and studies on their prediction are lacking. Support vector machines with a linear kernel were used here to predict the response of pore-water pressure to rainfall. Pore-water pressure response data were collected from a slope instrumentation program. Support vector machine meta-parameter calibration and model development were carried out using grid search and k-fold cross-validation. The mean square error for the model on scaled test data is 0.0015 and the coefficient of determination is 0.9321. Although pore-water pressure response to rainfall is a complex nonlinear process, linear kernel support vector machines can be employed where some accuracy can be sacrificed for computational ease and speed.
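The calibration workflow this abstract describes (linear-kernel SVM, grid search, k-fold cross-validation, scaled data) can be sketched as follows; the synthetic data, parameter grid, and scaling choices are illustrative assumptions, not the study's actual setup.

```python
# Sketch of linear-kernel SVR tuned by grid search with 5-fold CV.
# Rainfall/pressure values below are synthetic stand-ins for the
# (unavailable) slope instrumentation records.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
rainfall = rng.uniform(0, 50, size=(200, 1))                # mm of rainfall (synthetic)
pressure = 0.4 * rainfall[:, 0] + rng.normal(0, 2, 200)     # pore-water pressure proxy

X = MinMaxScaler().fit_transform(rainfall)                  # scale inputs, as in the abstract
y = pressure

# Grid search over the SVR meta-parameters, scored by (negative) MSE.
grid = GridSearchCV(
    SVR(kernel="linear"),
    param_grid={"C": [0.1, 1, 10, 100], "epsilon": [0.01, 0.1, 1.0]},
    cv=5,
    scoring="neg_mean_squared_error",
)
grid.fit(X, y)
print(grid.best_params_)
```

The same pattern extends directly to nonlinear kernels by widening `param_grid`.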

  2. Kernel integration scatter model for parallel beam gamma camera and SPECT point source response

    Marinkovic, P.M.

    2001-01-01

    Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper a kernel integration scatter model for parallel beam gamma camera and SPECT point source response, based on the Klein-Nishina formula, is proposed. This method models the primary photon distribution as well as first-order Compton scattering. It also includes a correction for multiple scattering, by applying a point isotropic single-medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma-ray attenuation in the imaged object, based on a known μ-map distribution, is considered as well. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply, using acceptance angles derived from its physical dimensions: any gamma ray satisfying this angle is passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with Monte Carlo MCNP-4a numerical phantom simulations, and excellent agreement was obtained. Physical phantom experiments to confirm the method are planned. (author)

  3. Kernel regression with functional response

    Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe

    2011-01-01

    We consider the kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost-complete convergence are stated as a function of the small-ball probability of the predictor and of the entropy of the set on which uniformity is obtained.

  4. Model selection for Gaussian kernel PCA denoising

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; here we augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics for choosing the model order and kernel scale in terms of signal-to-noise ratio (SNR)...
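The core idea of Parallel Analysis transferred to kernel PCA can be sketched as below: keep eigenvalues of the centred kernel matrix only while they exceed those of column-permuted (decorrelated) data. The scale choice and number of permutations are simplified guesses, not the authors' exact procedure.

```python
# Minimal Parallel-Analysis-style model order selection for Gaussian kernel PCA.
import numpy as np

def centred_kernel_eigvals(X, gamma):
    # Gaussian (RBF) kernel matrix, double-centred as in kernel PCA.
    K = np.exp(-gamma * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2))
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.sort(np.linalg.eigvalsh(H @ K @ H))[::-1]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 5)),
               rng.normal(3, 1, (50, 5))])     # two clusters -> genuine structure
gamma = 1.0 / X.shape[1]

ev = centred_kernel_eigvals(X, gamma)
# Null eigenvalues: permute each column independently to destroy joint structure.
null = np.mean([centred_kernel_eigvals(
    np.column_stack([rng.permutation(c) for c in X.T]), gamma)
    for _ in range(20)], axis=0)

order = int(np.sum(ev > null))                 # retained model order
print(order)
```

In the paper the same permutation test is additionally used to tune the kernel scale; here gamma is simply fixed.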

  5. An SVM model with hybrid kernels for hydrological time series

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecasting of climate/weather and its impact on other environmental variables, such as the hydrologic response to climate/weather. When using an SVM, the choice of kernel function plays a key role. Conventional SVM models mostly use a single type of kernel function, e.g., the radial basis kernel. Given that several featured kernel functions are available, each with its own advantages and drawbacks, a combination of these kernel functions may lend more flexibility and robustness to the SVM approach, making it suitable for a wider range of application scenarios. This paper presents such a linear combination of a radial basis kernel and a polynomial kernel for forecasting monthly flow rate at two gaging stations using the SVM approach. The results indicate a significant improvement in the accuracy of the predicted series compared to approaches using either kernel function alone, demonstrating the feasibility and advantages of such a hybrid kernel approach for SVM applications.
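A convex combination of two valid kernels is itself a valid kernel, so the hybrid can be passed to an SVM as a precomputed Gram matrix. The sketch below illustrates the idea; the weight, kernel parameters, and synthetic data are assumptions, not the study's values.

```python
# Hybrid kernel: weighted sum of RBF and polynomial kernels, used with SVR
# via a precomputed Gram matrix.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def hybrid_kernel(A, B, w=0.7, gamma=0.5, degree=2):
    # w in [0, 1] keeps the combination positive semidefinite.
    return w * rbf_kernel(A, B, gamma=gamma) + (1 - w) * polynomial_kernel(A, B, degree=degree)

rng = np.random.default_rng(2)
X_train = rng.uniform(-1, 1, (80, 3))          # stand-in for monthly predictors
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=80)
X_test = rng.uniform(-1, 1, (20, 3))

model = SVR(kernel="precomputed", C=10.0)
model.fit(hybrid_kernel(X_train, X_train), y_train)
# At prediction time the Gram matrix is between test and *training* points.
y_pred = model.predict(hybrid_kernel(X_test, X_train))
print(y_pred.shape)
```

The weight w can itself be treated as a tuning parameter and selected by cross-validation.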

  6. Improved modeling of clinical data with kernel methods.

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of the relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patient similarity by equalizing the influence of all variables and taking into account the range r of each variable. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems.
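The range-normalised similarity described above can be sketched as follows for continuous variables: each variable with observed range r contributes (r - |x - z|) / r, and the kernel averages these contributions so every variable weighs equally. This follows the paper's description only at a high level; the handling of ordinal and nominal variables is omitted, and the toy data are invented.

```python
# Sketch of a range-normalised "clinical" kernel for continuous variables.
import numpy as np

def clinical_kernel(A, B, ranges):
    """Gram matrix between rows of A and B; `ranges` holds each variable's range r."""
    K = np.zeros((A.shape[0], B.shape[0]))
    for j, r in enumerate(ranges):
        diff = np.abs(A[:, [j]] - B[None, :, j])   # pairwise |x_j - z_j|
        K += (r - diff) / r                         # per-variable similarity in [.., 1]
    return K / len(ranges)                          # equalise variable influence

# Toy clinical data: age in years and a 0-10 symptom score (very different ranges).
X = np.array([[35.0, 2.0], [70.0, 8.0], [36.0, 2.5]])
ranges = X.max(axis=0) - X.min(axis=0)
K = clinical_kernel(X, X, ranges)
print(np.round(K, 3))
```

Note how patients 0 and 2 come out highly similar even though a linear kernel on the raw values would be dominated by the age variable.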

  7. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function...

  8. Modeling of the endosperm crush response profile of hard red spring wheat using a single kernel characterization system

    When a wheat endosperm is crushed, the force profile shows a viscoelastic response, and the modulus of elasticity is an important parameter that may have a substantial influence on wheat milling. An experiment was performed to model the endosperm crush response profile (ECRP) and to determine the modulus o...

  9. Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...

    Estimated errors of ±0.18 and ±0.2 are envisaged when applying the models for predicting palm kernel and sesame oil colours, respectively. Keywords: Palm kernel, Sesame, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38 ...

  10. Model Selection in Kernel Ridge Regression

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated with all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely...

  11. Model selection in kernel ridge regression

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique for performing ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. The method is gaining popularity as a data-rich nonlinear forecasting tool, applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated with all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...
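The recommendation in these two records, selecting the kernel and its tuning parameters from a small grid by cross-validation, is easy to put into practice; the grid values below are placeholders, not the paper's actual rules of thumb.

```python
# Kernel ridge regression tuned over a small kernel/hyperparameter grid by CV.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=150)

grid = GridSearchCV(
    KernelRidge(),
    param_grid=[
        {"kernel": ["rbf"], "alpha": [1e-2, 1e-1, 1], "gamma": [0.1, 1.0]},
        {"kernel": ["polynomial"], "alpha": [1e-2, 1e-1, 1], "degree": [2, 3]},
    ],
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_["kernel"])
```

The Sinc kernel discussed in the paper is not built into scikit-learn, but could be supplied as a callable kernel in the same grid.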

  12. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of a large-scale evolving feature model, yet the details of its evolution are not known. We present here a classification of the feature changes occurring in the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically...

  13. Differential metabolome analysis of field-grown maize kernels in response to drought stress

    Drought stress constrains maize kernel development and can exacerbate aflatoxin contamination. In order to identify drought responsive metabolites and explore pathways involved in kernel responses, a metabolomics analysis was conducted on kernels from a drought tolerant line, Lo964, and a sensitive ...

  14. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on the visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generating global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance imaging (fMRI) data based on visual stimuli...

  15. Omnibus risk assessment via accelerated failure time kernel machine modeling.

    Sinnott, Jennifer A; Cai, Tianxi

    2013-12-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.

  16. Calculation of the thermal neutron scattering kernel using the synthetic model. Pt. 2. Zero-order energy transfer kernel

    Drozdowicz, K.

    1995-01-01

    A comprehensive unified description of the application of Granada's Synthetic Model to slow-neutron scattering by molecular systems is continued. Detailed formulae for the zero-order energy transfer kernel are presented, based on the general formalism of the model. An explicit analytical formula for the total scattering cross section as a function of the incident neutron energy is also obtained. Expressions of the free gas model for the zero-order scattering kernel and for the total scattering kernel are considered as a sub-case of the Synthetic Model. (author). 10 refs

  17. Spectral Kernel Approach to Study Radiative Response of Climate Variables and Interannual Variability of Reflected Solar Spectrum

    Jin, Zhonghai; Wielicki, Bruce A.; Loukachine, Constantin; Charlock, Thomas P.; Young, David; Noël, Stefan

    2011-01-01

    The radiative kernel approach provides a simple way to separate the radiative response to different climate parameters and to decompose the feedback into radiative and climate response components. Using CERES/MODIS/Geostationary data, we calculated and analyzed the solar spectral reflectance kernels for various climate parameters on zonal, regional, and global spatial scales. The linearity of the kernels is tested. Errors in the kernels due to nonlinearity can vary strongly depending on climate parameter, wavelength, surface, and solar elevation; they are large in some absorption bands for some parameters but are negligible in most conditions. The spectral kernels are used to calculate the radiative responses to changes in different climate parameters in different latitudes. The results show that the radiative response in high latitudes is sensitive to the coverage of snow and sea ice. The radiative response in low latitudes is contributed mainly by cloud property changes, especially cloud fraction and optical depth. The large cloud height effect is confined to absorption bands, while the cloud particle size effect is found mainly in the near infrared. The kernel approach, which is based on calculations using CERES retrievals, is then tested by direct comparison with spectral measurements from the Scanning Imaging Absorption Spectrometer for Atmospheric Cartography (SCIAMACHY) (a different instrument on a different spacecraft). The monthly mean interannual variability of spectral reflectance based on the kernel technique is consistent with satellite observations over the ocean, but not over land, where both the model and the data have large uncertainty. RMS errors in kernel-derived monthly global mean reflectance over the ocean, compared to observations, are about 0.001, and the sampling error is likely a major component.
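At its core, the kernel decomposition above is a first-order expansion: the reflectance response is each parameter's kernel (a partial derivative) times that parameter's anomaly, summed over parameters and evaluated per spectral band. The numbers below are invented purely to show the arithmetic, not CERES-derived kernels.

```python
# Linear radiative-kernel decomposition: dR = sum_i K_i * dx_i, per band.
import numpy as np

# Hypothetical kernels dR/dx for three parameters across 5 spectral bands.
K = np.array([
    [0.30, 0.25, 0.20, 0.10, 0.05],   # cloud fraction
    [0.02, 0.03, 0.02, 0.01, 0.01],   # cloud optical depth
    [0.15, 0.10, 0.08, 0.05, 0.02],   # snow/ice cover
])
dx = np.array([0.05, 2.0, -0.10])     # parameter anomalies (made up)

dR = K.T @ dx                         # per-band reflectance response
print(np.round(dR, 4))
```

The linearity test mentioned in the abstract amounts to checking how well this sum reproduces the reflectance change computed with all parameters perturbed at once.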

  18. Modelling microwave heating of discrete samples of oil palm kernels

    Law, M.C.; Liew, E.L.; Chang, S.L.; Chan, Y.S.; Leo, C.P.

    2016-01-01

    Highlights: • Microwave (MW) drying of oil palm kernels is experimentally determined and modelled. • MW heating of discrete samples of oil palm kernels (OPKs) is simulated. • OPK heating is due to contact effects, MW interference and heat transfer mechanisms. • Electric field vectors circulate within the OPK sample. • A loosely-packed arrangement improves the temperature uniformity of OPKs. - Abstract: Recently, microwave (MW) pre-treatment of fresh palm fruits has been shown to be more environmentally friendly than the existing oil palm milling process, as it eliminates the condensate production of palm oil mill effluent (POME) in the sterilization process. Moreover, MW-treated oil palm fruits (OPF) also possess better oil quality. In this work, the MW drying kinetics of oil palm kernels (OPK) were determined experimentally. Microwave heating/drying of oil palm kernels was modelled and validated. The simulation results show that the temperature of an OPK is not uniform over its entire surface, due to constructive and destructive interference of the MW irradiance. The volume-averaged temperature of an OPK is higher than its surface temperature by 3–7 °C, depending on the MW input power. This implies that a point measurement of temperature is inadequate to determine the temperature history of an OPK during the microwave heating process. The simulation results also show that the arrangement of OPKs in a MW cavity affects the kernel temperature profile. The heating of OPKs was identified to be affected by factors such as local electric field intensity due to MW absorption, refraction, interference, the contact effect between kernels, and heat transfer mechanisms. The thermal gradient patterns of OPKs change as heating continues. Cracking of OPKs is expected to occur first in the core of the kernel and then propagate to the kernel surface. The model indicates that drying of OPKs is a much slower process than MW heating. The model is useful...

  19. Comparison of Kernel Equating and Item Response Theory Equating Methods

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  20. Analysis of Drude model using fractional derivatives without singular kernels

    Jiménez Leonardo Martínez

    2017-11-01

    We report a study exploring the fractional Drude model in the time domain, using fractional derivatives without singular kernels, namely the Caputo-Fabrizio (CF) derivative and a fractional derivative with a stretched Mittag-Leffler function. It is shown that the velocity and current density of electrons moving through a metal depend on both the time and the fractional order 0 < γ ≤ 1. Owing to the non-singular fractional kernels, it is possible to account for complete memory effects in the model, which appear neither in the ordinary model nor in the fractional Drude model with the Caputo fractional derivative. A comparison is also made between these two representations of the fractional derivatives, revealing a considerable difference when γ < 0.8.
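For reference, the non-singular kernel the abstract refers to is that of the Caputo-Fabrizio derivative; the standard definition from the literature (not reproduced in this record) is:

```latex
% Caputo-Fabrizio fractional derivative of order 0 < \gamma < 1:
% the exponential kernel stays finite at s = t, unlike the singular
% kernel (t-s)^{-\gamma} of the classical Caputo derivative.
\mathcal{D}_t^{\gamma} f(t)
  = \frac{M(\gamma)}{1-\gamma}
    \int_{0}^{t} f'(s)\,
    \exp\!\left(-\frac{\gamma\,(t-s)}{1-\gamma}\right) \mathrm{d}s ,
\qquad M(0) = M(1) = 1 ,
```

where M(γ) is a normalization function. The boundedness of the exponential kernel at s = t is what allows the complete memory effects mentioned in the abstract.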

  1. Modeling reconsolidation in kernel associative memory.

    Dimitri Nowicki

    Memory reconsolidation is a central process enabling adaptive memory and the perception of a constantly changing reality. It causes memories to be strengthened, weakened or changed following their recall. A computational model of memory reconsolidation is presented. Unlike Hopfield-type memory models, our model introduces an unbounded number of attractors that are updatable and can process real-valued, large, realistic stimuli. Our model replicates three characteristic effects of the reconsolidation process on human memory: increased association, extinction of fear memories, and the ability to track and follow gradually changing objects. In addition to this behavioral validation, a continuous-time version of the reconsolidation model is introduced. This version extends average-rate dynamic models of brain circuits exhibiting persistent activity to include adaptivity and an unbounded number of attractors.

  2. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…

  3. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models

    Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A.; Burgueño, Juan; Pérez-Rodríguez, Paulino; de los Campos, Gustavo

    2016-01-01

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. PMID:27793970

  4. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models

    Jaime Cuevas

    2017-01-01

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u.

  5. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models.

    Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A; Burgueño, Juan; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo

    2017-01-05

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance-covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. Copyright © 2017 Cuevas et al.
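The two kernel methods compared across these three records can be sketched from marker data as follows: the linear GBLUP kernel is G = XX'/p on the centred marker matrix, and the Gaussian kernel rescales squared marker distances by their median. The simulated markers and the bandwidth choice h are illustrative assumptions.

```python
# GBLUP (linear) and Gaussian (GK) genomic kernels from simulated SNP markers.
import numpy as np

rng = np.random.default_rng(4)
markers = rng.integers(0, 3, size=(30, 200)).astype(float)   # 30 lines, 200 SNPs (0/1/2)
X = markers - markers.mean(axis=0)                           # centre allele counts

G = X @ X.T / X.shape[1]                                     # GBLUP (linear) kernel

d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)    # squared marker distances
h = 1.0                                                      # bandwidth (illustrative)
GK = np.exp(-h * d2 / np.median(d2[d2 > 0]))                 # Gaussian kernel

print(G.shape, GK.shape)
```

Either matrix can then serve as the covariance of the genetic effects u in the Bayesian models the abstract describes.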

  6. Kuramoto model for infinite graphs with kernels

    Canale, Eduardo

    2015-01-07

    In this paper we study the Kuramoto model of weakly coupled oscillators for the case of a non-trivial network with a large number of nodes. We approximate such configurations by a McKean-Vlasov stochastic differential equation based on an infinite graph. We focus on circulant graphs, which have enough symmetries to make the computations easier. We then focus on the asymptotic regime, where an integro-partial differential equation is derived. Numerical analysis and convergence proofs of the Fokker-Planck-Kolmogorov equation are conducted. Finally, we provide numerical examples that illustrate the convergence of our method.
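The finite-n setting the paper starts from can be illustrated with a direct simulation: Kuramoto oscillators coupled on a circulant graph (each node linked to its k nearest neighbours on a ring), integrated with explicit Euler. This is only a numerical sketch with made-up parameters, not the McKean-Vlasov or Fokker-Planck analysis of the paper.

```python
# Kuramoto oscillators on a circulant (k-nearest-neighbour ring) graph.
import numpy as np

n, k, K_coupling, dt, steps = 50, 3, 2.0, 0.01, 2000
rng = np.random.default_rng(5)
theta = rng.uniform(0, 2 * np.pi, n)
omega = rng.normal(0, 0.1, n)                 # natural frequencies

# Circulant adjacency: node i is linked to i±1, ..., i±k (mod n).
A = np.zeros((n, n))
for i in range(n):
    for j in range(1, k + 1):
        A[i, (i + j) % n] = A[i, (i - j) % n] = 1.0

for _ in range(steps):
    # coupling[i] = sum_j A_ij * sin(theta_j - theta_i)
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K_coupling / (2 * k) * coupling)

r = abs(np.exp(1j * theta).mean())            # Kuramoto order parameter in [0, 1]
print(round(r, 3))
```

With local coupling the system need not fully synchronize; twisted states on the ring are possible, which is part of why the infinite-graph limit studied in the paper is interesting.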

  7. Spatial Modeling Of Infant Mortality Rate In South Central Timor Regency Using GWLR Method With Adaptive Bisquare Kernel And Gaussian Kernel

    Teguh Prawono Sabat

    2017-08-01

    Geographically Weighted Logistic Regression (GWLR) is a regression model that takes spatial factors into account and can be used to analyze the infant mortality rate (IMR). There were 100 cases of infant mortality in South Central Timor Regency in 2015, or 12 per 1000 live births. The aim of this study was to determine the best GWLR model, using a fixed weighting function and an adaptive Gaussian kernel, for the cases of infant mortality in South Central Timor District in 2015. The response variable (Y) in this study was a case of infant mortality, while the predictor variables were the percentage of first neonatal visits (KN1) (X1), the percentage of three neonatal visits (complete KN) (X2), the percentage of pregnant women receiving Fe tablets (X3), and the percentage of poor pre-prosperous families (X4). This was a non-reactive study, i.e. a measurement in which the individuals surveyed did not realize that they were part of a study, with the 32 sub-districts of South Central Timor District as the units of analysis. Data analysis used open source programs: Excel, R, Quantum GIS and GWR4. The best GWLR spatial model used the adaptive Gaussian kernel weighting function; the global GWLR model with the adaptive Gaussian kernel weighting function was g(x) = 0.941086 − 0.892506X4, and the local GWLR models with the adaptive bisquare kernel weighting function in 13 districts were g(x) = 0 − 0X4. The factor affecting the cases of infant mortality in 13 sub-districts of South Central Timor Regency in 2015 was the percentage of poor pre-prosperous families.
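The two weighting schemes named in this record have simple closed forms: for distance d between locations and bandwidth h, the Gaussian kernel weight is exp(-0.5 (d/h)^2), and the bisquare weight is (1 - (d/h)^2)^2 for d < h and 0 otherwise. The coordinates and bandwidth below are illustrative only.

```python
# Spatial weights used in geographically weighted regression (GWR/GWLR).
import numpy as np

def gaussian_weight(d, h):
    return np.exp(-0.5 * (d / h) ** 2)

def bisquare_weight(d, h):
    w = (1 - (d / h) ** 2) ** 2
    return np.where(d < h, w, 0.0)          # compact support: zero beyond h

coords = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])   # toy sub-district centroids
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)

W_gauss = gaussian_weight(d, h=2.0)
W_bisq = bisquare_weight(d, h=2.0)
print(np.round(W_gauss, 3))
print(np.round(W_bisq, 3))
```

In the "adaptive" variants, h varies per location (e.g. the distance to the m-th nearest neighbour) rather than being fixed globally.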

  8. Kernel Principal Component Analysis and its Applications in Face Recognition and Active Shape Models

    Wang, Quan

    2012-01-01

    Principal component analysis (PCA) is a popular tool for linear dimensionality reduction and feature extraction. Kernel PCA is the nonlinear form of PCA, which better exploits the complicated spatial structure of high-dimensional features. In this paper, we first review the basic ideas of PCA and kernel PCA. Then we focus on the reconstruction of pre-images for kernel PCA. We also give an introduction on how PCA is used in active shape models (ASMs), and discuss how kernel PCA can be applied ...
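A minimal kernel-PCA example in the spirit of this review, including approximate pre-image reconstruction: scikit-learn's `fit_inverse_transform` learns the inverse map by ridge regression, which is one of several pre-image strategies discussed in the literature, not necessarily the one reviewed here.

```python
# Kernel PCA on a noisy circle, with approximate pre-images in input space.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(6)
t = rng.uniform(0, 2 * np.pi, 100)
X = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * rng.normal(size=(100, 2))

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True, alpha=0.1)
Z = kpca.fit_transform(X)           # nonlinear principal components
X_back = kpca.inverse_transform(Z)  # approximate pre-images in input space
print(Z.shape, X_back.shape)
```

The same projection step is what active shape models would use in place of linear PCA when modelling nonlinear shape variation.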

  9. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape represents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide enough accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established following this paradigm. In the feature space, we design a linear classifier as a human model to capture user preference knowledge, which cannot be represented linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly. PMID:25879050

  10. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    Thimmisetty, Charanraj A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Zhao, Wenju [Florida State Univ., Tallahassee, FL (United States). Dept. of Scientific Computing; Chen, Xiao [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Tong, Charles H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; White, Joshua A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Atmospheric, Earth and Energy Division

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.

  11. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.

  12. Neuronal model with distributed delay: analysis and simulation study for gamma distribution memory kernel.

    Karmeshu; Gupta, Varun; Kadambari, K V

    2011-06-01

    A single neuronal model incorporating distributed delay (memory) is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE), which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of an exponential kernel has enabled the transformation of the non-Markovian model to a Markovian model in an extended state space. For the study of First Passage Time (FPT) with exponential delay kernel, the model has been transformed to a system of coupled Stochastic Differential Equations (SDEs) in two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of the weak delay kernel on the Inter-Spike Interval (ISI) distribution. A measure based on Jensen-Shannon divergence is proposed which can be used to make a choice between two competing models, viz. the distributed delay model vis-à-vis the LIF model. An interesting feature of the model is that the behavior of the coefficient of variation CV(t) of the ISI distribution with respect to the memory kernel time constant parameter η reveals that the neuron can switch from a bursting state to a non-bursting state as the noise intensity parameter changes. The membrane potential exhibits a decaying auto-correlation structure with or without damped oscillatory behavior depending on the choice of parameters. This behavior is in agreement with the empirically observed pattern of spike counts in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay with a memory kernel having the form of a Gamma distribution. In contrast to the fast decay of the damped oscillations of the ISI distribution for the model with the weak delay kernel, the decay of the damped oscillations is found to be slower for the model with the strong delay kernel.

  13. Modeling reactive transport with particle tracking and kernel estimators

    Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-04-01

    Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate this system, an infinite number of particles is required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly mixed system which is limited by diffusion. Recent works have used this effect to actually model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect should in most cases be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled based on alternative conceptual models and not on a limited number of particles.
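
The contrast the authors draw, binned particle counts versus a kernel density estimate of concentration, can be illustrated in one dimension. The Gaussian plume, particle count, grid, and bandwidth below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def box_concentration(xp, edges):
    """Naive PTM post-processing: particle mass fraction per unit length, binned."""
    counts, _ = np.histogram(xp, bins=edges)
    return counts / (len(xp) * np.diff(edges))

def kde_concentration(xp, xgrid, h):
    """Gaussian kernel density estimate of the particle cloud with bandwidth h."""
    u = (xgrid[:, None] - xp[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(xp) * h * np.sqrt(2.0 * np.pi))

# A diffusing plume: particle positions drawn from N(0, sigma^2), so the
# exact well-mixed concentration profile is the Gaussian density itself.
rng = np.random.default_rng(1)
sigma = 1.0
xp = rng.normal(0.0, sigma, 2000)
xgrid = np.linspace(-4.0, 4.0, 81)
true_c = np.exp(-xgrid**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
binned = box_concentration(xp, np.linspace(-4.0, 4.0, 17))
smooth = kde_concentration(xp, xgrid, h=0.3)
```

With a finite particle count, the KDE profile tracks the well-mixed solution far more closely than the blocky binned estimate, which is the effect the record exploits.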

  14. Using Cochran's Z Statistic to Test the Kernel-Smoothed Item Response Function Differences between Focal and Reference Groups

    Zheng, Yinggan; Gierl, Mark J.; Cui, Ying

    2010-01-01

    This study combined the kernel smoothing procedure and a nonparametric differential item functioning statistic--Cochran's Z--to statistically test the difference between the kernel-smoothed item response functions for reference and focal groups. Simulation studies were conducted to investigate the Type I error and power of the proposed…

  15. Abiotic stress growth conditions induce different responses in kernel iron concentration across genotypically distinct maize inbred varieties

    Kandianis, Catherine B.; Michenfelder, Abigail S.; Simmons, Susan J.; Grusak, Michael A.; Stapleton, Ann E.

    2013-01-01

    The improvement of grain nutrient profiles for essential minerals and vitamins through breeding strategies is a target important for agricultural regions where nutrient poor crops like maize contribute a large proportion of the daily caloric intake. Kernel iron concentration in maize exhibits a broad range. However, the magnitude of genotype by environment (GxE) effects on this trait reduces the efficacy and predictability of selection programs, particularly when challenged with abiotic stress such as water and nitrogen limitations. Selection has also been limited by an inverse correlation between kernel iron concentration and the yield component of kernel size in target environments. Using 25 maize inbred lines for which extensive genome sequence data is publicly available, we evaluated the response of kernel iron density and kernel mass to water and nitrogen limitation in a managed field stress experiment using a factorial design. To further understand GxE interactions we used partition analysis to characterize response of kernel iron and weight to abiotic stressors among all genotypes, and observed two patterns: one characterized by higher kernel iron concentrations in control over stress conditions, and another with higher kernel iron concentration under drought and combined stress conditions. Breeding efforts for this nutritional trait could exploit these complementary responses through combinations of favorable allelic variation from these already well-characterized genetic stocks. PMID:24363659

  16. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Lund, Torben Ellegaard

    2011-01-01

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM) are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on v...

  17. Evaluating and interpreting the chemical relevance of the linear response kernel for atoms II: open shell.

    Boisdenghien, Zino; Fias, Stijn; Van Alsenoy, Christian; De Proft, Frank; Geerlings, Paul

    2014-07-28

    Most of the work done on the linear response kernel χ(r,r') has focussed on its atom-atom condensed form χAB. Our previous work [Boisdenghien et al., J. Chem. Theory Comput., 2013, 9, 1007] was the first effort to truly focus on the non-condensed form of this function for closed (sub)shell atoms in a systematic fashion. In this work, we extend our method to the open shell case. To simplify plotting, we average our results to a symmetric quantity χ(r,r'). This allows us to plot the linear response kernel for all elements up to and including argon and to investigate the periodicity throughout the first three rows of the periodic table and in the different representations of χ(r,r'). Within the context of Spin Polarized Conceptual Density Functional Theory, the first two-dimensional plots of spin polarized linear response functions are presented and commented on for some selected cases on the basis of the atomic ground state electronic configurations. Using the relation between the linear response kernel and the polarizability, we compare the values of the polarizability tensor calculated using our method to high-level values.

  18. DNA sequence+shape kernel enables alignment-free modeling of transcription factor binding.

    Ma, Wenxiu; Yang, Lin; Rohs, Remo; Noble, William Stafford

    2017-10-01

    Transcription factors (TFs) bind to specific DNA sequence motifs. Several lines of evidence suggest that TF-DNA binding is mediated in part by properties of the local DNA shape: the width of the minor groove, the relative orientations of adjacent base pairs, etc. Several methods have been developed to jointly account for DNA sequence and shape properties in predicting TF binding affinity. However, a limitation of these methods is that they typically require a training set of aligned TF binding sites. We describe a sequence + shape kernel that leverages DNA sequence and shape information to better understand protein-DNA binding preference and affinity. This kernel extends an existing class of k-mer based sequence kernels, based on the recently described di-mismatch kernel. Using three in vitro benchmark datasets, derived from universal protein binding microarrays (uPBMs), genomic context PBMs (gcPBMs) and SELEX-seq data, we demonstrate that incorporating DNA shape information improves our ability to predict protein-DNA binding affinity. In particular, we observe that (i) the k-spectrum + shape model performs better than the classical k-spectrum kernel, particularly for small k values; (ii) the di-mismatch kernel performs better than the k-mer kernel, for larger k; and (iii) the di-mismatch + shape kernel performs better than the di-mismatch kernel for intermediate k values. The software is available at https://bitbucket.org/wenxiu/sequence-shape.git. rohs@usc.edu or william-noble@uw.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
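
The k-spectrum kernel that the di-mismatch and sequence+shape kernels extend is simple to state: map each sequence to its vector of k-mer counts and take the inner product. A minimal sketch of that base kernel only (the toy sequences are illustrative; the mismatch and shape extensions are beyond this snippet):

```python
from collections import Counter

def spectrum_features(seq, k):
    """k-mer counts of a DNA sequence (the classical k-spectrum feature map)."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s1, s2, k=3):
    """Inner product of the two sequences' k-mer count vectors."""
    f1, f2 = spectrum_features(s1, k), spectrum_features(s2, k)
    return sum(count * f2[kmer] for kmer, count in f1.items())

# Two toy binding-site sequences sharing most of their 3-mers.
k_ab = spectrum_kernel("GATTACA", "GATTTACA", k=3)  # shared 3-mer count
```

Because the kernel depends only on k-mer content, no alignment of the binding sites is needed, which is the property the record's alignment-free method builds on.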

  19. Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling

    Hyojin Lee

    2015-01-01

    Full Text Available Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely, Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares estimation of missing precipitation data through Kth nearest neighborhood (KNN) regression to the five different kernel estimations and their performance in simulating streamflow using the Soil Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
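
A minimal sketch of distance-based kernel weighting with the Epanechnikov kernel, one of the five kernels compared in this record; the station layout, rainfall values, and bandwidth are hypothetical:

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel K(u) = 0.75 (1 - u^2) on |u| <= 1, zero outside."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def interpolate_missing(target_xy, station_xy, station_p, bandwidth):
    """Estimate a missing gauge value as a kernel-weighted average of neighbors."""
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    w = epanechnikov(d / bandwidth)          # nearer stations get larger weights
    if w.sum() == 0.0:
        raise ValueError("no station within the kernel bandwidth")
    return float(np.dot(w, station_p) / w.sum())

# Hypothetical gauges around a target site at the origin; the farthest
# station falls outside the bandwidth and is excluded automatically.
stations = np.array([[1.0, 0.0], [0.0, 2.0], [-1.5, -0.5], [4.0, 4.0]])
rainfall = np.array([10.0, 14.0, 12.0, 30.0])   # mm on the day with the gap
est = interpolate_missing(np.zeros(2), stations, rainfall, bandwidth=3.0)
```

Swapping `epanechnikov` for the Quartic, Triweight, Tricube, or Cosine kernel changes only the weighting function, which is how the five estimators in the study differ.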

  20. Propagation of Uncertainty in Bayesian Kernel Models - Application to Multiple-Step Ahead Forecasting

    Quinonero, Joaquin; Girard, Agathe; Larsen, Jan

    2003-01-01

    The object of Bayesian modelling is the predictive distribution, which, in a forecasting scenario, enables evaluation of forecasted values and their uncertainties. We focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel based models such as the Gaussian process and the relevance vector machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting...
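
For the static case with a known (noise-free) test input, the Gaussian process predictive mean and variance the authors build on have the standard closed form; a minimal NumPy sketch, where the kernel width, noise level, and sine test function are illustrative:

```python
import numpy as np

def gp_predict(X, y, Xs, gamma=1.0, noise=1e-2):
    """GP regression with an RBF kernel: closed-form predictive mean and
    variance at test inputs Xs (noise = observation noise variance)."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks, Kss = k(X, Xs), k(Xs, Xs)
    L = np.linalg.cholesky(K)                         # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha                               # predictive mean
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)         # predictive variance
    return mean, var

X = np.linspace(0.0, 2.0 * np.pi, 20)[:, None]
y = np.sin(X).ravel()
Xs = np.array([[np.pi / 2], [10.0]])    # one in-range input, one far away
mean, var = gp_predict(X, y, Xs)
```

The variance collapses near the training data and rises toward the prior far from it; the paper's contribution is propagating an *uncertain* (Gaussian-distributed) test input through these formulas, which this sketch does not do.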

  1. Auto-associative Kernel Regression Model with Weighted Distance Metric for Instrument Drift Monitoring

    Shin, Ho Cheol; Park, Moon Ghu; You, Skin

    2006-01-01

    Recently, many on-line approaches to instrument channel surveillance (drift monitoring and fault detection) have been reported worldwide. On-line monitoring (OLM) methods evaluate instrument channel performance by assessing its consistency with other plant indications through parametric or non-parametric models. The heart of an OLM system is the model giving an estimate of the true process parameter value against individual measurements. This model gives a process parameter estimate calculated as a function of other plant measurements, which can be used to identify small sensor drifts that would require the sensor to be manually calibrated or replaced. This paper describes an improvement of auto-associative kernel regression (AAKR) obtained by introducing a correlation coefficient weighting on kernel distances. The prediction performance of the developed method is compared with that of conventional auto-associative kernel regression.
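
The conventional AAKR estimate that this record improves upon is a kernel-weighted average over a memory matrix of fault-free historical observations; the record's contribution is to weight the kernel distances by correlation coefficients, which this sketch omits. Bandwidth, channel layout, and drift magnitude below are illustrative:

```python
import numpy as np

def aakr_predict(query, memory, h=1.0):
    """Plain auto-associative kernel regression: the corrected signal vector is
    a Gaussian-kernel-weighted average of fault-free historical observations."""
    d2 = np.sum((memory - query)**2, axis=1)   # distances to memory vectors
    w = np.exp(-d2 / (2.0 * h**2))             # Gaussian kernel weights
    return w @ memory / w.sum()

# Hypothetical two redundant channels that track each other in normal operation.
rng = np.random.default_rng(2)
base = rng.uniform(0.0, 10.0, 300)
memory = np.c_[base, base] + rng.normal(0.0, 0.05, (300, 2))
query = np.array([5.8, 5.0])          # channel 0 drifted 0.8 above its partner
estimate = aakr_predict(query, memory, h=0.5)
```

Because every memory vector has consistent channels, the estimate pulls the drifted reading back toward agreement with its redundant partner, which is exactly the residual an OLM system flags as drift.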

  2. QTL detection for wheat kernel size and quality and the responses of these traits to low nitrogen stress.

    Cui, Fa; Fan, Xiaoli; Chen, Mei; Zhang, Na; Zhao, Chunhua; Zhang, Wei; Han, Jie; Ji, Jun; Zhao, Xueqiang; Yang, Lijuan; Zhao, Zongwu; Tong, Yiping; Wang, Tao; Li, Junming

    2016-03-01

    QTLs for kernel characteristics and tolerance to N stress were identified, and the functions of ten known genes with regard to these traits were specified. Kernel size and quality characteristics in wheat (Triticum aestivum L.) ultimately determine the end use of the grain and affect its commodity price, both of which are influenced by the application of nitrogen (N) fertilizer. This study characterized quantitative trait loci (QTLs) for kernel size and quality and examined the responses of these traits to low-N stress using a recombinant inbred line population derived from Kenong 9204 × Jing 411. Phenotypic analyses were conducted in five trials that each included low- and high-N treatments. We identified 109 putative additive QTLs for 11 kernel size and quality characteristics and 49 QTLs for tolerance to N stress, 27 and 14 of which were stable across the tested environments, respectively. These QTLs were distributed across all wheat chromosomes except for chromosomes 3A, 4D, 6D, and 7B. Eleven QTL clusters that simultaneously affected kernel size- and quality-related traits were identified. At nine locations, 25 of the 49 QTLs for N deficiency tolerance coincided with the QTLs for kernel characteristics, indicating their genetic independence. The feasibility of indirect selection of a superior genotype for kernel size and quality under high-N conditions in breeding programs designed for a lower input management system are discussed. In addition, we specified the functions of Glu-A1, Glu-B1, Glu-A3, Glu-B3, TaCwi-A1, TaSus2, TaGS2-D1, PPO-D1, Rht-B1, and Ha with regard to kernel characteristics and the sensitivities of these characteristics to N stress. This study provides useful information for the genetic improvement of wheat kernel size, quality, and resistance to N stress.

  3. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction.

    Bandeira E Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose

    2017-06-07

    Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models were fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear kernel Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK, and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. Copyright © 2017 Bandeira e Sousa et al.
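
The two kernels compared throughout this record are both built directly from the marker matrix; a minimal sketch, where the marker coding, the median-distance bandwidth heuristic, and all sizes are illustrative rather than the paper's exact construction:

```python
import numpy as np

# Hypothetical marker matrix: 8 lines x 50 SNPs coded -1/0/1, column-centered.
rng = np.random.default_rng(4)
M = rng.integers(-1, 2, size=(8, 50)).astype(float)
M -= M.mean(axis=0)

# GB: linear genomic relationship kernel (GBLUP-style, up to scaling).
G = M @ M.T / M.shape[1]

# GK: Gaussian kernel on squared Euclidean marker distances, with the
# median nonzero distance used as a simple bandwidth heuristic.
sq = np.sum(M**2, axis=1)
d2 = sq[:, None] + sq[None, :] - 2.0 * M @ M.T
GK = np.exp(-d2 / np.median(d2[d2 > 0]))
```

Both matrices then enter the same mixed-model machinery as line-by-line covariances; the record's finding is that the nonlinear GK choice often predicts better than the linear GB one under G×E.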

  4. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction

    Massaine Bandeira e Sousa

    2017-06-01

    Full Text Available Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models were fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear kernel Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK, and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied.

  5. A new fractional derivative without singular kernel: Application to the modelling of the steady heat flow

    Yang Xiao-Jun

    2016-01-01

    Full Text Available In this article we propose a new fractional derivative without singular kernel. We consider the potential application for modeling the steady heat-conduction problem. The analytical solution of the fractional-order heat flow is also obtained by means of the Laplace transform.
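
For context, the best-known fractional derivative with a non-singular exponential kernel is the Caputo-Fabrizio operator; the operator proposed in this record belongs to the same non-singular family, though its exact kernel and normalization may differ:

```latex
D_t^{\alpha} f(t) \;=\; \frac{M(\alpha)}{1-\alpha}
  \int_{0}^{t} f'(\tau)\,
  \exp\!\left[-\frac{\alpha\,(t-\tau)}{1-\alpha}\right]\mathrm{d}\tau,
  \qquad 0<\alpha<1,
```

where \(M(\alpha)\) is a normalization function with \(M(0)=M(1)=1\). The exponential kernel stays finite as \(\tau \to t\), in contrast to the power-law kernel \((t-\tau)^{-\alpha}\) of the classical Caputo derivative, which is what "without singular kernel" refers to.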

  6. Estimation of the applicability domain of kernel-based machine learning models for virtual screening

    Fechner Nikolas

    2010-03-01

    Full Text Available Abstract Background The virtual screening of large compound databases is an important application of structure-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. Results We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace, in which the model gives reliable predictions, from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening

  7. Estimation of the applicability domain of kernel-based machine learning models for virtual screening.

    Fechner, Nikolas; Jahn, Andreas; Hinselmann, Georg; Zell, Andreas

    2010-03-11

    The virtual screening of large compound databases is an important application of structure-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace, in which the model gives reliable predictions, from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening. The proposed applicability domain formulations

  8. Robust anti-synchronization of uncertain chaotic systems based on multiple-kernel least squares support vector machine modeling

    Chen Qiang; Ren Xuemei; Na Jing

    2011-01-01

    Highlights: Model uncertainty of the system is approximated by a multiple-kernel LSSVM. Approximation errors and disturbances are compensated for in the controller design. Asymptotic anti-synchronization is achieved in the presence of model uncertainty and disturbances. Abstract: In this paper, we propose a robust anti-synchronization scheme based on multiple-kernel least squares support vector machine (MK-LSSVM) modeling for two uncertain chaotic systems. The multiple-kernel regression, which is a linear combination of basic kernels, is designed to approximate system uncertainties by constructing a multiple-kernel Lagrangian function and computing the corresponding regression parameters. Then, a robust feedback control based on MK-LSSVM modeling is presented, and an improved update law is employed to estimate the unknown bound of the approximation error. The proposed control scheme can guarantee the asymptotic convergence of the anti-synchronization errors in the presence of system uncertainties and external disturbances. Numerical examples are provided to show the effectiveness of the proposed method.

  9. Research on a Novel Kernel Based Grey Prediction Model and Its Applications

    Xin Ma

    2016-01-01

    Full Text Available The discrete grey prediction models have attracted considerable research interest due to their effectiveness in improving the modelling accuracy of the traditional grey prediction models. The autoregressive GM(1,1) model, abbreviated as ARGM(1,1), is a novel discrete grey model which is easy to use and accurate in prediction of approximate nonhomogeneous exponential time series. However, the ARGM(1,1) is essentially a linear model; thus, its applicability is still limited. In this paper a novel kernel based ARGM(1,1) model is proposed, abbreviated as KARGM(1,1). The KARGM(1,1) has a nonlinear function which can be expressed by a kernel function using the kernel method, and its modelling procedures are presented in detail. Two case studies of predicting monthly gas well production are carried out with real world production data. The results of the KARGM(1,1) model are compared to those of the existing discrete univariate grey prediction models, including ARGM(1,1), NDGM(1,1,k), DGM(1,1), and NGBMOP, and it is shown that the KARGM(1,1) outperforms the other four models.

  10. Approximate kernel competitive learning.

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to compute and keep in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
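
The subspace-via-sampling idea in AKCL is in the spirit of the Nystrom method; the sketch below shows the generic Nystrom kernel-matrix approximation, not the authors' exact algorithm, with illustrative data and landmark count:

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF kernel matrix between the rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def nystrom(X, m, gamma=1.0, seed=0):
    """Nystrom approximation K ~ C W^+ C^T from m sampled landmark points,
    so the full n x n kernel matrix never has to be formed in real use."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf(X, X[idx], gamma)            # n x m cross-kernel with the landmarks
    W = C[idx]                           # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(300, 2))
K = rbf(X, X)                            # full matrix, built here only to check error
K_approx = nystrom(X, m=50)
rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

For smooth kernels the eigenvalues decay quickly, so a modest number of landmarks already reproduces the full matrix closely, which is why sampling-based subspace methods can match full KCL at a fraction of the cost.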

  11. Analysing the Linux kernel feature model changes using FMDiff

    Dintzner, N.J.R.; van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require updating both the implementation and the feature model consistently. In this context, the evolution of the feature model closely follows the evolution of the system.

  13. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercepts of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the MDs and MDe model-method combinations including the random intercepts of the lines with the GK method offered important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances, though with lower genomic prediction accuracy. PMID:29476023
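
The two kernel methods compared throughout (linear GB and Gaussian GK) can be sketched from a marker matrix; a minimal numpy illustration, in which the marker counts, the centring convention and the median-distance bandwidth are made-up assumptions rather than the study's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(20, 100)).astype(float)  # 20 lines x 100 markers coded 0/1/2

# Linear kernel (GB): genomic relationship matrix from the centred marker matrix
Mc = M - M.mean(axis=0)
G = Mc @ Mc.T / M.shape[1]

# Gaussian kernel (GK): exp(-d_ij^2 / (h * q)), with q the median squared distance
d2 = ((M[:, None, :] - M[None, :, :]) ** 2).sum(axis=2)
h = 1.0  # bandwidth factor, commonly tuned or fixed at 1
K = np.exp(-d2 / (h * np.median(d2[d2 > 0])))
```

Either matrix can then serve as the covariance structure of the random genetic effects in a mixed model.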

  14. A scatter model for fast neutron beams using convolution of diffusion kernels

    Moyers, M.F.; Horton, J.L.; Boyer, A.L.

    1988-01-01

    A new model is proposed to calculate dose distributions in materials irradiated with fast neutron beams. Scattered neutrons are transported away from the point of production within the irradiated material in the forward, lateral and backward directions, while recoil protons are transported in the forward and lateral directions. The calculation of dose distributions, such as for radiotherapy planning, is accomplished by convolving a primary attenuation distribution with a diffusion kernel. The primary attenuation distribution may be quickly calculated for any given set of beam and material conditions, as it describes only the magnitude and distribution of first interaction sites. The energy diffusion kernels are very time consuming to compute but need to be calculated only once for a given energy. The energy diffusion distributions shown in this paper have been calculated using a Monte Carlo type of program. To decrease beam calculation time, convolutions are performed using a Fast Fourier Transform technique. (author)
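
The convolution step described above — a primary attenuation distribution convolved with a diffusion kernel via FFT — can be sketched as follows. The exponential attenuation, the Gaussian stand-in kernel and all grid sizes are illustrative assumptions, not the paper's Monte Carlo kernels:

```python
import numpy as np

nz = nx = 64
depth = np.arange(nz) * 0.5                      # depth grid in cm (hypothetical)
primary = np.zeros((nz, nx))
primary[:, nx // 2] = np.exp(-0.05 * depth)      # attenuated narrow beam on the axis

# Gaussian stand-in for the energy diffusion kernel, centred in the grid
zz, xx = np.meshgrid(np.arange(nz) - nz // 2, np.arange(nx) - nx // 2, indexing="ij")
kernel = np.exp(-(zz ** 2 + xx ** 2) / (2 * 3.0 ** 2))
kernel /= kernel.sum()                           # kernel conserves deposited energy

# Convolution via FFT; zero-padding prevents circular wrap-around
shape = (2 * nz, 2 * nx)
spec = np.fft.rfft2(primary, s=shape) * np.fft.rfft2(kernel, s=shape)
dose_full = np.fft.irfft2(spec, s=shape)
dose = dose_full[nz // 2: nz // 2 + nz, nx // 2: nx // 2 + nx]  # crop to 'same' size
```

The FFT route reduces the cost of the 2-D convolution from O(N^4) to O(N^2 log N) for an N x N grid, which is the speed-up the abstract alludes to.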

  15. Cracking the Illiteracy Kernel: Need for a New Model

    Gafoor, K. Abdul; PG, Jayasudha

    2011-01-01

    This paper discusses the need for a new model and approach to solve the problem of illiteracy among the most backward sections of society: women among scheduled and other backward classes. The empirical support for the study is testing, interviews and observation conducted on the present status of 100 from among 1,500 adult learners who attended a…

  16. The construction of a two-dimensional reproducing kernel function and its application in a biomedical model.

    Guo, Qi; Shen, Shu-Ting

    2016-04-29

    There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with non-flux boundary conditions. The reproducing kernel method has significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularities; therefore, studying the application of the reproducing kernel method is advantageous. The objective is to apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with existing methods. A two-dimensional reproducing kernel function in space is constructed and applied in computing the solution of the two-dimensional cardiac tissue model by means of the difference method through time and the reproducing kernel method through space. Compared with other methods, this method holds several advantages, such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in the solutions of the two-dimensional cardiac tissue model.

  17. Modelling of Creep and Stress Relaxation Test of a Polypropylene Microfibre by Using Fraction-Exponential Kernel

    Andrea Sorzia

    2016-01-01

    Full Text Available A tensile test until breakage and a creep and relaxation test on a polypropylene fibre are carried out, and the resulting creep and stress relaxation curves are fitted by a model adopting a fraction-exponential kernel in the viscoelastic operator. Models using fraction-exponential functions are simpler than the complex ones obtained from combinations of dashpots and springs and, furthermore, are suitable for fitting experimental data with good approximation while allowing the inverse Laplace transform to be obtained in closed form. Therefore, the viscoelastic response of polypropylene fibres can be modelled straightforwardly through analytical methods. The addition of polypropylene fibres greatly improves the tensile strength of composite materials with a concrete matrix. The proposed analytical model can be employed for simulating the mechanical behaviour of composite materials with embedded viscoelastic fibres.

  18. Comparisons of geoid models over Alaska computed with different Stokes' kernel modifications

    Li, X.; Wang, Y.

    2011-01-01

    Various Stokes kernel modification methods have been developed over the years. The goal of this paper is to test the most commonly used Stokes kernel modifications numerically, using Alaska as a test area and EGM08 as a reference model. The tests show that some methods are more sensitive than others to the integration cap size. For instance, using the methods of Vaníček and Kleusberg or Featherstone et al. with kernel modification at degree 60, the geoid decreases by 30 cm (on average) when the cap size increases from 1° to 25°. The corresponding changes for the methods of Wong and Gore and of Heck and Grüninger are only at the 1 cm level. At high modification degrees, above 360, the methods of Vaníček and Kleusberg and of Featherstone et al. become unstable because of numerical problems in the modification coefficients; similar conclusions have been reported by Featherstone (2003). In contrast, the methods of Wong and Gore and Heck and Grüninger and the least-squares spectral combination are stable at any modification degree, though they do not provide as good a fit as the best case of the Molodenskii-type methods at the GPS/leveling benchmarks. However, certain tests for choosing the cap size and modification degree have to be performed in advance to avoid abrupt mean geoid changes if the latter methods are applied.
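
As a sketch of one of the modifications compared above, the Wong and Gore approach subtracts the low-degree Legendre terms from the closed-form Stokes kernel. The modification degree, cap size and evaluation grid below are illustrative choices, not the paper's:

```python
import numpy as np

def stokes(psi):
    """Closed-form Stokes kernel S(psi); psi in radians, psi > 0."""
    s = np.sin(psi / 2.0)
    c = np.cos(psi)
    return 1.0 / s - 6.0 * s + 1.0 - 5.0 * c - 3.0 * c * np.log(s + s * s)

def legendre(x, nmax):
    """P_0..P_nmax at points x via the three-term recurrence."""
    P = np.zeros((nmax + 1,) + x.shape)
    P[0] = 1.0
    if nmax >= 1:
        P[1] = x
    for n in range(1, nmax):
        P[n + 1] = ((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1)
    return P

def wong_gore(psi, L):
    """Wong-Gore modification: remove spherical-harmonic degrees 2..L from S(psi)."""
    P = legendre(np.cos(psi), L)
    low = sum((2 * n + 1) / (n - 1) * P[n] for n in range(2, L + 1))
    return stokes(psi) - low

psi = np.radians(np.linspace(0.5, 25.0, 50))   # spherical distances up to a 25-degree cap
S0, S60 = stokes(psi), wong_gore(psi, 60)
```

The recurrence-based Legendre evaluation stays numerically stable at high modification degrees, which is the practical concern the abstract raises for some Molodenskii-type coefficients.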

  19. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Jinlu Sheng

    2016-07-01

    Full Text Available To effectively extract the typical features of a bearing, a new method combining local mean decomposition, Shannon entropy and an improved kernel principal component analysis model is proposed. First, features are extracted by a time-frequency domain method, local mean decomposition, and the Shannon entropy is used to process the original separated product functions so as to obtain the original features. However, the extracted features still contain superfluous information, so a nonlinear multi-feature fusion technique, kernel principal component analysis, is introduced to fuse the features. The kernel principal component analysis is improved by a weight factor. The extracted characteristic features were input into a Morlet wavelet kernel support vector machine to obtain a bearing running-state classification model, by which the bearing running state was identified. Both test cases and actual cases were analyzed.

  20. On the Asymptotic Behavior of the Kernel Function in the Generalized Langevin Equation: A One-Dimensional Lattice Model

    Chu, Weiqi; Li, Xiantao

    2018-01-01

    We present some estimates for the memory kernel function in the generalized Langevin equation, derived using the Mori-Zwanzig formalism from a one-dimensional lattice model in which the particles interact through nearest and second-nearest neighbors. The kernel function can be explicitly expressed in matrix form. The analysis focuses on the decay properties, both spatial and temporal, revealing a power-law behavior in both cases. The dependence on the level of coarse-graining is also studied.

  1. Non-linear modeling of 1H NMR metabonomic data using kernel-based orthogonal projections to latent structures optimized by simulated annealing

    Fonville, Judith M.; Bylesjoe, Max; Coen, Muireann; Nicholson, Jeremy K.; Holmes, Elaine; Lindon, John C.; Rantalainen, Mattias

    2011-01-01

    Highlights: → Non-linear modeling of metabonomic data using K-OPLS. → Automated optimization of the kernel parameter by simulated annealing. → K-OPLS provides improved prediction performance for exemplar spectral data sets. → Software implementation available for R and Matlab under GPL v2 license. - Abstract: Linear multivariate projection methods are frequently applied for predictive modeling of spectroscopic data in metabonomic studies. The OPLS method is a commonly used computational procedure for characterizing spectral metabonomic data, largely due to its favorable model interpretation properties providing separate descriptions of predictive variation and response-orthogonal structured noise. However, when the relationship between descriptor variables and the response is non-linear, conventional linear models will perform sub-optimally. In this study we have evaluated to what extent a non-linear model, kernel-based orthogonal projections to latent structures (K-OPLS), can provide enhanced predictive performance compared to the linear OPLS model. Just like its linear counterpart, K-OPLS provides separate model components for predictive variation and response-orthogonal structured noise. The improved model interpretation by this separate modeling is a property unique to K-OPLS in comparison to other kernel-based models. Simulated annealing (SA) was used for effective and automated optimization of the kernel-function parameter in K-OPLS (SA-K-OPLS). Our results reveal that the non-linear K-OPLS model provides improved prediction performance in three separate metabonomic data sets compared to the linear OPLS model. We also demonstrate how response-orthogonal K-OPLS components provide valuable biological interpretation of model and data. The metabonomic data sets were acquired using proton Nuclear Magnetic Resonance (NMR) spectroscopy, and include a study of the liver toxin galactosamine, a study of the nephrotoxin mercuric chloride and a study of
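
The kernel-parameter optimization by simulated annealing can be illustrated generically. The sketch below tunes a Gaussian kernel width for a plain kernel ridge regression, used here only as a stand-in for K-OPLS (which is not implemented); the toy data, holdout split and all annealing settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy non-linear regression problem standing in for spectral descriptor data
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)
Xtr, ytr, Xva, yva = X[:60], y[:60], X[60:], y[60:]

def gauss_k(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def holdout_mse(sigma, lam=1e-3):
    # Kernel ridge fit on the training split, scored on the holdout split
    K = gauss_k(Xtr, Xtr, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(ytr)), ytr)
    return ((gauss_k(Xva, Xtr, sigma) @ alpha - yva) ** 2).mean()

# Simulated annealing over log(sigma): accept worse moves with prob exp(-delta/T)
log_s, T = 0.0, 1.0
cur = best = holdout_mse(np.exp(log_s))
best_s = np.exp(log_s)
for _ in range(200):
    cand = log_s + 0.3 * rng.standard_normal()
    m = holdout_mse(np.exp(cand))
    if m < cur or rng.random() < np.exp(-(m - cur) / T):
        log_s, cur = cand, m
        if m < best:
            best, best_s = m, np.exp(cand)
    T *= 0.97
```

Annealing in log-space keeps the bandwidth positive and explores several orders of magnitude, which is why it suits kernel-width tuning better than a fixed grid.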

  2. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Jiang Feng

    2010-12-01

    Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalty have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousand (or hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data, and the proposed method performs well at a limited computational cost.
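
The computational point — working with the n × n kernel matrix in the dual rather than m-dimensional primal weights — can be sketched as follows. The simulated genes, the linear kernel and the ridge parameter are illustrative assumptions; the actual method adds adaptive weights and variable selection on top of this:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 10_000                      # n samples << m genes
X = rng.standard_normal((n, m))
beta = np.zeros(m)
beta[:5] = 1.0                         # a handful of truly associated genes
t = X @ beta + 0.1 * rng.standard_normal(n)   # log survival times (AFT-style response)

# Dual kernel ridge regression: only the n x n Gram matrix is ever formed
lam = 1.0
K = X @ X.T
alpha = np.linalg.solve(K + lam * np.eye(n), t)

def predict(Xnew):
    return (Xnew @ X.T) @ alpha        # kernel evaluations weighted by dual variables

# Primal weights are recoverable as w = X' alpha; gene ranking/selection would go on top
w = X.T @ alpha
```

Solving the 50 × 50 dual system here replaces a 10,000 × 10,000 primal problem, which is exactly the efficiency argument the abstract makes.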

  4. Modelling lateral beam quality variations in pencil kernel based photon dose calculations

    Nyholm, T; Olofsson, J; Ahnesjoe, A; Karlsson, M

    2006-01-01

    Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. The maximum relative error

  5. Kernel method for air quality modelling. II. Comparison with analytic solutions

    Lorimer, G S; Ross, D G

    1986-01-01

    The performance of Lorimer's (1986) kernel method for solving the advection-diffusion equation is tested for instantaneous and continuous emissions into a variety of model atmospheres. Analytical solutions are available for comparison in each case. The results indicate that a modest minicomputer is quite adequate for obtaining satisfactory precision even for the most trying test performed here, which involves a diffusivity tensor and wind speed which are nonlinear functions of the height above ground. Simulations of the same cases by the particle-in-cell technique are found to provide substantially lower accuracy even when use is made of greater computer resources.

  6. Kernel methods for deep learning

    Cho, Youngmin

    2012-01-01

    We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector machines...

  7. Improved Expectation Maximization Algorithm for Gaussian Mixed Model Using the Kernel Method

    Mohd Izhan Mohd Yusoff

    2013-01-01

    Full Text Available Fraud activities have contributed to heavy losses suffered by telecommunication companies. In this paper, we attempt to use Gaussian mixed model, which is a probabilistic model normally used in speech recognition to identify fraud calls in the telecommunication industry. We look at several issues encountered when calculating the maximum likelihood estimates of the Gaussian mixed model using an Expectation Maximization algorithm. Firstly, we look at a mechanism for the determination of the initial number of Gaussian components and the choice of the initial values of the algorithm using the kernel method. We show via simulation that the technique improves the performance of the algorithm. Secondly, we developed a procedure for determining the order of the Gaussian mixed model using the log-likelihood function and the Akaike information criteria. Finally, for illustration, we apply the improved algorithm to real telecommunication data. The modified method will pave the way to introduce a comprehensive method for detecting fraud calls in future work.
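
A minimal sketch of the order-selection idea described above (EM fits for several component counts, compared by an information criterion) is below. The quantile-based initialisation is a simple stand-in for the paper's kernel-method initialisation, and the one-dimensional toy data are made up:

```python
import numpy as np

rng = np.random.default_rng(3)
# Two well-separated components standing in for a call-behaviour feature
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 200)])

def dens(x, w, mu, var):
    # Weighted per-component Gaussian densities, one column per component
    return w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def em_gmm(x, k, iters=200):
    # Initialise means at evenly spaced sample quantiles (data-driven starting values)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        p = dens(x, w, mu, var)
        r = p / p.sum(axis=1, keepdims=True)        # E-step: responsibilities
        nk = r.sum(axis=0)                          # M-step: weighted re-estimation
        w, mu = nk / len(x), (r * x[:, None]).sum(0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(0) / nk + 1e-9
    ll = np.log(dens(x, w, mu, var).sum(axis=1)).sum()
    return ll, 3 * k - 1                            # log-likelihood, free parameters

# Model order via AIC = 2p - 2 ln L (smaller is better)
aic = {k: 2 * p - 2 * ll for k in (1, 2, 3) for ll, p in [em_gmm(x, k)]}
best_k = min(aic, key=aic.get)
```

The AIC penalty discourages adding components whose log-likelihood gain does not justify the extra parameters, which mirrors the order-determination step in the abstract.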

  8. Response of broiler turkeys to graded dietary levels of Palm Kernel ...

    The performance of local broiler turkeys fed dietary treatments in which palm kernel meal (PKM) replaced maize at 0, 20, 40, 60, 80 and 100 percent was evaluated. The replacement levels of 0, 20, 40, 60, 80 and 100 percent represented diets 1, 2, 3, 4, 5 and 6, respectively. One hundred and eighty day-old unsexed turkey ...

  9. Modeling DNA affinity landscape through two-round support vector regression with weighted degree kernels

    Wang, Xiaolei

    2014-12-12

    Background: A quantitative understanding of interactions between transcription factors (TFs) and their DNA binding sites is key to the rational design of gene regulatory networks. Recent advances in high-throughput technologies have enabled high-resolution measurements of protein-DNA binding affinity. Importantly, such experiments revealed the complex nature of TF-DNA interactions, whereby the effects of nucleotide changes on the binding affinity were observed to be context dependent. A systematic method to give high-quality estimates of such complex affinity landscapes is, thus, essential to the control of gene expression and the advance of synthetic biology. Results: Here, we propose a two-round prediction method that is based on support vector regression (SVR) with weighted degree (WD) kernels. In the first round, a WD kernel with shifts and mismatches is used with SVR to detect the importance of subsequences with different lengths at different positions. The subsequences identified as important in the first round are then fed into a second WD kernel to fit the experimentally measured affinities. To our knowledge, this is the first attempt to increase the accuracy of the affinity prediction by applying two rounds of string kernels and by identifying a small number of crucial k-mers. The proposed method was tested by predicting the binding affinity landscape of Gcn4p in Saccharomyces cerevisiae using datasets from HiTS-FLIP. Our method explicitly identified important subsequences and showed significant performance improvements when compared with other state-of-the-art methods. Based on the identified important subsequences, we discovered two surprisingly stable 10-mers and one sensitive 10-mer which were not reported before. Further test on four other TFs in S. cerevisiae demonstrated the generality of our method. Conclusion: We proposed in this paper a two-round method to quantitatively model the DNA binding affinity landscape. Since the ability to modify
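
A weighted degree kernel of the kind used above simply counts positionally aligned k-mer matches between two sequences. A minimal sketch with the standard degree weights and toy sequences (no shifts or mismatches, unlike the paper's first-round kernel):

```python
def wd_kernel(x, y, d_max=3):
    """Weighted degree kernel: positionally aligned k-mer matches up to length d_max."""
    assert len(x) == len(y)
    # Standard weights beta_d = 2 (d_max - d + 1) / (d_max (d_max + 1))
    beta = [2.0 * (d_max - d + 1) / (d_max * (d_max + 1)) for d in range(1, d_max + 1)]
    k = 0.0
    for d in range(1, d_max + 1):
        for i in range(len(x) - d + 1):
            if x[i:i + d] == y[i:i + d]:
                k += beta[d - 1]
    return k

s1, s2 = "ACGTACGT", "ACGTTCGT"   # hypothetical binding-site sequences
```

Because matches are position-specific, the kernel captures the context dependence of nucleotide changes that the abstract emphasises; an SVR would then be trained on the resulting Gram matrix.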

  10. Dispersal kernel estimation: A comparison of empirical and modelled particle dispersion in a coastal marine system

    Hrycik, Janelle M.; Chassé, Joël; Ruddick, Barry R.; Taggart, Christopher T.

    2013-11-01

    Early life-stage dispersal influences recruitment and is of significance in explaining the distribution and connectivity of marine species. Motivations for quantifying dispersal range from biodiversity conservation to the design of marine reserves and the mitigation of species invasions. Here we compare estimates of real particle dispersion in a coastal marine environment with similar estimates provided by hydrodynamic modelling. We do so by using a system of magnetically attractive particles (MAPs) and a magnetic-collector array that provides measures of Lagrangian dispersion based on the time-integration of MAPs dispersing through the array. MAPs released as a point source in a coastal marine location dispersed through the collector array over a 5-7 d period. A virtual release and observed (real-time) environmental conditions were used in a high-resolution three-dimensional hydrodynamic model to estimate the dispersal of virtual particles (VPs). The number of MAPs captured throughout the collector array and the number of VPs that passed through each corresponding model location were enumerated and compared. Although VP dispersal reflected several aspects of the observed MAP dispersal, the comparisons demonstrated model sensitivity to the small-scale (random-walk) particle diffusivity parameter (Kp). The one-dimensional dispersal kernel for the MAPs had an e-folding scale estimate in the range of 5.19-11.44 km, while those from the model simulations were comparable at 1.89-6.52 km, and also demonstrated sensitivity to Kp. Variations among comparisons are related to the value of Kp used in modelling and are postulated to be related to MAP losses from the water column and (or) shear dispersion acting on the MAPs; a process that is constrained in the model. Our demonstration indicates a promising new way of 1) quantitatively and empirically estimating the dispersal kernel in aquatic systems, and 2) quantitatively assessing and (or) improving regional hydrodynamic

  11. Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Full Text Available Gas turbines are among the most important devices in power engineering and are widely used in power generation, airplanes, naval ships and oil drilling platforms. In most cases, however, they are monitored without personnel on duty, so it is highly desirable to develop techniques and systems to remotely monitor their condition and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms, based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflect the overall state of the gas paths of the gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms.
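
The uniformity measure described above can be sketched as a normalised Shannon entropy over exhaust-temperature shares; the thermocouple readings below are hypothetical examples, not data from the system:

```python
import numpy as np

def uniformity_entropy(temps):
    """Normalised Shannon entropy of exhaust-temperature shares (1.0 = perfectly uniform)."""
    t = np.asarray(temps, dtype=float)
    p = t / t.sum()                    # temperature shares as a probability vector
    return float(-(p * np.log(p)).sum() / np.log(len(t)))

healthy = [520, 522, 518, 521, 519, 520]   # hypothetical readings, deg C, nearly uniform
faulty = [520, 455, 523, 519, 517, 521]    # one cool section suggesting a gas-path fault
```

A drop in the entropy flags a loss of exhaust-temperature uniformity, which is the gas-path symptom the monitoring system watches for.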

  13. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
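
The weighted eigenfunction expansion with heat-kernel weights can be sketched on a discrete domain. A chain-graph Laplacian here stands in for the Laplace-Beltrami operator on a surface mesh, and the signal, noise level and bandwidth t are all assumed:

```python
import numpy as np

# Chain-graph Laplacian as a discrete stand-in for the Laplace-Beltrami operator
n = 100
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0                 # Neumann-style boundary

evals, evecs = np.linalg.eigh(L)          # discrete analogue of the eigenfunctions

rng = np.random.default_rng(4)
signal = np.sin(np.linspace(0.0, 2.0 * np.pi, n))
noisy = signal + 0.3 * rng.standard_normal(n)

def heat_smooth(y, t):
    """Heat kernel regression: eigenfunction expansion damped by exp(-lambda * t)."""
    return evecs @ (np.exp(-evals * t) * (evecs.T @ y))

smoothed = heat_smooth(noisy, t=5.0)
```

The damping factor exp(-lambda t) is exactly the heat kernel in the eigenbasis, which is why this expansion is equivalent to running isotropic diffusion for time t, as the abstract states.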

  14. Improved object optimal synthetic description, modeling, learning, and discrimination by GEOGINE computational kernel

    Fiorini, Rodolfo A.; Dacquino, Gianfranco

    2005-03-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for n-dimensional shape/texture optimal synthetic representation, description and learning, was presented at previous conferences. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space and a demo application are presented here. Progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery, mainly in advanced biomedical engineering, biometrics, intelligent computing, target recognition, content-based image retrieval and data mining. Ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of the world object. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" to reliably compute optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour shape invariant characteristics. As a matter of fact, any highly sophisticated application needing effective, robust object geometric/colour invariant attribute capture and parameterization features, for reliable automated object learning and discrimination, can deeply benefit from GEOGINE's progressive automated model generation computational kernel performance. Main operational advantages over previous

  15. Deriving albedo maps for HAPEX-Sahel from ASAS data using kernel-driven BRDF models

    P. Lewis

    1999-01-01

    Full Text Available This paper describes the application and testing of a method for deriving spatial estimates of albedo from multi-angle remote sensing data. Linear kernel-driven models of surface bi-directional reflectance have been inverted against high-spatial-resolution multi-angular, multi-spectral airborne data of the principal cover types within the HAPEX-Sahel study site in Niger, West Africa. The airborne data were obtained from the NASA Airborne Solid-state Imaging Spectrometer (ASAS) instrument, flown in Niger in September and October 1992. The maps of model parameters produced are used to estimate integrated reflectance properties related to spectral albedo. Broadband albedo has been estimated from this by weighting the spectral albedo for each pixel within the map as a function of the appropriate spectral solar irradiance and the proportion of direct and diffuse illumination. Partial validation of the results was performed by comparing ASAS reflectance and derived directional-hemispherical reflectance with simulations of a millet canopy made with a complex geometric canopy reflectance model, the Botanical Plant Modelling System (BPMS). Both were found to agree well in magnitude. Broadband albedo values derived from the ASAS data were compared with ground-based (point sample) albedo measurements and found to agree extremely well. These results indicate that the linear kernel-driven modelling approach, which is to be used operationally to produce global 16-day, 1 km albedo maps from forthcoming NASA Earth Observing System spaceborne data, is both sound and practical for the estimation of angle-integrated spectral reflectance quantities related to albedo. Results for broadband albedo are dependent on spectral sampling and on obtaining the correct spectral weightings.
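The inversion described above can be sketched as an ordinary least-squares fit: reflectance is modelled as a weighted sum of kernel values, R = f_iso + f_vol·K_vol + f_geo·K_geo, and the kernel weights are recovered from multi-angle observations. The kernel values below are synthetic placeholders standing in for RossThick/LiSparse-style kernels, not values from the ASAS data.

```python
import numpy as np

# Linear kernel-driven BRDF inversion sketch (synthetic kernel values).
rng = np.random.default_rng(0)
n_obs = 12                                       # viewing geometries
K = np.column_stack([
    np.ones(n_obs),                              # isotropic kernel (constant)
    rng.uniform(-0.1, 0.4, n_obs),               # volumetric kernel values
    rng.uniform(-1.5, 0.0, n_obs),               # geometric kernel values
])
f_true = np.array([0.25, 0.30, 0.05])            # kernel weights to recover
refl = K @ f_true + rng.normal(0, 1e-3, n_obs)   # simulated multi-angle reflectance

# Invert the linear model by least squares to estimate the kernel weights.
f_hat, *_ = np.linalg.lstsq(K, refl, rcond=None)
```

Spectral albedo then follows by replacing the per-observation kernel values with their angular integrals, which is what makes the linear form attractive for operational processing.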

  16. Protein Profiles Reveal Diverse Responsive Signaling Pathways in Kernels of Two Maize Inbred Lines with Contrasting Drought Sensitivity

    Liming Yang

    2014-10-01

    Full Text Available Drought stress is a major factor that contributes to disease susceptibility and yield loss in agricultural crops. To identify drought-responsive proteins and explore metabolic pathways involved in maize tolerance to drought stress, two maize lines (B73 and Lo964) with contrasting drought sensitivity were examined. Drought and well-watered treatments were applied at 14 days after pollination (DAP), and protein profiles were investigated in developing kernels (35 DAP) using iTRAQ (isobaric tags for relative and absolute quantitation). Proteomic analysis showed that 70 and 36 proteins were significantly altered in their expression under drought treatments in B73 and Lo964, respectively. The numbers and levels of differentially expressed proteins were generally higher in the sensitive genotype, B73, implying an increased sensitivity to drought given the functions of the observed differentially expressed proteins, such as redox homeostasis, cell rescue/defense, hormone regulation, and protein biosynthesis and degradation. Lo964 possessed a more stable status with fewer differentially expressed proteins. However, B73 seems to rapidly initiate signaling pathways in response to drought through adjusting diverse defense pathways. These changes in protein expression allow for the production of a drought stress-responsive network in maize kernels.

  17. The use of kernel local Fisher discriminant analysis for the channelization of the Hotelling model observer

    Wen, Gezheng; Markey, Mia K.

    2015-03-01

    It is resource-intensive to conduct human studies for task-based assessment of medical image quality and system optimization. Thus, numerical model observers have been developed as a surrogate for human observers. The Hotelling observer (HO) is the optimal linear observer for signal-detection tasks, but the high dimensionality of imaging data results in a heavy computational burden. Channelization is often used to approximate the HO through a dimensionality reduction step, but how to produce channelized images without losing significant image information remains a key challenge. Kernel local Fisher discriminant analysis (KLFDA) uses kernel techniques to perform supervised dimensionality reduction, which finds an embedding transformation that maximizes between-class separability and preserves within-class local structure in the low-dimensional manifold. It is powerful for classification tasks, especially when the distribution of a class is multimodal. Such multimodality could be observed in many practical clinical tasks. For example, primary and metastatic lesions may both appear in medical imaging studies, but the distributions of their typical characteristics (e.g., size) may be very different. In this study, we propose to use KLFDA as a novel channelization method. The dimension of the embedded manifold (i.e., the result of KLFDA) is a counterpart to the number of channels in the state-of-the-art linear channelization. We present a simulation study to demonstrate the potential usefulness of KLFDA for building the channelized HOs (CHOs) and generating reliable decision statistics for clinical tasks. We show that the performance of the CHO with KLFDA channels is comparable to that of the benchmark CHOs.
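The channelized Hotelling observer pipeline above can be sketched as follows. A learned embedding such as KLFDA would supply the channel matrix U; here U is a placeholder (random orthonormal channels) purely to show where the channelization step sits, and the images are synthetic white-noise fields with a known signal.

```python
import numpy as np

# Minimal channelized Hotelling observer (CHO) sketch; U stands in for the
# channelization step (KLFDA embedding, Laguerre-Gauss channels, etc.).
rng = np.random.default_rng(1)
n_pix, n_ch, n_img = 64, 5, 500

signal = np.zeros(n_pix)
signal[28:36] = 0.8                                  # known signal profile
g0 = rng.normal(0.0, 1.0, (n_img, n_pix))            # signal-absent images
g1 = rng.normal(0.0, 1.0, (n_img, n_pix)) + signal   # signal-present images

U, _ = np.linalg.qr(rng.normal(size=(n_pix, n_ch)))  # placeholder channels

v0, v1 = g0 @ U, g1 @ U                   # channelized image data
S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))   # pooled channel covariance
w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))  # Hotelling template

t0, t1 = v0 @ w, v1 @ w                   # decision statistics per image
# Detectability index d' from the two decision-statistic distributions.
d_prime = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t0.var() + t1.var()))
```

The dimensionality reduction (64 pixels to 5 channels) is what makes the covariance inversion tractable; the open question the paper addresses is how to pick U without discarding task-relevant information.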

  18. Kernel-density estimation and approximate Bayesian computation for flexible epidemiological model fitting in Python.

    Irvine, Michael A; Hollingsworth, T Déirdre

    2018-05-26

    Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
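A stripped-down version of the adaptive-tolerance ABC idea can be written in a few lines (this is an illustrative sketch, not the authors' released library): draw parameters from the prior, accept those whose simulated summary falls within a tolerance, and shrink the tolerance each generation by taking a quantile of the accepted distances.

```python
import numpy as np

# Adaptive ABC rejection sketch: infer the mean of a normal model.
rng = np.random.default_rng(2)
observed = rng.normal(3.0, 1.0, 200)
obs_summary = observed.mean()

def simulate(theta):
    # forward model: same summary statistic as the observed data
    return rng.normal(theta, 1.0, 200).mean()

particles = rng.uniform(-10, 10, 1000)          # draws from a wide prior
for _ in range(4):                              # generations
    dists = np.array([abs(simulate(th) - obs_summary) for th in particles])
    eps = np.quantile(dists, 0.3)               # adaptive tolerance update
    kept = particles[dists <= eps]
    # resample survivors with a small perturbation kernel
    particles = rng.choice(kept, 1000) + rng.normal(0, 0.1, 1000)

posterior_mean = particles.mean()               # should approach 3.0
```

The paper's contribution is layering a kernel-density-based summary/distance on top of this loop so that dispersed, multi-dimensional epidemiological data can be handled without hand-crafted summary statistics.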

  19. Modeling of convective drying kinetics of Pistachio kernels in a fixed bed drying system

    Balbay Asım

    2013-01-01

    Full Text Available The drying kinetics of pistachio kernels (PKs) with an initial moisture content of 32.4% (w.b.) was investigated as a function of drying conditions in a fixed bed drying system. The drying experiments were carried out at different drying air temperatures (40, 60 and 80°C) and air velocities (0.05, 0.075 and 0.1 m/s). Several experiments were performed with different masses of PKs (15 g and 30 g) using a constant air velocity of 0.075 m/s. The fit quality of the models was evaluated using the coefficient of determination (R2), sum of squared errors (SSE) and root mean square error (RMSE). Among the selected models, the Midilli et al. model was found to be the best for describing the drying behavior of PKs. The activation energy was calculated as 29.2 kJ/mol and effective diffusivity values were calculated between 1.38x10-10 and 4.94x10-10 m2/s depending on air temperature.
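The Midilli et al. thin-layer model referenced above has the form MR = a·exp(-k·t^n) + b·t, and fitting it is a routine nonlinear least-squares problem. The sketch below fits it to synthetic moisture-ratio data (illustrative parameter values, not the paper's measurements).

```python
import numpy as np
from scipy.optimize import curve_fit

# Midilli et al. drying model: MR = a*exp(-k*t**n) + b*t
def midilli(t, a, k, n, b):
    return a * np.exp(-k * t**n) + b * t

t = np.linspace(0, 300, 40)                       # drying time, min
true_params = (1.0, 0.02, 1.1, -1e-4)             # illustrative values
rng = np.random.default_rng(3)
mr = midilli(t, *true_params) + rng.normal(0, 0.005, t.size)

# Nonlinear least-squares fit; RMSE is one of the fit-quality measures used.
popt, _ = curve_fit(midilli, t, mr, p0=(1, 0.01, 1, 0))
rmse = np.sqrt(np.mean((midilli(t, *popt) - mr) ** 2))
```

R2 and SSE follow from the same residuals, so comparing candidate drying models reduces to repeating this fit per model and ranking the statistics.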

  20. Integrated model of multiple kernel learning and differential evolution for EUR/USD trading.

    Deng, Shangkun; Sakurai, Akito

    2014-01-01

    Currency trading is an important area for individual investors, government policy decisions, and organization investments. In this study, we propose a hybrid approach referred to as MKL-DE, which combines multiple kernel learning (MKL) with differential evolution (DE) for trading a currency pair. MKL is used to learn a model that predicts changes in the target currency pair, whereas DE is used to generate the buy and sell signals for the target currency pair based on the relative strength index (RSI), while it is also combined with MKL as a trading signal. The new hybrid implementation is applied to EUR/USD trading, which is the most traded foreign exchange (FX) currency pair. MKL is essential for utilizing information from multiple information sources and DE is essential for formulating a trading rule based on a mixture of discrete structures and continuous parameters. Initially, the prediction model optimized by MKL predicts the returns based on a technical indicator called the moving average convergence and divergence. Next, a combined trading signal is optimized by DE using the inputs from the prediction model and technical indicator RSI obtained from multiple timeframes. The experimental results showed that trading using the prediction learned by MKL yielded consistent profits.
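The DE half of the hybrid can be illustrated in isolation: use differential evolution to tune RSI buy/sell thresholds against a synthetic price series. This is a sketch only; the MKL prediction input is omitted, and the RSI here uses simple moving averages rather than Wilder smoothing.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)
price = 100 + np.cumsum(rng.normal(0, 1, 500))   # synthetic price series

def rsi(p, period=14):
    # simplified RSI: simple moving averages of gains and losses
    delta = np.diff(p)
    gain = np.convolve(np.clip(delta, 0, None), np.ones(period) / period, "valid")
    loss = np.convolve(np.clip(-delta, 0, None), np.ones(period) / period, "valid")
    return 100 - 100 / (1 + gain / (loss + 1e-9))

r = rsi(price)
ret = np.diff(price)[-r.size:]                   # returns aligned with RSI

def neg_profit(x):
    buy, sell = x
    pos = np.where(r < buy, 1, np.where(r > sell, -1, 0))  # long/short/flat
    return -np.sum(pos[:-1] * ret[1:])           # trade on the next bar

res = differential_evolution(neg_profit, [(5, 50), (50, 95)], seed=0, maxiter=30)
best_buy, best_sell = res.x
```

DE suits this objective because the profit surface is piecewise constant in the thresholds, so gradient-based optimizers stall while population-based search does not.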

  2. The definition of kernel Oz

    Smolka, Gert

    1994-01-01

    Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...

  3. Development of a Modified Kernel Regression Model for a Robust Signal Reconstruction

    Ahmed, Ibrahim; Heo, Gyunyoung [Kyung Hee University, Yongin (Korea, Republic of)

    2016-10-15

    The demand for robust and resilient performance has led to the use of on-line monitoring techniques for process parameters and signal validation. On-line monitoring and signal validation are two important techniques in process and equipment monitoring: automated methods of monitoring instrument performance while the plant is operating. To implement these techniques, several empirical models are used. One of these is the nonparametric regression model, otherwise known as kernel regression (KR). Unlike parametric models, KR is an algorithmic estimation procedure that assumes no significant parameters, and it needs no retraining after its development when new observations arrive, which is advantageous for systems whose characteristics change due to ageing. Although KR performs excellently when applied to steady-state or normal operating data, it has limitations with time-varying data containing repetitions of the same signal, especially if those signals are used to infer other signals. Conventional KR cannot correctly estimate the dependent variable when time-varying data with repeated values are used, especially in signal validation and monitoring. We therefore present a modified KR that resolves this issue and is also feasible in the time domain. Data are first transformed, prior to the Euclidean distance evaluation, by considering their slopes/changes with respect to time. The performance of the developed model is evaluated and compared with that of conventional KR using both lab experimental data and real-time data from the CNS provided by KAERI. The results show that the proposed model, having demonstrated higher accuracy than conventional KR, is capable of resolving the identified limitation of conventional KR. We also discovered that there is still need to further
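The core estimator and the failure mode can be sketched as follows. Plain Nadaraya-Watson kernel regression confuses repeated signal values on rising versus falling trends; augmenting each input with its local slope before the Euclidean distance evaluation (an assumption about the transformation, not the authors' exact formulation) removes the ambiguity.

```python
import numpy as np

def nw_predict(X_train, y_train, X_query, h):
    # Nadaraya-Watson regression with a Gaussian kernel on Euclidean distance
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * h**2))
    return (w @ y_train) / w.sum(axis=1)

t = np.linspace(0, 10, 200)
x = np.sin(t)                        # signal with repeated values
slope = np.gradient(x, t)            # local slope, ~cos(t)
X = np.column_stack([x, slope])      # slope-augmented inputs
y = np.cos(t)                        # dependent variable to estimate

y_aug = nw_predict(X, y, X, h=0.2)                       # slope-aware KR
y_plain = nw_predict(x[:, None], y, x[:, None], h=0.2)   # conventional KR
```

On this toy case the conventional estimate mixes samples from the rising and falling branches of the sine (which share x but have opposite y), while the slope-augmented distance keeps them apart.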

  4. Geographically Weighted Regression Model with Kernel Bisquare and Tricube Weighted Function on Poverty Percentage Data in Central Java Province

    Nugroho, N. F. T. A.; Slamet, I.

    2018-05-01

    Poverty is a socio-economic condition of a person or group of people who cannot fulfil their basic needs to maintain and develop a dignified life. This problem has still not been solved completely in Central Java Province. Currently, the poverty rate in Central Java is 13.32%, higher than the national rate of 11.13%. In this research, data on the percentage of poor people in Central Java Province were analyzed through geographically weighted regression (GWR). The aim of this research is therefore to model the poverty percentage data in Central Java Province using GWR with bisquare and tricube kernel weighting functions. As a result, we obtained GWR models with bisquare and tricube kernel weighting functions for the poverty percentage data in Central Java Province. From the GWR model, there are three categories of regions, each influenced by a different set of significant factors.
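The two spatial weighting functions named above have simple closed forms: with d the distance from the regression point and b the bandwidth, both taper to zero at the bandwidth, tricube somewhat more gently near the origin.

```python
import numpy as np

# GWR spatial weighting kernels: weights vanish beyond the bandwidth b.
def bisquare(d, b):
    u = np.clip(d / b, 0, 1)
    return (1 - u**2) ** 2 * (d < b)

def tricube(d, b):
    u = np.clip(d / b, 0, 1)
    return (1 - u**3) ** 3 * (d < b)

d = np.array([0.0, 0.5, 1.0, 2.0])
w_bi = bisquare(d, b=1.0)
w_tri = tricube(d, b=1.0)
# At each regression point i, GWR then solves a weighted least squares:
# beta_i = (X' W_i X)^{-1} X' W_i y, with W_i diagonal from one of the kernels.
```

Because the weights differ per regression point, each district gets its own coefficient vector, which is what lets the significant factors vary across the three categories of regions.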

  5. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
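The composite-kernel idea can be illustrated with a minimal score-type test: the statistic Q = r'Kr (r = residuals under the null) is computed for a weighted sum of candidate kernels. A permutation p-value is used below as a stand-in for the perturbation procedure described in the paper, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 200, 10
G = rng.integers(0, 3, (n, p)).astype(float)   # genotypes coded 0/1/2
y = G[:, 0] * 0.5 + rng.normal(0, 1, n)        # trait with one causal SNP

K_lin = G @ G.T                                              # linear kernel
K_ibs = p - np.abs(G[:, None, :] - G[None, :, :]).sum(-1) / 2  # IBS-like kernel
K = 0.5 * K_lin + 0.5 * K_ibs                  # composite candidate kernel

r = y - y.mean()                               # residuals under the null
Q = r @ K @ r                                  # kernel machine score statistic

perm = np.empty(500)
for i in range(500):                           # permutation null distribution
    s = rng.permutation(r)
    perm[i] = s @ K @ s
p_value = np.mean(perm >= Q)
```

Averaging kernels hedges against choosing the wrong trait architecture: the composite loses little power relative to the best single kernel while avoiding the inflated type I error of picking the minimum p-value after the fact.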

  6. Adaptive metric kernel regression

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate...... regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  7. Adaptive Metric Kernel Regression

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

  8. Mixture Density Mercer Kernels: A Method to Learn Kernels

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  9. Kernel density surface modelling as a means to identify significant concentrations of vulnerable marine ecosystem indicators.

    Ellen Kenchington

    Full Text Available The United Nations General Assembly Resolution 61/105, concerning sustainable fisheries in the marine ecosystem, calls for the protection of vulnerable marine ecosystems (VMEs) from destructive fishing practices. Subsequently, the Food and Agriculture Organization (FAO) produced guidelines for the identification of VME indicator species/taxa to assist in the implementation of the resolution, but recommended the development of case-specific operational definitions for their application. We applied kernel density estimation (KDE) to research vessel trawl survey data from inside the fishing footprint of the Northwest Atlantic Fisheries Organization (NAFO) Regulatory Area in the high seas of the northwest Atlantic to create biomass density surfaces for four VME indicator taxa: large-sized sponges, sea pens, and small and large gorgonian corals. These VME indicator taxa were identified previously by NAFO using the fragility, life history characteristics and structural complexity criteria presented by FAO, along with an evaluation of their recovery trajectories. KDE, a non-parametric neighbour-based smoothing function, has been used previously in ecology to identify hotspots, that is, areas of relatively high biomass/abundance. We present a novel approach of examining relative changes in the area under polygons created by encircling successive biomass categories on the KDE surface to identify "significant concentrations" of biomass, which we equate to VMEs. This allows identification of the VMEs from the broader distribution of the species in the study area. We provide independent assessments of the VMEs so identified using underwater images, benthic sampling with other gear types (dredges, cores), and/or published species distribution models of probability of occurrence, as available. For each VME indicator taxon we provide a brief review of its ecological function, which will be important in future assessments of significant adverse impacts on these habitats here
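The area-under-successive-thresholds idea can be sketched with a 2D kernel density surface over synthetic point data: the enclosed area is tracked as the density threshold rises, and a sharp drop between successive levels marks the boundary of a "significant concentration".

```python
import numpy as np
from scipy.stats import gaussian_kde

# KDE hotspot sketch on synthetic survey locations (not NAFO data).
rng = np.random.default_rng(6)
hotspot = rng.normal([0, 0], 0.3, (150, 2))      # dense aggregation
background = rng.uniform(-3, 3, (150, 2))        # scattered background
pts = np.vstack([hotspot, background])

kde = gaussian_kde(pts.T)                        # 2D density surface
gx, gy = np.mgrid[-3:3:80j, -3:3:80j]
dens = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

cell = (6 / 79) ** 2                             # grid-cell area
levels = np.quantile(dens, [0.5, 0.75, 0.9, 0.99])
areas = [np.sum(dens >= lv) * cell for lv in levels]
# Examining how 'areas' shrinks between successive levels identifies the
# density threshold that delineates the significant concentration.
```

In the paper the same analysis is run on biomass-weighted surfaces per taxon, with the resulting polygons checked against imagery and other gear types.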

  10. Continuous spin mean-field models : Limiting kernels and Gibbs properties of local transforms

    Kulske, Christof; Opoku, Alex A.

    2008-01-01

    We extend the notion of Gibbsianness for mean-field systems to the setup of general (possibly continuous) local state spaces. We investigate the Gibbs properties of systems arising from an initial mean-field Gibbs measure by application of given local transition kernels. This generalizes previous

  11. Short-term traffic flow prediction model using particle swarm optimization–based combined kernel function-least squares support vector machine combined with chaos theory

    Qiang Shang

    2016-08-01

    Full Text Available Short-term traffic flow prediction is an important part of intelligent transportation systems research and applications. To further improve the accuracy of short-term traffic flow prediction, a novel hybrid prediction model (multivariate phase space reconstruction combined with a combined kernel function least squares support vector machine) is proposed. The C-C method is used to determine the optimal time delay and the optimal embedding dimension of the traffic variables' (flow, speed, and occupancy) time series for phase space reconstruction. The G-P method is selected to calculate the correlation dimension of the attractor, an important index for judging the chaotic characteristics of the traffic variables' series. The optimal input form of the combined kernel function least squares support vector machine model is determined by multivariate phase space reconstruction, and the model's parameters are optimized by a particle swarm optimization algorithm. Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The experimental results suggest that the new proposed model yields better predictions than similar models (combined kernel function least squares support vector machine; multivariate phase space reconstruction with generalized kernel function least squares support vector machine; and phase space reconstruction with combined kernel function least squares support vector machine), indicating that the proposed model exhibits stronger prediction ability and robustness.
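The phase space reconstruction step that feeds the LSSVM can be sketched as a time-delay embedding: each state vector stacks m lagged samples at delay tau. In the paper the C-C method selects tau and m per traffic variable; fixed illustrative values are used here on a synthetic flow series.

```python
import numpy as np

def embed(x, m, tau):
    # time-delay embedding: row j is [x[j], x[j+tau], ..., x[j+(m-1)*tau]]
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

rng = np.random.default_rng(7)
flow = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.normal(size=300)

m, tau = 4, 3                          # would come from the C-C method
states = embed(flow, m, tau)           # reconstructed state vectors
targets = flow[(m - 1) * tau + 1:]     # one-step-ahead prediction targets
states = states[:-1]                   # align: state at t predicts flow[t+1]
```

The multivariate version concatenates the embeddings of flow, speed, and occupancy into one state vector before training the predictor.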

  12. PALM KERNEL OIL SOLUBILITY EXAMINATION AND ITS MODELING IN EXTRACTION PROCESS USING SUPERCRITICAL CARBON DIOXIDE

    Wahyu Bahari Setianto

    2013-11-01

    Full Text Available Application of supercritical carbon dioxide (SC-CO2) to vegetable oil extraction has become an attractive technique due to its high solubility, short extraction time and simple purification. The method is considered an earth-friendly technology due to the absence of chemical usage. The solubility of a solute in SC-CO2 is important data for application of SC-CO2 extraction. In this work, the equilibrium solubility of palm kernel oil (PKO) in SC-CO2 has been examined using extraction curve analysis. The examinations were performed at temperatures of 323.15 K to 353.15 K and pressures of 20.7 to 34.5 MPa. The experimental solubilities ranged from 0.0160 to 0.0503 g oil/g CO2 depending on the extraction conditions. The experimental solubility data were well correlated with a solvent density based model, with an absolute percent deviation of 0.96.
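The "solvent density based model" is sketched below as a Chrastil-type correlation, ln S = k·ln(ρ) + a/T + b; this is an assumption about the model family (the paper's exact form may differ). Fitting it reduces to linear least squares once the logarithm is taken, shown here on synthetic solubility data.

```python
import numpy as np

# Chrastil-type density correlation fitted by linear least squares.
rng = np.random.default_rng(8)
rho = rng.uniform(600, 900, 20)            # CO2 density, kg/m3
T = rng.uniform(323.15, 353.15, 20)        # temperature, K (paper's range)
k_true, a_true, b_true = 5.0, -4000.0, -20.0
lnS = k_true * np.log(rho) + a_true / T + b_true + rng.normal(0, 0.01, 20)

# Design matrix [ln(rho), 1/T, 1] makes the model linear in (k, a, b).
A = np.column_stack([np.log(rho), 1 / T, np.ones(20)])
k_hat, a_hat, b_hat = np.linalg.lstsq(A, lnS, rcond=None)[0]
```

Once fitted, the correlation interpolates solubility at any (T, P) within the measured range via the corresponding CO2 density.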

  13. Credit scoring analysis using kernel discriminant

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    Credit scoring models are an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels: normal, Epanechnikov, biweight and triweight. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the most relevant choice for credit scoring with the kernel discriminant model; sensitivity and specificity reach 0.5556 and 0.5488, respectively.
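One common non-parametric discriminant construction, sketched here on synthetic applicant data, scores each applicant by class-conditional densities estimated with a normal (Gaussian) kernel and assigns the class with the larger prior-weighted density; the feature names and distributions are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Kernel discriminant sketch with a normal kernel (synthetic data).
rng = np.random.default_rng(9)
good = rng.normal([600, 0.3], [40, 0.1], (300, 2))   # e.g. score, debt ratio
bad = rng.normal([520, 0.6], [40, 0.1], (300, 2))

f_good = gaussian_kde(good.T)        # class-conditional density, "good" risk
f_bad = gaussian_kde(bad.T)          # class-conditional density, "bad" risk
prior_good = prior_bad = 0.5

def classify(x):
    # 1 = grant (good risk), 0 = decline (bad risk)
    return int(prior_good * f_good(x)[0] > prior_bad * f_bad(x)[0])

test_good = rng.normal([600, 0.3], [40, 0.1], (100, 2))
acc = np.mean([classify(x) for x in test_good])      # sensitivity estimate
```

Sensitivity and specificity then follow by scoring held-out "good" and "bad" applicants separately, which is how the 0.5556/0.5488 figures above would be produced.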

  14. Free Energy Contribution Analysis Using Response Kernel Approximation: Insights into the Acylation Reaction of a Beta-Lactamase.

    Asada, Toshio; Ando, Kanta; Bandyopadhyay, Pradipta; Koseki, Shiro

    2016-09-08

    A widely applicable free energy contribution analysis (FECA) method based on the quantum mechanical/molecular mechanical (QM/MM) approximation using response kernel approaches has been proposed to investigate the influences of environmental residues and/or atoms in the QM region on the free energy profile. This method can evaluate atomic contributions to the free energy along the reaction path including polarization effects on the QM region within a dramatically reduced computational time. The rate-limiting step in the deactivation of the β-lactam antibiotic cefalotin (CLS) by β-lactamase was studied using this method. The experimentally observed activation barrier was successfully reproduced by free energy perturbation calculations along the optimized reaction path that involved activation by the carboxylate moiety in CLS. It was found that the free energy profile in the QM region was slightly higher than the isolated energy and that two residues, Lys67 and Lys315, as well as water molecules deeply influenced the QM atoms associated with the bond alternation reaction in the acyl-enzyme intermediate. These facts suggested that the surrounding residues are favorable for the reactant complex and prevent the intermediate from being too stabilized to proceed to the following deacylation reaction. We have demonstrated that the free energy contribution analysis should be a useful method to investigate enzyme catalysis and to facilitate intelligent molecular design.
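The free energy perturbation machinery underlying profiles like the one above is the Zwanzig estimator, ΔF = -kT·ln⟨exp(-ΔU/kT)⟩₀, averaged over samples of the reference state. The sketch below applies it to two harmonic potentials where the exact answer is known analytically; it illustrates the estimator only, not the QM/MM response-kernel machinery.

```python
import numpy as np

# Zwanzig free energy perturbation on harmonic states U0 = x^2/2, U1 = x^2.
rng = np.random.default_rng(10)
kT = 0.596                                    # kcal/mol near 300 K

# Boltzmann samples of U0(x) = x^2/2 are Gaussian with variance kT.
x = rng.normal(0.0, np.sqrt(kT), 200_000)

dU = (2.0 * x**2 / 2) - (x**2 / 2)            # U1 - U0 on each sample
dF_est = -kT * np.log(np.mean(np.exp(-dU / kT)))

# Analytic result for harmonic states: dF = 0.5 * kT * ln(k1/k0)
dF_exact = 0.5 * kT * np.log(2.0)
```

In practice the exponential average is taken over many small windows along the reaction path, which is what the free energy contribution analysis then decomposes residue by residue.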

  15. Experimental investigation and phenomenological model development of flame kernel growth rate in a gasoline fuelled spark ignition engine

    Salvi, B.L.; Subramanian, K.A.

    2015-01-01

    Highlights: • Experimental measurement of the flame kernel growth rate (FKGR) in an SI engine. • FKGR is the highest at MBT timing as compared with retarded and advanced timings. • FKGR decreases with increase in engine speed. • FKGR is correlated with equivalence ratio, charge density, in-cylinder pressure and engine speed. - Abstract: As flame kernel growth plays a major role in the combustion of premixed charge in spark ignition engines for higher energy efficiency and lower emissions, an experimental study was carried out on a single-cylinder spark ignition research engine to measure the flame kernel growth rate (FKGR) using a spark plug fibre optics probe (VisioFlame sensor). The FKGR was measured at different power outputs with varied spark ignition timings and different engine speeds. The experimental results indicate that the FKGR was the highest with the maximum brake torque (MBT) spark timing and that it decreases with increase in engine speed. At an engine speed of 1000 RPM the FKGR was highest, 1.81 m/s, with MBT timing (20° bTDC), compared to 1.6 m/s (15° bTDC), 1.67 m/s (25° bTDC), and 1.61 m/s (30° bTDC) with retarded and advanced timings. In addition, a phenomenological model was developed for calculation of the FKGR. The model shows that the FKGR is a function of equivalence ratio, engine speed, in-cylinder pressure and charge density. The experimental results and methodology emerging from this study would be useful for optimization of engine parameters using the FKGR and for further development of the model for alternative fuels.

  16. Evaluation of the influence of double and triple Gaussian proton kernel models on accuracy of dose calculations for spot scanning technique.

    Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki

    2016-03-01

    The main purpose in this study was to present the results of beam modeling and how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for spot scanning technique. The accuracy of calculations was important for treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by TPS for the spot scanning technique. The dose distribution was calculated by convolving in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for spot scanning technique because the dose distribution was formed by cumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the kernel lateral model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporating them into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. The authors investigated the difference
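The double-Gaussian lateral kernel discussed above models a narrow primary core plus a wide, low-amplitude halo (from nuclear interactions); a triple-Gaussian adds one more halo term. The sketch below fits the double-Gaussian form to a synthetic lateral dose profile with illustrative parameters, not beam-line data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Double-Gaussian lateral dose kernel: core (sigma1) + halo (sigma2).
def double_gauss(r, w, s1, s2):
    g1 = np.exp(-r**2 / (2 * s1**2)) / (2 * np.pi * s1**2)
    g2 = np.exp(-r**2 / (2 * s2**2)) / (2 * np.pi * s2**2)
    return (1 - w) * g1 + w * g2          # w = halo weight

r = np.linspace(0, 50, 200)               # radial distance, mm
true_w, true_s1, true_s2 = 0.1, 4.0, 15.0
rng = np.random.default_rng(11)
dose = double_gauss(r, true_w, true_s1, true_s2) + rng.normal(0, 1e-6, r.size)

# Fit the kernel parameters at one depth; a TPS stores (w, s1, s2) per
# energy and depth in a look-up table, as described above.
popt, _ = curve_fit(double_gauss, r, dose, p0=(0.2, 3.0, 12.0))
w_hat, s1_hat, s2_hat = popt
```

Because spot scanning sums hundreds of beams, even the small halo weight w matters: neglecting it accumulates into a field-size-dependent error in absolute dose at the SOBP center, which is what the comparison in the paper quantifies.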

  17. Topics in bound-state dynamical processes: semiclassical eigenvalues, reactive scattering kernels and gas-surface scattering models

    Adams, J.E.

    1979-05-01

    The difficulty of applying the WKB approximation to problems involving arbitrary potentials has been confronted. Recent work has produced a convenient expression for the potential correction term. However, this approach does not yield a unique correction term and hence cannot be used to construct the proper modification. An attempt is made to overcome the uniqueness difficulties by imposing a criterion which permits identification of the correct modification. Sections of this work are: semiclassical eigenvalues for potentials defined on a finite interval; reactive scattering exchange kernels; a unified model for elastic and inelastic scattering from a solid surface; and selective absorption on a solid surface

  18. Convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation and rate constants: Case study of the spin-boson model

    Xu, Meng; Yan, Yaming; Liu, Yanying; Shi, Qiang

    2018-04-01

    The Nakajima-Zwanzig generalized master equation provides a formally exact framework to simulate quantum dynamics in condensed phases. Yet, the exact memory kernel is hard to obtain and calculations based on perturbative expansions are often employed. By using the spin-boson model as an example, we assess the convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation. The exact memory kernels are calculated by combining the hierarchical equation of motion approach and the Dyson expansion of the exact memory kernel. High order expansions of the memory kernels are obtained by extending our previous work to calculate perturbative expansions of open system quantum dynamics [M. Xu et al., J. Chem. Phys. 146, 064102 (2017)]. It is found that the high order expansions do not necessarily converge in certain parameter regimes where the exact kernel shows a long memory time, especially in cases of slow bath, weak system-bath coupling, and low temperature. The effectiveness of the Padé and Landau-Zener resummation approaches is tested, and the convergence of higher order rate constants beyond Fermi's golden rule is investigated.

  19. Multivariate and semiparametric kernel regression

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...
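
The Nadaraya-Watson estimator mentioned above admits a very compact sketch. The data, bandwidth, and noise level below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def nadaraya_watson(x_query, x, y, h):
    """Nadaraya-Watson regression with a Gaussian kernel:
    m(x0) = sum_i K((x0 - x_i)/h) * y_i / sum_i K((x0 - x_i)/h)."""
    x_query = np.atleast_1d(x_query)
    weights = np.exp(-0.5 * ((x_query[:, None] - x[None, :]) / h) ** 2)
    return (weights @ y) / weights.sum(axis=1)

# Synthetic regression data: noisy sine curve
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

# Smooth estimate at x = pi/2, where the true regression function is 1
m_hat = nadaraya_watson(np.array([np.pi / 2]), x, y, h=0.3)
```

Local polynomial fitting generalizes this local-constant estimator by fitting a low-degree polynomial within each kernel window, which reduces boundary bias.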

  20. Notes on the gamma kernel

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green's function.

  1. A new discrete dipole kernel for quantitative susceptibility mapping.

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation, both with synthetic phantoms and with in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI - a topic for future investigations. The proposed dipole kernel has a straightforward implementation in existing QSM routines. Copyright © 2018 Elsevier Inc. All rights reserved.
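
The continuous-formulation dipole kernel in k-space is D = 1/3 - kz^2/|k|^2. One common way to discretize it (an assumption for illustration, not necessarily the paper's exact operator) replaces the squared continuous frequencies with eigenvalues of the discrete Laplacian, which agree at low frequencies but differ near the Nyquist limit where the aliasing errors arise:

```python
import numpy as np

def dipole_kernel(shape, voxel=(1.0, 1.0, 1.0), discrete=False):
    """k-space unit dipole D = 1/3 - kz^2 / |k|^2. With discrete=True the
    squared frequencies are replaced by discrete-Laplacian eigenvalues,
    2 * (1 - cos(2*pi*n/N)) / dx^2 (one common discretisation)."""
    k2_axes = []
    for n, d in zip(shape, voxel):
        f = np.fft.fftfreq(n, d)                          # cycles / length
        if discrete:
            k2_axes.append((2.0 - 2.0 * np.cos(2 * np.pi * f * d)) / d ** 2)
        else:
            k2_axes.append((2 * np.pi * f) ** 2)          # continuous k^2
    k2x, k2y, k2z = np.meshgrid(*k2_axes, indexing="ij")
    k2 = k2x + k2y + k2z
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - k2z / k2
    D[0, 0, 0] = 0.0                                      # undefined at k = 0
    return D

D_cont = dipole_kernel((8, 8, 8))
D_disc = dipole_kernel((8, 8, 8), discrete=True)
```

Along the pure kz axis both kernels equal 1/3 - 1 = -2/3, and in the kx-ky plane both equal 1/3; the two formulations differ only in how mixed high-frequency components are weighted.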

  2. Modeling electro-magneto-hydrodynamic thermo-fluidic transport of biofluids with new trend of fractional derivative without singular kernel

    Abdulhameed, M.; Vieru, D.; Roslan, R.

    2017-10-01

    This paper investigates the electro-magneto-hydrodynamic flow of non-Newtonian biofluids, with heat transfer, through a cylindrical microchannel. The fluid is acted on by an arbitrary time-dependent pressure gradient, an external electric field and an external magnetic field. The governing equations are considered as fractional partial differential equations based on the Caputo-Fabrizio time-fractional derivatives without singular kernel. The usefulness of fractional calculus for studying fluid flows or heat and mass transfer phenomena has been proven. Several experimental measurements have led to the conclusion that, in such problems, models described by fractional differential equations are more suitable. The most common time-fractional derivative used in continuum mechanics is the Caputo derivative. However, two disadvantages appear when this derivative is used. First, the definition kernel is a singular function and, secondly, the analytical expressions of the problem solutions are expressed by generalized functions (Mittag-Leffler, Lorenzo-Hartley, Robotnov, etc.) which, generally, are not adequate for numerical calculations. The new time-fractional Caputo-Fabrizio derivative, without singular kernel, is more suitable for solving various theoretical and practical problems which involve fractional differential equations. Using the Caputo-Fabrizio derivative, calculations are simpler and the obtained solutions are expressed by elementary functions. Analytical solutions for the biofluid velocity and thermal transport are obtained by means of the Laplace and finite Hankel transforms. The influence of the fractional parameter, Eckert number and Joule heating parameter on the biofluid velocity and thermal transport is numerically analyzed and graphically presented.
This fact can be important in biochip technology, making this analysis technique effective for controlling nanovolume bioliquid samples in microfluidic devices used for biological
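
Because the Caputo-Fabrizio kernel is a plain exponential rather than a singular power law, the derivative can be evaluated with ordinary quadrature. A minimal numerical sketch, assuming the normalisation M(alpha) = 1 and checked against the closed form for f(t) = t:

```python
import numpy as np

def caputo_fabrizio(f_prime, t, alpha, n=4001):
    """Caputo-Fabrizio fractional derivative with M(alpha) = 1:
    D^a f(t) = 1/(1-a) * integral_0^t f'(tau) exp(-a(t-tau)/(1-a)) dtau,
    evaluated here by the trapezoidal rule."""
    tau = np.linspace(0.0, t, n)
    g = f_prime(tau) * np.exp(-alpha * (t - tau) / (1.0 - alpha))
    dt = tau[1] - tau[0]
    integral = (g[0] + g[-1]) / 2.0 * dt + g[1:-1].sum() * dt
    return integral / (1.0 - alpha)

# For f(t) = t (so f' = 1) the derivative has the closed form
# (1 - exp(-a t / (1 - a))) / a, which we use as a check.
alpha, t = 0.6, 2.0
numeric = caputo_fabrizio(lambda tau: np.ones_like(tau), t, alpha)
exact = (1.0 - np.exp(-alpha * t / (1.0 - alpha))) / alpha
```

The non-singular kernel is precisely why, as the abstract notes, solutions stay in terms of elementary functions instead of Mittag-Leffler-type special functions.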

  3. A kernel principal component analysis–based degradation model and remaining useful life estimation for the turbofan engine

    Delong Feng

    2016-05-01

    Full Text Available Remaining useful life estimation of the prognostics and health management technique is a complicated and difficult research question for maintenance. In this article, we consider the problem of prognostics modeling and estimation of the turbofan engine under complicated circumstances and propose a kernel principal component analysis–based degradation model and remaining useful life estimation method for such aircraft engines. We first analyze the output data created by the turbofan engine thermodynamic simulation based on the kernel principal component analysis method and then distinguish the qualitative and quantitative relationships between the key factors. Next, we build a degradation model for the engine fault based on the following assumptions: the engine has only constant failure (i.e. no sudden failure is included), and the engine has a Wiener process, which is a covariate standing for the engine system drift. To predict the remaining useful life of the turbofan engine, we built a health index based on the degradation model and used the method of maximum likelihood and the data from the thermodynamic simulation model to estimate the parameters of this degradation model. Through the data analysis, we obtained a trend model of the regression curve line that fits the actual statistical data. Based on the predicted health index model and the data trend model, we estimate the remaining useful life of the aircraft engine as the index reaches zero. At last, a case study involving engine simulation data demonstrates the precision and performance advantages of this proposed method; the precision of the method can reach 98.9% and the average precision is 95.8%.
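
The kernel PCA step can be sketched directly from its definition: build a Gram matrix, double-centre it in feature space, and project onto the leading eigenvectors. The RBF kernel and the random matrix below are illustrative stand-ins for the thermodynamic simulation output, not the article's data:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.5):
    """Kernel PCA with an RBF kernel: centre the Gram matrix in feature
    space, eigendecompose it, and return the projected coordinates."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one        # double centring
    vals, vecs = np.linalg.eigh(Kc)                   # ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]            # descending order
    # scale eigenvectors so each feature-space axis has unit norm
    alphas = vecs[:, :n_components] / np.sqrt(
        np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 6))    # stand-in for simulated sensor snapshots
Z = kernel_pca(X, n_components=2)
```

In the degradation setting, the leading nonlinear components would then feed the health index whose drift is modelled by the Wiener process.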

  4. Modeling and optimization by particle swarm embedded neural network for adsorption of zinc (II) by palm kernel shell based activated carbon from aqueous environment.

    Karri, Rama Rao; Sahu, J N

    2018-01-15

    Zn (II) is one of the common pollutants among heavy metals found in industrial effluents. Removal of pollutants from industrial effluents can be accomplished by various techniques, of which adsorption was found to be an efficient method. The application of adsorption is limited by the high cost of adsorbents. In this regard, a low cost adsorbent produced from palm oil kernel shell based agricultural waste is examined for its efficiency to remove Zn (II) from waste water and aqueous solution. The influence of independent process variables like initial concentration, pH, residence time, activated carbon (AC) dosage and process temperature on the removal of Zn (II) by palm kernel shell based AC in a batch adsorption process is studied systematically. Based on the design of experiment matrix, 50 experimental runs are performed with each process variable in the experimental range. The optimal values of the process variables to achieve maximum removal efficiency are studied using response surface methodology (RSM) and artificial neural network (ANN) approaches. A quadratic model, which consists of first order and second order regressive terms, is developed using the analysis of variance and the RSM - CCD framework. Particle swarm optimization, a meta-heuristic optimization method, is embedded in the ANN architecture to optimize the search space of the neural network. The optimized trained neural network depicts the testing data and validation data well, with R 2 equal to 0.9106 and 0.9279 respectively. The outcomes indicate the superiority of the ANN-PSO based model predictions over the quadratic model predictions provided by RSM. Copyright © 2017 Elsevier Ltd. All rights reserved.
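
The particle swarm component can be illustrated in isolation. The sketch below minimises a simple convex surrogate rather than the actual ANN training loss (an assumption for brevity); embedding it in the network, as the abstract describes, would only change the objective function passed in:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Global-best PSO: each particle is pulled toward its own best
    position and the swarm's best position, with inertia weight w."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Sanity check on a convex stand-in for a network training loss
best_x, best_val = pso(lambda p: np.sum(p ** 2), dim=4)
```

In the ANN-PSO hybrid, `p` would encode the network weights and the objective would be the training error, replacing gradient-based search of the weight space.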

  5. Particle models for discrete element modeling of bulk grain properties of wheat kernels

    Recent research has shown the potential of discrete element method (DEM) in simulating grain flow in bulk handling systems. Research has also revealed that simulation of grain flow with DEM requires establishment of appropriate particle models for each grain type. This research completes the three-p...

  6. New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.

    Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto

    2017-06-21

    We define three new linear response indices with promising applications for bond reactivity using the mathematical framework of τ-CRT (finite temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first order response of the Fukui kernel and is designed to integrate to the finite temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between the electron-accepting and electron-donating processes. Finally, the τ-hyper dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed before, our results for the τ-Fukui kernel and for the τ-dual kernel can be derived in the zero-temperature formulation of chemical reactivity theory with, among other things, the widely used parabolic interpolation model.

  7. Fault Detection for Shipboard Monitoring – Volterra Kernel and Hammerstein Model Approaches

    Lajic, Zoran; Blanke, Mogens; Nielsen, Ulrik Dam

    2009-01-01

    In this paper nonlinear fault detection for in-service monitoring and decision support systems for ships will be presented. The ship is described as a nonlinear system, and the stochastic wave elevation and the associated ship responses are conveniently modelled in the frequency domain. The transformation from time domain to frequency domain has been conducted by use of Volterra theory. The paper takes as an example fault detection of a containership on which a decision support system has been installed.

  8. Relativistic four-component calculations of indirect nuclear spin-spin couplings with efficient evaluation of the exchange-correlation response kernel

    Křístková, Anežka; Malkin, Vladimir G. [Institute of Inorganic Chemistry, Slovak Academy of Sciences, Dúbravská cesta 9, SK-84536 Bratislava (Slovakia); Komorovsky, Stanislav; Repisky, Michal [Centre for Theoretical and Computational Chemistry, University of Tromsø - The Arctic University of Norway, N-9037 Tromsø (Norway); Malkina, Olga L., E-mail: olga.malkin@savba.sk [Institute of Inorganic Chemistry, Slovak Academy of Sciences, Dúbravská cesta 9, SK-84536 Bratislava (Slovakia); Department of Inorganic Chemistry, Comenius University, Bratislava (Slovakia)

    2015-03-21

    In this work, we report on the development and implementation of a new scheme for efficient calculation of indirect nuclear spin-spin couplings in the framework of the four-component matrix Dirac-Kohn-Sham approach termed matrix Dirac-Kohn-Sham restricted magnetic balance resolution of identity for J and K, which takes advantage of the previous restricted magnetic balance formalism and the density fitting approach for the rapid evaluation of density functional theory exchange-correlation response kernels. The new approach is aimed at speeding up the bottleneck in the solution of the coupled perturbed equations: evaluation of the matrix elements of the kernel of the exchange-correlation potential. The performance of the new scheme has been tested on a representative set of indirect nuclear spin-spin couplings. The obtained results have been compared with the corresponding results of the reference method with traditional evaluation of the exchange-correlation kernel, i.e., without employing the fitted electron densities. Overall good agreement between both methods was observed, though the new approach tends to give values about 4%-5% higher than the reference method. On average, the solution of the coupled perturbed equations with the new scheme is about 8.5 times faster compared to the reference method.

  9. Defense Responses to Mycotoxin-Producing Fungi Fusarium proliferatum, F. subglutinans, and Aspergillus flavus in Kernels of Susceptible and Resistant Maize Genotypes.

    Lanubile, Alessandra; Maschietto, Valentina; De Leonardis, Silvana; Battilani, Paola; Paciolla, Costantino; Marocco, Adriano

    2015-05-01

    Developing kernels of resistant and susceptible maize genotypes were inoculated with Fusarium proliferatum, F. subglutinans, and Aspergillus flavus. Selected defense systems were investigated using real-time reverse transcription-polymerase chain reaction to monitor the expression of pathogenesis-related (PR) genes (PR1, PR5, PRm3, PRm6) and genes protective against oxidative stress (peroxidase, catalase, superoxide dismutase and ascorbate peroxidase) at 72 h postinoculation. The study was also extended to the analysis of the ascorbate-glutathione cycle and of the catalase, superoxide dismutase, and cytosolic and wall peroxidase enzymes. Furthermore, the hydrogen peroxide and malondialdehyde contents were studied to evaluate the oxidation level. Higher gene expression and enzymatic activities were observed in uninoculated kernels of the resistant line, conferring greater readiness against pathogen attack. Moreover, expression values of PR genes remained higher in the resistant line after inoculation, demonstrating a potentiated response to pathogen invasion. In contrast, reactive oxygen species-scavenging genes were strongly induced in the susceptible line only after pathogen inoculation, although their enzymatic activity was higher in the resistant line. Our data provide an important basis for further investigation of defense gene functions in developing kernels in order to improve resistance to fungal pathogens. Maize genotypes with overexpressed resistance traits could be profitably utilized in breeding programs focused on resistance to pathogens and grain safety.

  10. Relativistic four-component calculations of indirect nuclear spin-spin couplings with efficient evaluation of the exchange-correlation response kernel

    Křístková, Anežka; Malkin, Vladimir G.; Komorovsky, Stanislav; Repisky, Michal; Malkina, Olga L.

    2015-01-01

    In this work, we report on the development and implementation of a new scheme for efficient calculation of indirect nuclear spin-spin couplings in the framework of the four-component matrix Dirac-Kohn-Sham approach termed matrix Dirac-Kohn-Sham restricted magnetic balance resolution of identity for J and K, which takes advantage of the previous restricted magnetic balance formalism and the density fitting approach for the rapid evaluation of density functional theory exchange-correlation response kernels. The new approach is aimed at speeding up the bottleneck in the solution of the coupled perturbed equations: evaluation of the matrix elements of the kernel of the exchange-correlation potential. The performance of the new scheme has been tested on a representative set of indirect nuclear spin-spin couplings. The obtained results have been compared with the corresponding results of the reference method with traditional evaluation of the exchange-correlation kernel, i.e., without employing the fitted electron densities. Overall good agreement between both methods was observed, though the new approach tends to give values about 4%-5% higher than the reference method. On average, the solution of the coupled perturbed equations with the new scheme is about 8.5 times faster compared to the reference method.

  11. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine.

    Qiang Shang

    Full Text Available Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of the uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model, the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using the measured data of an expressway in Xiamen, China. The SSA-KELM model is compared with several well-known prediction models, including support vector machine, extreme learning machine, and the single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from accuracy improvement, the proposed model is more robust.
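
KELM training has a closed form: with kernel matrix K and regularisation constant C, the output weights are beta = (I/C + K)^-1 y, and prediction only needs kernel evaluations against the training inputs. A minimal sketch on synthetic data (the RBF kernel, its parameters, and the toy series are illustrative assumptions):

```python
import numpy as np

def rbf_gram(A, B, gamma):
    d = (np.sum(A ** 2, axis=1)[:, None] + np.sum(B ** 2, axis=1)[None, :]
         - 2.0 * A @ B.T)
    return np.exp(-gamma * np.maximum(d, 0.0))

def kelm_fit(X, y, C=1e4, gamma=1.0):
    """Closed-form KELM output weights: beta = (I/C + K)^-1 y."""
    K = rbf_gram(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + K, y)

def kelm_predict(X_new, X, beta, gamma=1.0):
    return rbf_gram(X_new, X, gamma) @ beta

# Toy regression target standing in for reconstructed traffic features
rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, (80, 1))
y = np.sin(3.0 * X[:, 0])
beta = kelm_fit(X, y)
y_hat = kelm_predict(X, X, beta)
```

In the hybrid model, the inputs would be the SSA-filtered, phase-space-reconstructed series, and C and gamma would be tuned by the gravitational search algorithm rather than fixed by hand.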

  12. Influence of differently processed mango seed kernel meal on ...

    Influence of differently processed mango seed kernel meal on performance response of west African ... and TD (consisted of spear grass and parboiled mango seed kernel meal with concentrate diet in a ratio of 35:30:35). ...

  13. Mixed kernel function support vector regression for global sensitivity analysis

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
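
A mixed kernel of this kind can be sketched as a convex combination of a polynomial kernel (global behaviour) and a Gaussian RBF kernel (local behaviour); since both are positive semidefinite, any non-negative weighted sum is again a valid kernel. The sketch below uses an ordinary inhomogeneous polynomial kernel rather than the paper's orthogonal polynomial kernel, which is an assumption for simplicity:

```python
import numpy as np

def mixed_kernel(X1, X2, lam=0.5, gamma=1.0, degree=3, coef0=1.0):
    """Mixed kernel K = lam * (x.y + coef0)^degree
                       + (1 - lam) * exp(-gamma * ||x - y||^2)."""
    poly = (X1 @ X2.T + coef0) ** degree
    d = (np.sum(X1 ** 2, axis=1)[:, None] + np.sum(X2 ** 2, axis=1)[None, :]
         - 2.0 * X1 @ X2.T)
    rbf = np.exp(-gamma * np.maximum(d, 0.0))
    return lam * poly + (1.0 - lam) * rbf

rng = np.random.default_rng(3)
X = rng.standard_normal((20, 3))
K = mixed_kernel(X, X)
min_eig = np.linalg.eigvalsh((K + K.T) / 2.0).min()
```

Such a callable Gram-matrix builder can be dropped into any kernel-method trainer (e.g. an SVR solver that accepts a precomputed kernel), with the mixing weight `lam` treated as an extra hyperparameter.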

  14. Partial Deconvolution with Inaccurate Blur Kernel.

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.

  15. Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA

    Bordes, Julien, E-mail: julien.bordes@inserm.fr [CRCT, UMR 1037 INSERM, Université Paul Sabatier, F-31037 Toulouse (France); UMR 1037, CRCT, Université Toulouse III-Paul Sabatier, F-31037 (France); Incerti, Sébastien, E-mail: incerti@cenbg.in2p3.fr [Université de Bordeaux, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Lampe, Nathanael, E-mail: nathanael.lampe@gmail.com [Université de Bordeaux, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Bardiès, Manuel, E-mail: manuel.bardies@inserm.fr [CRCT, UMR 1037 INSERM, Université Paul Sabatier, F-31037 Toulouse (France); UMR 1037, CRCT, Université Toulouse III-Paul Sabatier, F-31037 (France); Bordage, Marie-Claude, E-mail: marie-claude.bordage@inserm.fr [CRCT, UMR 1037 INSERM, Université Paul Sabatier, F-31037 Toulouse (France); UMR 1037, CRCT, Université Toulouse III-Paul Sabatier, F-31037 (France)

    2017-05-01

    When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water (“option 2” and its improved version, “option 4”). To provide Geant4-DNA users with new alternative physics models, a set of cross sections, extracted from the CPA100 MCTS code, has been added to Geant4-DNA. This new version is hereafter referred to as “Geant4-DNA-CPA100”. In this study, “Geant4-DNA-CPA100” was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a parameter that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models (“option 2” and “option 4”), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, a comparison was performed of two sets of DPKs that were simulated with “Geant4-DNA-CPA100” – the first set using Geant4's default settings, and the second using CPA100's original default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences between 1 keV and 10 keV were observed. It was highlighted that the DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with “Geant4-DNA-CPA100”. The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and “Geant4-DNA-CPA100” were

  16. Incorporating temporal variation in seabird telemetry data: time variant kernel density models

    Gilbert, Andrew; Adams, Evan M.; Anderson, Carl; Berlin, Alicia; Bowman, Timothy D.; Connelly, Emily; Gilliland, Scott; Gray, Carrie E.; Lepage, Christine; Meattey, Dustin; Montevecchi, William; Osenkowski, Jason; Savoy, Lucas; Stenhouse, Iain; Williams, Kathryn

    2015-01-01

    A key component of the Mid-Atlantic Baseline Studies project was tracking the individual movements of focal marine bird species (Red-throated Loon [Gavia stellata], Northern Gannet [Morus bassanus], and Surf Scoter [Melanitta perspicillata]) through the use of satellite telemetry. This element of the project was a collaborative effort with the Department of Energy (DOE), Bureau of Ocean Energy Management (BOEM), the U.S. Fish and Wildlife Service (USFWS), and Sea Duck Joint Venture (SDJV), among other organizations. Satellite telemetry is an effective and informative tool for understanding individual animal movement patterns, allowing researchers to mark an individual once, and thereafter follow the movements of the animal in space and time. Aggregating telemetry data from multiple individuals can provide information about the spatial use and temporal movements of populations. Tracking data are three-dimensional, with the first two dimensions, X and Y, ordered along the third dimension, time. GIS software has many capabilities to store, analyze and visualize the location information, but little or no support for visualizing the temporal data, and tools for processing temporal data are lacking. We explored several ways of analyzing the movement patterns using the spatiotemporal data provided by satellite tags. Here, we present the results of one promising method: time-variant kernel density analysis (Keating and Cherry, 2009). The goal of this chapter is to demonstrate new methods in spatial analysis to visualize and interpret tracking data for a large number of individual birds across time in the mid-Atlantic study area and beyond. In this chapter, we placed greater emphasis on analytical methods than on the behavior and ecology of the animals tracked. For more detailed examinations of the ecology and wintering habitat use of the focal species in the mid-Atlantic, see Chapters 20-22.
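
One simple way to realise a time-variant density surface (a sketch loosely in the spirit of Keating and Cherry, 2009, not their exact estimator) is to down-weight each telemetry fix by its temporal distance from the evaluation time. The coordinates, timestamps, and bandwidths below are synthetic assumptions:

```python
import numpy as np

def time_variant_kde(grid, points, times, t0, h_space=0.5, h_time=5.0):
    """2-D Gaussian KDE in which each fix is weighted by a Gaussian in
    time centred on the evaluation time t0."""
    w = np.exp(-0.5 * ((times - t0) / h_time) ** 2)
    w = w / w.sum()                                   # normalised weights
    diff = grid[:, None, :] - points[None, :, :]      # (n_grid, n_fix, 2)
    k = np.exp(-0.5 * np.sum((diff / h_space) ** 2, axis=2))
    k = k / (2.0 * np.pi * h_space ** 2)              # 2-D normalisation
    return k @ w                                      # weighted mixture

rng = np.random.default_rng(4)
points = rng.standard_normal((200, 2))     # stand-in telemetry fixes (x, y)
times = np.linspace(0.0, 30.0, 200)        # days since tagging
xs = np.linspace(-4.0, 4.0, 41)
gx, gy = np.meshgrid(xs, xs)
grid = np.column_stack([gx.ravel(), gy.ravel()])
density = time_variant_kde(grid, points, times, t0=15.0)
```

Sliding `t0` through the tracking period yields the sequence of utilisation surfaces that a GIS can then animate or aggregate.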

  17. Robust Kernel (Cross-) Covariance Operators in Reproducing Kernel Hilbert Space toward Kernel Methods

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2016-01-01

    To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose a robust kernel covariance operator (robust kernel CO) and a robust kernel cross-covariance operator (robust kern...

  18. Panel data specifications in nonparametric kernel regression

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  19. Ranking Support Vector Machine with Kernel Approximation

    Kai Chen

    2017-01-01

    Full Text Available Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.

  20. Ranking Support Vector Machine with Kernel Approximation.

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
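
The Nyström method approximates the kernel matrix from m landmark points, K ≈ C W⁺ Cᵀ with C = K(X, landmarks) and W the landmark Gram matrix, so a linear solver on the n×m feature map C W^(-1/2) replaces work on the full n×n matrix. A minimal sketch (RBF kernel and its parameters are illustrative); using all points as landmarks makes the approximation exact, which gives a convenient check:

```python
import numpy as np

def rbf_gram(A, B, gamma=0.5):
    d = (np.sum(A ** 2, axis=1)[:, None] + np.sum(B ** 2, axis=1)[None, :]
         - 2.0 * A @ B.T)
    return np.exp(-gamma * np.maximum(d, 0.0))

def nystrom_features(X, landmarks, gamma=0.5):
    """Nystrom feature map phi(X) = C W^{-1/2}, so that
    phi(X) phi(X)^T approximates the full kernel matrix."""
    C = rbf_gram(X, landmarks, gamma)
    W = rbf_gram(landmarks, landmarks, gamma)
    vals, vecs = np.linalg.eigh(W)
    keep = vals > 1e-10                      # pseudo-inverse square root
    W_inv_sqrt = vecs[:, keep] @ np.diag(vals[keep] ** -0.5) @ vecs[:, keep].T
    return C @ W_inv_sqrt

rng = np.random.default_rng(5)
X = rng.standard_normal((60, 4))
phi = nystrom_features(X, X)                 # all points as landmarks: exact
K_exact = rbf_gram(X, X)
```

In practice one would pick m ≪ n landmarks (e.g. by uniform sampling), accept a small approximation error, and train the pairwise linear model on `phi` instead of the kernelized RankSVM.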

  1. Nonparametric NAR-ARCH Modelling of Stock Prices by the Kernel Methodology

    Mohamed Chikhi

    2018-02-01

    Full Text Available This paper analyses the cyclical behaviour of the Orange stock price, listed on the French stock exchange, from 01/03/2000 to 02/02/2017, testing for nonlinearities through a class of conditional heteroscedastic nonparametric models. The linearity and Gaussianity assumptions are rejected for Orange stock returns, and informational shocks have transitory effects on returns and volatility. The forecasting results show that Orange stock prices are predictable in the short term and that the nonparametric NAR-ARCH model outperforms the parametric MA-APARCH model for short horizons. The estimates of this model are also better than the predictions of the random walk model. This finding provides evidence for a weak form of inefficiency in the Paris stock market with limited rationality, which gives rise to arbitrage opportunities.

  2. Metabolic network prediction through pairwise rational kernels.

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes; this propagates error accumulation when the pathways are predicted from incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amounts of sequence information, such as protein essentiality, natural language processing and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels (PRKs)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVMs to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy
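
The pairwise rational kernels of this record are built from weighted transducers, which is beyond a short sketch. As a hedged illustration of the general pairwise-kernel idea they build on, here is the standard symmetrized pairwise kernel constructed from an ordinary base kernel; all values are illustrative.

```python
import numpy as np

def rbf(u, v, sigma=1.0):
    # Ordinary base kernel between two entities (here: feature vectors)
    return np.exp(-np.sum((np.asarray(u) - np.asarray(v)) ** 2) / (2 * sigma**2))

def pairwise_kernel(pair1, pair2, k=rbf):
    # K((a,b),(c,d)) = k(a,c)k(b,d) + k(a,d)k(b,c): symmetric under swapping
    # the two pairs and under reordering within a pair, as pairwise
    # classification requires.
    (a, b), (c, d) = pair1, pair2
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

p = ([0.0, 1.0], [1.0, 0.0])
q = ([0.5, 0.5], [1.0, 1.0])
print(pairwise_kernel(p, q))
```

A pairwise SVM then uses this Gram function over pairs of entities (e.g. candidate substrate/product pairs) in place of a standard kernel.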

  3. Determination of pyrolysis characteristics and kinetics of palm kernel shell using TGA–FTIR and model-free integral methods

    Ma, Zhongqing; Chen, Dengyu; Gu, Jie; Bao, Binfu; Zhang, Qisheng

    2015-01-01

    Highlights: • Model-free integral kinetics and analytical TGA–FTIR methods were applied to the pyrolysis of PKS. • The pyrolysis mechanism of PKS was elaborated. • Thermal stability was established: lignin > cellulose > xylan. • Detailed compositions of the volatiles from PKS pyrolysis were determined. • The interaction of the three biomass components led to the fluctuation of activation energy in PKS pyrolysis. - Abstract: Palm kernel shell (PKS) from palm oil production is a potential biomass source for bio-energy production. A fundamental understanding of PKS pyrolysis behavior and kinetics is essential to its efficient thermochemical conversion. The thermal degradation profile in derivative thermogravimetry (DTG) analysis showed two significant mass-loss peaks, mainly related to the decomposition of hemicellulose and cellulose respectively. This differentiates PKS from other biomass (e.g. wheat straw and corn stover), which presents just one peak or one accompanied by an extra "shoulder" peak (e.g. wheat straw). According to the Fourier transform infrared spectrometry (FTIR) analysis, the prominent volatile components generated by the pyrolysis of PKS were CO₂ (2400–2250 cm⁻¹ and 586–726 cm⁻¹), aldehydes, ketones and organic acids (1900–1650 cm⁻¹), and alkanes and phenols (1475–1000 cm⁻¹). The dependence of the activation energy on the conversion rate was estimated by two model-free integral methods, the Flynn–Wall–Ozawa (FWO) and Kissinger–Akahira–Sunose (KAS) methods, at different heating rates. The fluctuation of the activation energy can be interpreted as the result of interactive reactions among cellulose, hemicellulose and lignin degradation occurring in the pyrolysis process. Based on the TGA–FTIR analysis and the model-free integral kinetics methods, the pyrolysis mechanism of PKS is elaborated in this paper.
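
The KAS method mentioned above reduces, at a fixed conversion, to a linear regression of ln(β/T²) on 1/T whose slope is −Ea/R. A minimal sketch with synthetic data; the activation energy and intercept constant are assumed values, not the paper's.

```python
import numpy as np

R = 8.314            # gas constant, J/(mol K)
Ea_true = 150_000.0  # assumed activation energy at a fixed conversion, J/mol
C = 10.0             # lumped KAS intercept constant (illustrative)

# KAS relation at fixed conversion: ln(beta / T^2) = C - Ea / (R * T).
# Build synthetic (heating rate, peak temperature) pairs lying on that line.
T = np.array([550.0, 575.0, 600.0, 625.0])      # temperatures, K
beta = T**2 * np.exp(C - Ea_true / (R * T))     # implied heating rates, K/min

slope, intercept = np.polyfit(1.0 / T, np.log(beta / T**2), 1)
Ea_est = -slope * R
print(Ea_est)  # recovers ~150000 J/mol
```

Repeating this fit over a grid of conversion values yields the activation-energy-versus-conversion curve whose fluctuation the abstract discusses.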

  4. DEVELOPMENT OF A SUPPORT VECTOR MACHINE (SVM) MODEL WITH AN EXPANDED DATASET FOR FOREX BUSINESS PREDICTION USING THE KERNEL TRICK METHOD

    adi sucipto

    2017-09-01

    Full Text Available There are many types of investments that can generate income, such as land, houses, gold and other precious metals, as well as financial assets such as stocks, mutual funds, bonds and the money or capital markets. One investment attracting considerable attention today is capital market investment. The purpose of this study is to predict and improve the accuracy of foreign exchange rate prediction in the forex business by using the Support Vector Machine model with a larger dataset than previous research, which used 1558 records. This study uses currency exchange rate data obtained from PT. Best Profit Future, Surabaya branch, in the form of records with open, high, low and close attributes for the Euro to US Dollar exchange rate at 1-minute intervals from May 12, 2016 at 09:51 until May 13, 2016 at 12:30, a total of 1689 records. Applying the Support Vector Machine model with the kernel trick method to this dataset yielded a considerable prediction accuracy of 97.86%, indicating that the movement of the Euro to US Dollar exchange rate on May 12-13, 2016 can be predicted precisely.
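
A hedged sketch of the kind of SVM-with-kernel-trick setup this record describes, with purely synthetic bars standing in for the EUR/USD series; the feature construction and labels are our assumptions, not the paper's.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Purely synthetic stand-in for 1-minute EUR/USD bars (open/high/low/close):
# alternating up/down drift plus small noise, labelled by bar direction.
n = 400
opens = rng.normal(1.10, 0.01, n)
drift = np.where(np.arange(n) % 2 == 0, 1e-3, -1e-3)
closes = opens + drift + rng.normal(0, 1e-4, n)
highs = np.maximum(opens, closes) + 1e-4
lows = np.minimum(opens, closes) - 1e-4

# Features derived from the OHLC attributes; labels: did the bar close up?
X = np.column_stack([closes - opens, highs - lows])
y = (closes > opens).astype(int)

# The RBF kernel is the "kernel trick": an implicit nonlinear feature space
clf = SVC(kernel="rbf", gamma="scale", C=10.0).fit(X, y)
print(clf.score(X, y))  # training accuracy on this synthetic set
```

On real data one would of course evaluate on held-out bars rather than the training set; the sketch only shows the model-building step.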

  5. Modeling and Simulation on NOx and N2O Formation in Co-combustion of Low-rank Coal and Palm Kernel Shell

    Mahidin Mahidin

    2012-12-01

    Full Text Available NOx and N2O emissions from coal combustion are claimed to be major contributors to acid rain, photochemical smog, greenhouse and ozone depletion problems. Accordingly, the formation of these emissions is a topic of interest in the combustion area. In this paper, a theoretical study, by modeling and simulation, of NOx and N2O formation in the co-combustion of low-rank coal and palm kernel shell has been carried out. The combustion model was developed using the principle of chemical-reaction equilibrium. Simulation of the model to evaluate the composition of the flue gas was performed by minimizing the Gibbs free energy. The results showed that introducing biomass into coal combustion can reduce the NOx concentration considerably. The maximum NO level in co-combustion of low-rank coal and palm kernel shell with a 1:1 fuel composition is 2,350 ppm, low compared to up to 3,150 ppm for combustion of low-rank coal alone. Moreover, N2O is less than 0.25 ppm in all cases. Keywords: low-rank coal, N2O emission, NOx emission, palm kernel shell

  6. The Rapid Evaluation of Mean Concentration Fields in Lagrangian Stochastic Modelling Using a Density Kernel Estimator

    Shao, Y

    2004-01-01

    Lagrangian Stochastic (LS) particle models have proven to be a useful computational tool for the description and prediction of dispersion of pollutant releases in complex meteorological situations (e.g...

  7. Optimized Kernel Entropy Components.

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
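
KECA's central step, ranking kernel eigenvectors by their entropy contribution rather than by eigenvalue, can be sketched as follows; the Gaussian kernel width and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 3))
sigma = 1.5

# Gaussian (RBF) kernel matrix
sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq / (2 * sigma**2))

lam, E = np.linalg.eigh(K)        # eigenvalues ascending, columns = eigenvectors
ones = np.ones(len(X))

# The Renyi entropy estimate V = (1/N^2) * 1' K 1 decomposes over eigenpairs:
# pair i contributes lam_i * (e_i . 1)^2. KECA keeps the top contributors.
contrib = lam * (E.T @ ones) ** 2
keca_order = np.argsort(contrib)[::-1]    # rank by entropy contribution
kpca_order = np.argsort(lam)[::-1]        # rank by variance (kernel PCA)
print(keca_order[:3], kpca_order[:3])
```

The two orderings can differ: an eigenvector with a large eigenvalue but near-zero mean projection carries variance yet little entropy, which is exactly the distinction the record describes.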

  8. Subsampling Realised Kernels

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...
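
A minimal sketch of a realised kernel estimator with Parzen weights, the smooth kernel discussed in this record; the return series, noise level, and bandwidth H are synthetic assumptions.

```python
import numpy as np

def parzen(x):
    # Parzen weight function: smooth, with k(0) = 1 and k(1) = 0
    x = abs(x)
    if x <= 0.5:
        return 1 - 6 * x**2 + 6 * x**3
    if x <= 1.0:
        return 2 * (1 - x) ** 3
    return 0.0

def realised_kernel(r, H):
    # gamma(h): realised autocovariance of the return series at lag h
    gamma = lambda h: float(np.sum(r[abs(h):] * r[:len(r) - abs(h)]))
    rk = gamma(0)
    for h in range(1, H + 1):
        rk += parzen((h - 1) / H) * (gamma(h) + gamma(-h))
    return rk

rng = np.random.default_rng(3)
efficient = rng.normal(0, 0.001, 2000)          # latent high-frequency returns
noise = np.diff(rng.normal(0, 0.0005, 2001))    # differenced microstructure noise
rk = realised_kernel(efficient + noise, H=30)
print(rk)
```

Weighting the autocovariances down with lag is what makes the estimator robust to the market microstructure noise that ruins plain realised variance.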

  9. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population.

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters, whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work the basis functions are set to be Gaussian, for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs), depending on which phase-space variables are approximated by Gaussian functions. Three sample problems are studied: a univariate double-well potential, a bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas. The LSQKD model is verified for these problems by comparing its results against the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD model predicts well the stationary PDF, for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both of these problems the least-squares approximation is made on all phase-space variables, resulting in a set of ODEs, with time as the independent variable, for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable, leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, very good performance by LSQKD is observed for a wide range of diffusivities.

  10. Robust Building Energy Load Forecasting Using Physically-Based Kernel Models

    Anand Krishnan Prakash

    2018-04-01

    Full Text Available Robust and accurate building energy load forecasting is important for helping building managers and utilities to plan, budget, and strategize energy resources in advance. With the recent widespread adoption of smart meters in buildings, a significant amount of building energy consumption data has become available. Many studies have developed physics-based white box models and data-driven black box models to predict building energy consumption; however, they require extensive prior knowledge about the building system, need a large set of training data, or lack robustness to different forecasting scenarios. In this paper, we introduce a new building energy forecasting method based on Gaussian Process Regression (GPR) that incorporates physical insights about load data characteristics to improve accuracy while reducing training requirements. GPR is a non-parametric regression method that models the data as a joint Gaussian distribution with mean and covariance functions and forecasts using Bayesian updating. We model the covariance function of the GPR to reflect the data patterns in different forecasting horizon scenarios, as prior knowledge. Our method takes advantage of the modeling flexibility and computational efficiency of GPR while benefiting from the physical insights to further improve the training efficiency and accuracy. We evaluate our method with three field datasets from two university campuses (Carnegie Mellon University and Stanford University) for both short- and long-term load forecasting. The results show that our method performs more accurately, especially when the training dataset is small, compared to other state-of-the-art forecasting models (up to 2.95 times smaller prediction error).
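
A hedged sketch of GPR load forecasting with a physically motivated covariance, using scikit-learn. The daily-periodic-pattern-times-slow-drift kernel mirrors the idea of encoding load characteristics as prior knowledge, but the data and hyperparameters below are invented, not the paper's.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(4)

# Hypothetical hourly building load: daily cycle + slow trend + noise
t = np.arange(0, 96, dtype=float)[:, None]           # 4 days of hourly stamps
load = 50 + 10 * np.sin(2 * np.pi * t[:, 0] / 24) + 0.05 * t[:, 0]
load = load + rng.normal(0, 0.5, len(t))

# The covariance encodes the physical insight: a 24-hour periodic pattern
# modulated by a slowly drifting envelope, plus observation noise.
kernel = (ExpSineSquared(length_scale=1.0, periodicity=24.0)
          * RBF(length_scale=100.0)
          + WhiteKernel(noise_level=0.25))
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, load)

t_next = np.arange(96, 120, dtype=float)[:, None]    # forecast the next day
mean, std = gpr.predict(t_next, return_std=True)
print(mean[:3], std[:3])
```

The returned standard deviation is the Bayesian predictive uncertainty, which is what makes GPR forecasts usable for robust planning.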

  11. Analytic solution of vector model kinetic equations with constant kernel and their applications

    Latyshev, A.V.

    1993-01-01

    For the first time, exact solutions of half-space boundary value problems for model kinetic equations are obtained. Here x > 0, μ ∈ (−∞, 0) ∪ (0, +∞), Σ = diag{σ1, σ2}, C = [cij] is a 2 × 2 matrix, and Ψ(x, μ) is a column vector with elements ψ1 and ψ2. As an application, an exact solution for the diffusion slip flow of a binary gas mixture is found for the model Boltzmann equation with the collision operator in McCormack's form. 18 refs

  12. An Iterative Interplanetary Scintillation (IPS) Analysis Using Time-dependent 3-D MHD Models as Kernels

    Jackson, B. V.; Yu, H. S.; Hick, P. P.; Buffington, A.; Odstrcil, D.; Kim, T. K.; Pogorelov, N. V.; Tokumaru, M.; Bisi, M. M.; Kim, J.; Yun, J.

    2017-12-01

    The University of California, San Diego has developed an iterative remote-sensing time-dependent three-dimensional (3-D) reconstruction technique which provides volumetric maps of density, velocity, and magnetic field. We have applied this technique in near real time for over 15 years with a kinematic model approximation to fit data from ground-based interplanetary scintillation (IPS) observations. Our modeling concept extends volumetric data from an inner boundary placed above the Alfvén surface out to the inner heliosphere. We now use this technique to drive 3-D MHD models at their inner boundary and generate output 3-D data files that are fit to remotely-sensed observations (in this case IPS observations), and iterated. These analyses are also iteratively fit to in-situ spacecraft measurements near Earth. To facilitate this process, we have developed a traceback from input 3-D MHD volumes to yield an updated boundary in density, temperature, and velocity, which also includes magnetic-field components. Here we will show examples of this analysis using the ENLIL 3D-MHD and the University of Alabama Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) heliospheric codes. These examples help refine poorly-known 3-D MHD variables (i.e., density, temperature), and parameters (gamma) by fitting heliospheric remotely-sensed data between the region near the solar surface and in-situ measurements near Earth.

  13. Integrating K-means Clustering with Kernel Density Estimation for the Development of a Conditional Weather Generation Downscaling Model

    Chen, Y.; Ho, C.; Chang, L.

    2011-12-01

    In previous decades, climate change caused by global warming has increased the occurrence frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely known tools, showing possible weather conditions under the pre-defined CO2 emission scenarios announced by the IPCC. Because the study area of GCMs is the entire earth, the grid sizes of GCMs are much larger than the basin scale. To overcome this gap, a statistical downscaling technique can transform regional-scale weather factors into basin-scale precipitation. Statistical downscaling techniques can be divided into three categories: transfer functions, weather generators and weather types. The first two categories describe the relationships between the weather factors and precipitation based on, respectively, deterministic algorithms, such as linear or nonlinear regression and ANNs, and stochastic approaches, such as Markov chain theory and statistical distributions. Weather-type methods can cluster weather factors, which are high-dimensional and continuous variables, into weather types, which are a limited number of discrete states. In this study, the proposed downscaling model integrates the weather type, using the K-means clustering algorithm, and the weather generator, using kernel density estimation. The study area is the Shihmen basin in northern Taiwan. The research process contains two steps, a calibration step and a synthesis step. Three sub-steps were used in the calibration step. First, weather factors, such as pressures, humidities and wind speeds, obtained from NCEP, and the precipitation observed at rainfall stations were collected for downscaling. Second, K-means clustering grouped the weather factors into four weather types. Third, the Markov chain transition matrices and the
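
The calibration/synthesis idea, K-means weather types plus per-type kernel density generation, can be sketched as follows. All data are synthetic and the conditioning structure is a simplification of the record's model (which also uses Markov chain transitions between types).

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

# Hypothetical daily weather factors (e.g. pressure, humidity, wind) + rainfall
n = 500
factors = rng.normal(size=(n, 3))
rain = np.exp(0.5 * factors[:, 1] + rng.normal(0, 0.5, n))  # wetter when humid

# Calibration step 1: cluster continuous weather factors into 4 weather types
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(factors)

# Calibration step 2: one kernel density estimate of rainfall per weather type
kdes = {c: gaussian_kde(rain[km.labels_ == c]) for c in range(4)}

# Synthesis step: classify new weather factors into types, then sample
# rainfall from the KDE of the matching weather type
types = km.predict(rng.normal(size=(10, 3)))
synthetic = np.concatenate([kdes[int(c)].resample(1, seed=0)[0] for c in types])
print(synthetic)
```

Replacing the i.i.d. draw of `types` with a fitted Markov chain over the cluster labels would recover the temporal persistence the full model includes.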

  14. Functional Brain Imaging Synthesis Based on Image Decomposition and Kernel Modeling: Application to Neurodegenerative Diseases

    Francisco J. Martinez-Murcia

    2017-11-01

    Full Text Available The rise of neuroimaging in research and clinical practice, together with the development of new machine learning techniques, has strongly encouraged the Computer Aided Diagnosis (CAD) of different diseases and disorders. However, these algorithms are often tested on proprietary datasets to which access is limited and, therefore, a direct comparison between CAD procedures is not possible. Furthermore, the sample size is often small for developing accurate machine learning methods. Multi-center initiatives are currently a very useful, although limited, tool for the recruitment of large populations and the standardization of CAD evaluation. As an alternative, we propose a brain image synthesis procedure intended to generate a new image set that shares characteristics with an original one. Our system focuses on nuclear imaging modalities such as PET or SPECT brain images. We analyze the dataset by applying PCA to the original dataset, and then model the distribution of samples in the projected eigenbrain space using a Probability Density Function (PDF) estimator. Once the model has been built, we can generate new coordinates in the eigenbrain space belonging to the same class, which can then be projected back to the image space. The system has been evaluated on different functional neuroimaging datasets, assessing the resemblance of the synthetic images to the original ones, the differences between them, their generalization ability and the independence of the synthetic dataset with respect to the original. The synthetic images maintain the differences between groups found in the original dataset, with no significant differences when comparing them to real-world samples. Furthermore, they featured a similar performance and generalization capability to that of the original dataset. These results prove that these images are suitable for standardizing the evaluation of CAD pipelines, and for providing data augmentation in machine learning systems -e.g. in deep
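
The synthesis pipeline described here, PCA projection, PDF estimation in the eigenbrain space, sampling, and back-projection, can be sketched with scikit-learn and SciPy; random data stands in for the flattened brain images.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)

# Stand-in for flattened nuclear images: 80 "subjects" x 500 "voxels"
images = rng.normal(size=(80, 500)) + np.linspace(0, 1, 500)

# 1) Project the dataset into the "eigenbrain" space
pca = PCA(n_components=10).fit(images)
scores = pca.transform(images)

# 2) Model the distribution of scores with a kernel density estimator
kde = gaussian_kde(scores.T)

# 3) Sample new eigenbrain coordinates and map them back to image space
new_scores = kde.resample(20, seed=0).T
synthetic = pca.inverse_transform(new_scores)
print(synthetic.shape)  # (20, 500)
```

Fitting one density model per diagnostic class, as the record does, keeps the group differences intact in the synthetic set.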

  15. Iterative software kernels

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: `Current status of user level sparse BLAS`; `Current status of the sparse BLAS toolkit`; and `Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit`.

  16. Putting Priors in Mixture Density Mercer Kernels

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite, dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms, like different versions of EM, and numeric optimization methods, like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.

  17. Classification With Truncated Distance Kernel.

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
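
The TL1 kernel itself is simple to state: K(x, y) = max(ρ − ||x − y||₁, 0), so points farther apart than ρ in the L1 sense do not interact at all. A minimal sketch; the 0.7·d default for ρ is treated here as an assumed heuristic.

```python
import numpy as np

def tl1_gram(X, Y, rho=None):
    # Truncated distance kernel: K(x, y) = max(rho - ||x - y||_1, 0).
    # rho controls the radius of the locally linear subregions; the 0.7 * dim
    # default is an assumed heuristic, not a universal rule.
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    if rho is None:
        rho = 0.7 * X.shape[1]
    d = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=-1)
    return np.maximum(rho - d, 0.0)

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 0.9]])
G = tl1_gram(X, X)
print(G)
```

Because the Gram matrix is computed explicitly, such a function can be passed as a callable kernel to standard SVM toolboxes, which is the "replace the kernel evaluation" usage the record mentions.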

  18. Quantal Response: Nonparametric Modeling

    2017-01-01

    capture the behavior of observed phenomena. Higher-order polynomial and finite-dimensional spline basis models allow for more complicated responses as the ... flexibility as these are nonparametric (not constrained to any particular functional form). These should be useful in identifying nonstandard behavior via ... deviance Δ = −2 log(L_reduced/L_full) is defined in terms of the likelihood function L. For normal error, L_full = 1, and based on Eq. A-2, we have log
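
The deviance quoted in this snippet is the standard likelihood-ratio statistic; written out in full (a reconstruction, since the surrounding derivation is elided):

```latex
\Delta = -2\,\log\frac{L_{\text{reduced}}}{L_{\text{full}}}
       = 2\left(\ell_{\text{full}} - \ell_{\text{reduced}}\right),
\qquad \ell = \log L .
```

Under the usual regularity conditions, Δ is asymptotically χ²-distributed with degrees of freedom equal to the difference in the number of parameters between the full and reduced models.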

  19. Polytomous diagnosis of ovarian tumors as benign, borderline, primary invasive or metastatic: development and validation of standard and kernel-based risk prediction models

    Testa Antonia C

    2010-10-01

    Full Text Available Abstract Background Hitherto, risk prediction models for preoperative ultrasound-based diagnosis of ovarian tumors were dichotomous (benign versus malignant). We develop and validate polytomous models (models that predict more than two events) to diagnose ovarian tumors as benign, borderline, primary invasive or metastatic invasive. The main focus is on how different types of models perform and compare. Methods A multi-center dataset containing 1066 women was used for model development and internal validation, whilst another multi-center dataset of 1938 women was used for temporal and external validation. Models were based on standard logistic regression and on penalized kernel-based algorithms (least squares support vector machines and kernel logistic regression). We used true polytomous models as well as combinations of dichotomous models based on the 'pairwise coupling' technique to produce polytomous risk estimates. Careful variable selection was performed, based largely on cross-validated c-index estimates. Model performance was assessed with the dichotomous c-index (i.e. the area under the ROC curve) and a polytomous extension, and with calibration graphs. Results For all models, between 9 and 11 predictors were selected. Internal validation was successful, with polytomous c-indexes between 0.64 and 0.69. For the best model, dichotomous c-indexes were between 0.73 (primary invasive vs metastatic) and 0.96 (borderline vs metastatic). On temporal and external validation, overall discrimination performance was good, with polytomous c-indexes between 0.57 and 0.64. However, discrimination between primary and metastatic invasive tumors decreased to near random levels. Standard logistic regression performed well in comparison with advanced algorithms, and combining dichotomous models performed well in comparison with true polytomous models. The best model was a combination of dichotomous logistic regression models. This model is available online

  20. Kernels for structured data

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  1. Identification of Fusarium damaged wheat kernels using image analysis

    Ondřej Jirsa

    2011-01-01

    Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, due to its subjective approach, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85%. The shape descriptors themselves were not specific enough to distinguish individual kernels.
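
Discriminating kernels from RGBH descriptors via linear discriminant analysis can be sketched as follows; the descriptor values below are invented, chosen only to be separable, not taken from the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)

# Hypothetical RGBH descriptors per kernel image (R, G, B, hue); Fusarium-
# damaged kernels are modelled here as paler, i.e. shifted colour channels.
healthy = rng.normal([120, 100, 60, 0.10], [10, 10, 8, 0.02], size=(100, 4))
damaged = rng.normal([180, 140, 110, 0.05], [12, 12, 10, 0.02], size=(100, 4))
X = np.vstack([healthy, damaged])
y = np.array([0] * 100 + [1] * 100)

# Linear discriminant analysis: find the linear combination of descriptors
# that best separates the two visually assessed classes.
lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.score(X, y))
```

In the study the corresponding model, trained on real descriptors, reached 85% accuracy; the sketch only shows the mechanics of the classifier.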

  2. Mechanical response of common millet (Panicum miliaceum) seeds under quasi-static compression: Experiments and modeling.

    Hasseldine, Benjamin P J; Gao, Chao; Collins, Joseph M; Jung, Hyun-Do; Jang, Tae-Sik; Song, Juha; Li, Yaning

    2017-09-01

    The common millet (Panicum miliaceum) seedcoat has a fascinating complex microstructure, with jigsaw puzzle-like epidermis cells articulated via wavy intercellular sutures to form a compact layer to protect the kernel inside. However, little research has been conducted on linking the microstructure details with the overall mechanical response of this interesting biological composite. To this end, an integrated experimental-numerical-analytical investigation was conducted to both characterize the microstructure and ascertain the microscale mechanical properties and to test the overall response of kernels and full seeds under macroscale quasi-static compression. Scanning electron microscopy (SEM) was utilized to examine the microstructure of the outer seedcoat and nanoindentation was performed to obtain the material properties of the seedcoat hard phase material. A multiscale computational strategy was applied to link the microstructure to the macroscale response of the seed. First, the effective anisotropic mechanical properties of the seedcoat were obtained from finite element (FE) simulations of a microscale representative volume element (RVE), which were further verified from sophisticated analytical models. Then, macroscale FE models of the individual kernel and full seed were developed. Good agreement between the compression experiments and FE simulations were obtained for both the kernel and the full seed. The results revealed the anisotropic property and the protective function of the seedcoat, and showed that the sutures of the seedcoat play an important role in transmitting and distributing loads in responding to external compression. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Locally linear approximation for Kernel methods : the Railway Kernel

    Muñoz, Alberto; González, Javier

    2008-01-01

    In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...

  4. Kernel Bayesian ART and ARTMAP.

    Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan

    2018-02-01

    Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model, ARTMAP, is a powerful tool for classification. Among several improvements, such as the Fuzzy and Gaussian based models, the state-of-the-art model is the Bayesian based one, which resolves the drawbacks of the others. However, it is known that the Bayesian approach incurs a high computational cost for high-dimensional data and large numbers of samples, and that the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating the Kernel Bayes' Rule (KBR) and the Correntropy Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computation in KBA and KBAM. Furthermore, the correntropy-based similarity measurement improves the noise reduction ability even in high-dimensional spaces. The simulation experiments show that KBA exhibits a superior self-organizing capability to BA, and KBAM provides superior classification ability to BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Data-variant kernel analysis

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  6. TISK 1.0: An easy-to-use Python implementation of the time-invariant string kernel model of spoken word recognition.

    You, Heejo; Magnuson, James S

    2018-04-30

    This article describes a new Python distribution of TISK, the time-invariant string kernel model of spoken word recognition (Hannagan et al. in Frontiers in Psychology, 4, 563, 2013). TISK is an interactive-activation model similar to the TRACE model (McClelland & Elman in Cognitive Psychology, 18, 1-86, 1986), but TISK replaces most of TRACE's reduplicated, time-specific nodes with theoretically motivated time-invariant, open-diphone nodes. We discuss the utility of computational models as theory development tools, the relative merits of TISK as compared to other models, and the ways in which researchers might use this implementation to guide their own research and theory development. We describe a TISK model that includes features that facilitate in-line graphing of simulation results, integration with standard Python data formats, and graph and data export. The distribution can be downloaded from https://github.com/maglab-uconn/TISK1.0.

  7. Digital signal processing with kernel methods

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  8. Kernel-based adaptive learning improves accuracy of glucose predictive modelling in type 1 diabetes: A proof-of-concept study.

    Georga, Eleni I; Principe, Jose C; Rizos, Evangelos C; Fotiadis, Dimitrios I

    2017-07-01

    This study aims at demonstrating the need for nonlinear recursive models for the identification and prediction of the dynamic glucose system in type 1 diabetes. Nonlinear regression is performed in a reproducing kernel Hilbert space, by the Approximate Linear Dependency Kernel Recursive Least Squares (KRLS-ALD) algorithm, such that a sparse model structure is accomplished. The method is evaluated on seven people with type 1 diabetes in free-living conditions, where a change in glycaemic dynamics is forced by increasing the level of physical activity in the middle of the observational period. The univariate input allows for short-term (≤30 min) predictions, with KRLS-ALD reaching an average root mean square error of 15.22±5.95 mg dL-1 and an average time lag of 17.14±2.67 min for a horizon of 30 min. Its performance is considerably better than that of time-invariant (regularized) linear regression models.
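
    The sparsification step that keeps the KRLS-ALD model structure sparse can be sketched as an approximate-linear-dependency (ALD) test: a new sample enters the kernel dictionary only if it is poorly represented by the current dictionary members. The following is a minimal illustration; the kernel, bandwidth and threshold are assumed values, not those of the study.

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    """Gaussian (RBF) kernel; sigma is an assumed illustrative bandwidth."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / (2 * sigma ** 2))

def ald_admit(dictionary, x, nu=0.1, sigma=1.0):
    """ALD test: admit x only if it cannot be approximated (within
    tolerance nu) by a kernel-space combination of current members."""
    if not dictionary:
        return True
    K = np.array([[rbf(a, b, sigma) for b in dictionary] for a in dictionary])
    k = np.array([rbf(a, x, sigma) for a in dictionary])
    # delta = k(x,x) - k^T K^{-1} k  (residual of the best projection)
    delta = rbf(x, x, sigma) - k @ np.linalg.solve(K, k)
    return delta > nu

dictionary = []
for sample in [[0.0], [0.01], [2.0], [2.02], [4.0]]:
    if ald_admit(dictionary, sample):
        dictionary.append(sample)
print(len(dictionary))  # near-duplicates are rejected, so the dictionary stays sparse
```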

  9. Carbon Dioxide Mediates the Response to Temperature and Water Activity Levels in Aspergillus flavus during Infection of Maize Kernels

    Matthew K. Gilbert

    2017-12-01

    Aspergillus flavus is a saprophytic fungus that may colonize several important crops, including cotton, maize, peanuts and tree nuts. Concomitant with A. flavus colonization is its potential to secrete mycotoxins, of which the most prominent is aflatoxin. Temperature, water activity (aw) and carbon dioxide (CO2) are three environmental factors shown to influence the fungus-plant interaction, and all three are predicted to undergo significant changes in the next century. In this study, we used RNA sequencing to better understand the transcriptomic response of the fungus to aw, temperature, and elevated CO2 levels. We demonstrate that aflatoxin (AFB1) production on maize grain was altered by water availability, temperature and CO2. RNA-sequencing data indicated that several genes, in particular those involved in the biosynthesis of secondary metabolites, exhibit different responses to water availability or temperature stress depending on the atmospheric CO2 content. Other gene categories affected by CO2 levels alone (350 ppm vs. 1000 ppm at 30 °C/0.99 aw) included amino acid metabolism and folate biosynthesis. Finally, we identified two gene networks significantly influenced by changes in CO2 levels that contain several genes related to cellular replication and transcription. These results demonstrate that changes in atmospheric CO2 under climate change scenarios greatly influence the response of A. flavus to water and temperature when colonizing maize grain.

  10. Embedded real-time operating system micro kernel design

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require a real-time character. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.
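
    The scheduling idea described above, rationing CPU time among tasks by importance and urgency, can be sketched in a few lines. This is an illustrative host-side model in Python, not the 8051 kernel code; the task fields and ranking rule are assumptions.

```python
import heapq

def dispatch(tasks):
    """Pop tasks in scheduling order: lower (importance_rank, deadline) first,
    mimicking a priority scheduler that breaks importance ties by urgency."""
    heap = [(t["importance_rank"], t["deadline"], t["name"]) for t in tasks]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order

tasks = [
    {"name": "log", "importance_rank": 3, "deadline": 5},
    {"name": "sensor_isr", "importance_rank": 1, "deadline": 1},
    {"name": "comms", "importance_rank": 2, "deadline": 2},
]
print(dispatch(tasks))  # ['sensor_isr', 'comms', 'log']
```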

  11. Parameter optimization in the regularized kernel minimum noise fraction transformation

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by this, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.
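
    The 2-4 step refined grid search can be sketched as follows. The criterion here is a stand-in quadratic surrogate for the model SNR, and all bounds, step counts and grid sizes are illustrative assumptions.

```python
import numpy as np

def refined_grid_search(criterion, bounds, steps=3, points=5):
    """Maximize criterion(a, b) by repeated grid searches, shrinking the
    search box around the current optimum at each step."""
    (lo1, hi1), (lo2, hi2) = bounds
    best = None
    for _ in range(steps):
        g1 = np.linspace(lo1, hi1, points)
        g2 = np.linspace(lo2, hi2, points)
        vals = [(criterion(a, b), a, b) for a in g1 for b in g2]
        _, a, b = max(vals)
        # shrink the search box around the current optimum
        w1, w2 = (hi1 - lo1) / points, (hi2 - lo2) / points
        lo1, hi1 = a - w1, a + w1
        lo2, hi2 = b - w2, b + w2
        best = (a, b)
    return best

# stand-in "model SNR" surface peaked at (sigma, mu) = (2.0, 0.5)
snr = lambda s, m: -((s - 2.0) ** 2 + (m - 0.5) ** 2)
s, m = refined_grid_search(snr, [(0.1, 10.0), (0.0, 1.0)])
print(round(s, 1), round(m, 1))
```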

  12. Influence of increasing convolution kernel filtering on plaque imaging with multislice CT using an ex-vivo model of Coronary Angiography

    Cademartiri, Filippo; Mollet, Nico R.; Runza, Giuseppe

    2005-01-01

    Purpose. To assess the variability in attenuation of coronary plaques with multislice CT angiography (MSCT-CA) in an ex-vivo model with varying convolution kernels. Materials and methods. MSCT-CA (Sensation 16, Siemens) was performed in three ex-vivo left coronary arteries after instillation of contrast material solution (Iomeprol 400 mgI/ml, dilution: 1180). The specimens were placed in oil to simulate epicardial fat. Scan parameters: slices 16/0.75 mm, rotation time 375 ms, feed/rotation 3.0 mm, mAs 500, slice thickness 1 mm, and FOV 50 mm. Datasets were reconstructed using 4 different kernels (B30f-smooth, B36f-medium smooth, B46f-medium, and B60f-sharp). Each scan was scored for the presence of plaques. Once a plaque was detected, the operator performed attenuation measurements (HU) in coronary lumen, oil, calcified and soft plaque tissue using the same settings in all datasets. The results were compared with the t-test and correlated with Pearson's test. Results. Overall, 464 measurements were performed. Significant differences (p...

  13. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the observation that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets of protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method produces an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.

  14. Aflatoxin contamination of developing corn kernels.

    Amer, M A

    2005-01-01

    Preharvest infection of corn and its contamination with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. Stage of growth and location of kernels on corn ears were found to be among the important factors in the process of kernel infection with A. flavus and A. parasiticus. The results showed a positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein content were reduced in the case of both pathogens. Shoot system length, seedling fresh weight and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected: their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease. On the other hand, total phenolic compounds increased. Histopathological studies indicated that A. flavus and A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus and A. parasiticus and aflatoxin production.

  15. Modelling sequentially scored item responses

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is
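
    As background, the partial credit model mentioned above assigns category probabilities via cumulative step logits. A minimal sketch follows; the ability value and step difficulties are illustrative assumptions.

```python
import math

def pcm_probs(theta, thresholds):
    """Partial credit model: P(X = k) ∝ exp(Σ_{j≤k} (θ − b_j)), with the
    empty sum for k = 0. `thresholds` are the step difficulties b_j."""
    logits = [0.0]
    for b in thresholds:
        logits.append(logits[-1] + (theta - b))
    z = [math.exp(l) for l in logits]
    total = sum(z)
    return [v / total for v in z]

p = pcm_probs(theta=0.0, thresholds=[-1.0, 1.0])
print([round(x, 3) for x in p])  # probabilities over scores 0, 1, 2; they sum to 1
```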

  16. Realized kernels in practice

    Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger

    2009-01-01

    Realized kernels use high-frequency data to estimate daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock and find a remarkable level of agreement. We identify some features of the high-frequency data which are challenging for realized kernels: local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated...
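
    The estimator can be sketched as a kernel-weighted sum of realized autocovariances. The Parzen weight function and the weighting convention below are one common choice, not necessarily the exact specification (flat-top variants, end-point treatment) used by the authors.

```python
import numpy as np

def parzen(x):
    """Parzen weight function on [0, 1]."""
    x = abs(x)
    if x <= 0.5:
        return 1 - 6 * x**2 + 6 * x**3
    if x <= 1.0:
        return 2 * (1 - x) ** 3
    return 0.0

def realized_kernel(returns, H):
    """Realized kernel sketch: realized variance plus weighted autocovariances,
    K = gamma_0 + sum_{h=1..H} w((h-1)/H) * (gamma_h + gamma_{-h})."""
    r = np.asarray(returns)
    gamma0 = np.sum(r * r)
    acc = gamma0
    for h in range(1, H + 1):
        gh = np.sum(r[h:] * r[:-h])        # h-th realized autocovariance
        acc += parzen((h - 1) / H) * 2 * gh  # gamma_{-h} = gamma_h here
    return acc

r = [0.01] * 5  # toy constant-return series for a hand-checkable value
print(realized_kernel(r, H=2))  # ≈ 0.00145
```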

  17. New numerical approximation of fractional derivative with non-local and non-singular kernel: Application to chaotic models

    Toufik, Mekkaoui; Atangana, Abdon

    2017-10-01

    Recently a new concept of fractional differentiation with a non-local and non-singular kernel was introduced in order to extend the limitations of the conventional Riemann-Liouville and Caputo fractional derivatives. In this paper, a new numerical scheme is developed for the newly established fractional differentiation, and its error analysis is presented in general. The new numerical scheme was applied to solve linear and non-linear fractional differential equations. The method does not need a predictor-corrector to yield an efficient algorithm. The comparison of approximate and exact solutions leaves no doubt that the new numerical scheme is very efficient and converges toward the exact solution very rapidly.

  18. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, always provides 1.0 for self-similarity, and can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than the local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernels, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
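
    The core idea, representing a sequence by the phrases its LZW pass adds to the dictionary and comparing phrase profiles, can be sketched as follows. The cosine normalization here is an illustrative stand-in for the paper's exact kernel definition, chosen only so that self-similarity is 1.0.

```python
import math

def lzw_code_words(s):
    """Return the list of LZW phrases emitted while scanning s."""
    dictionary = {c for c in s}          # start from the observed alphabet
    phrases, w = [], ""
    for c in s:
        if w + c in dictionary:
            w += c
        else:
            phrases.append(w)
            dictionary.add(w + c)
            w = c
    if w:
        phrases.append(w)
    return phrases

def lzw_kernel(a, b):
    """Cosine-normalized inner product of LZW phrase counts, so K(x, x) = 1."""
    ca, cb = {}, {}
    for ph in lzw_code_words(a):
        ca[ph] = ca.get(ph, 0) + 1
    for ph in lzw_code_words(b):
        cb[ph] = cb.get(ph, 0) + 1
    dot = sum(ca[ph] * cb.get(ph, 0) for ph in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)

print(round(lzw_kernel("MKVLLA", "MKVLLA"), 9))  # 1.0 for self-similarity
print(lzw_kernel("MKVLLA", "GGGGGG"))            # disjoint phrase sets score 0.0
```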

  19. An Experimental Study of the Growth of Laser Spark and Electric Spark Ignited Flame Kernels.

    Ho, Chi Ming

    1995-01-01

    Better ignition sources are constantly in demand for enhancing spark ignition in practical applications such as automotive and liquid rocket engines. In response to this practical challenge, the present experimental study was conducted with the major objective of obtaining a better understanding of how spark formation, and hence spark characteristics, affect flame kernel growth. Two laser sparks and one electric spark were studied in air, propane-air, propane-air-nitrogen, methane-air, and methane-oxygen mixtures that were initially at ambient pressure and temperature. The growth of the kernels was monitored by imaging the kernels with shadowgraph systems, and by imaging the planar laser-induced fluorescence of the hydroxyl radicals inside the kernels. Characteristic dimensions and kernel structures were obtained from these images. Because different energy transfer mechanisms are involved in the formation of a laser spark as compared to an electric spark, a laser spark is insensitive to changes in mixture ratio and mixture type, while an electric spark is sensitive to changes in both. The detailed structures of the kernels in air and propane-air mixtures depend primarily on the spark characteristics, but the combustion heat released rapidly in methane-oxygen mixtures significantly modifies the kernel structure. Uneven spark energy distribution causes remarkably asymmetric kernel structure. The breakdown energy of a spark creates a blast wave that shows good agreement with the numerical point blast solution, and a succeeding complex spark-induced flow that agrees reasonably well with a simple puff model. The transient growth rates of the propane-air, propane-air-nitrogen, and methane-air flame kernels can be interpreted in terms of spark effects, flame stretch, and preferential diffusion. For a given mixture, a spark with higher breakdown energy produces a greater and longer-lasting enhancing effect on the kernel growth rate. By comparing the growth

  20. Soft Sensing of Key State Variables in Fermentation Process Based on Relevance Vector Machine with Hybrid Kernel Function

    Xianglin ZHU

    2014-06-01

    To resolve the difficulty of online detection of some important state variables in fermentation processes with traditional instruments, a soft sensing modeling method based on the relevance vector machine (RVM) with a hybrid kernel function is presented. Based on a characteristic analysis of two commonly-used kernel functions, the local Gaussian kernel function and the global polynomial kernel function, a hybrid kernel function combining the merits of both is constructed. To determine optimal parameters of this kernel function, the particle swarm optimization (PSO) algorithm is applied. The proposed modeling method is used to predict the value of cell concentration in the lysine fermentation process. Simulation results show that the presented hybrid-kernel RVM model has better accuracy and performance than the single-kernel RVM model.
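
    The hybrid kernel construction, a convex combination of a local Gaussian kernel and a global polynomial kernel, can be sketched directly. The mixing weight and kernel parameters below are illustrative; the paper tunes them with PSO.

```python
import numpy as np

def hybrid_kernel(x, y, lam=0.7, sigma=1.0, degree=2, c=1.0):
    """Convex mix of a local Gaussian kernel and a global polynomial kernel:
    K = lam * K_gauss + (1 - lam) * K_poly."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    gaussian = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))   # local
    polynomial = (x @ y + c) ** degree                            # global
    return lam * gaussian + (1 - lam) * polynomial

print(hybrid_kernel([1.0, 0.0], [1.0, 0.0]))  # 0.7*1 + 0.3*(1+1)^2 = 1.9
```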

  1. Kernel learning at the first level of inference.

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Collision kernels in the eikonal approximation for Lennard-Jones interaction potential

    Zielinska, S.

    1985-03-01

    Velocity changing collisions are conveniently described by collision kernels. These kernels depend on an interaction potential, and there is a necessity for evaluating them for realistic interatomic potentials. Using the collision kernels, we are able to investigate the redistribution of atomic populations caused by laser light and velocity changing collisions. In this paper we present a method of evaluating the collision kernels in the eikonal approximation. We discuss the influence of the potential parameters R_o^i and epsilon_o^i on the kernel width for a given atomic state. It turns out that, unlike the collision kernel for the hard sphere model of scattering, the Lennard-Jones kernel is not so sensitive to changes of R_o^i. Contrary to the general tendency of approximating collision kernels by a Gaussian curve, kernels for the Lennard-Jones potential do not exhibit such behaviour. (author)

  3. Multivariate realised kernels

    Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...

  4. Kernel bundle EPDiff

    Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...

  5. Kernel structures for Clouds

    Spafford, Eugene H.; Mckendry, Martin S.

    1986-01-01

    An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.

  6. Discrete non-parametric kernel estimation for global sensitivity analysis

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. The discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of the ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  7. Analog forecasting with dynamics-adapted kernels

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
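
    The key step, replacing a single analog with a kernel-weighted ensemble of analogs, can be sketched on a toy dynamical system. The Gaussian similarity kernel, its bandwidth, and the ensemble size are illustrative choices, not the dynamics-adapted kernels of the paper.

```python
import numpy as np

def analog_forecast(history, x0, eps=0.5, k=3):
    """Forecast the successor of state x0 by averaging the successors of the
    k nearest historical analogs, weighted by a Gaussian similarity kernel."""
    past, successors = history[:-1], history[1:]
    d2 = np.sum((past - x0) ** 2, axis=1)
    idx = np.argsort(d2)[:k]              # k nearest analogs
    w = np.exp(-d2[idx] / eps)
    w /= w.sum()
    return w @ successors[idx]            # weighted ensemble forecast

# historical record: a noiseless rotation on the unit circle
t = np.linspace(0, 20, 400)
history = np.c_[np.cos(t), np.sin(t)]
pred = analog_forecast(history, np.array([1.0, 0.0]))
print(np.round(pred, 2))  # roughly one time step ahead along the circle
```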

  8. Genome-wide Association Analysis of Kernel Weight in Hard Winter Wheat

    Wheat kernel weight is an important and heritable component of wheat grain yield and a key predictor of flour extraction. Genome-wide association analysis was conducted to identify genomic regions associated with kernel weight and kernel weight environmental response in 8 trials of 299 hard winter ...

  9. Dose point kernels for beta-emitting radioisotopes

    Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.

    1986-01-01

    Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. That treatment neglects fluctuations in energy deposition, an effect which has later been incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed. This provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been obtained through an extension of the Bethe-Bacher approximation, and tested against the exact expression. Simplified expressions for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for 32P with experimental data indicates good agreement, with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables

  10. Approximation of the breast height diameter distribution of two-cohort stands by mixture models. III. Kernel density estimators vs mixture models

    Rafal Podlaski; Francis A. Roesch

    2014-01-01

    Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...

  11. Viscosity kernel of molecular fluids

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

    The density, temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have by contrast less impact on the overall normalized shape. Functional forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3-6 atomic diameters, which means...

  12. Capturing option anomalies with a variance-dependent pricing kernel

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  13. Bayesian Kernel Mixtures for Counts.

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
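    A minimal sketch of the rounded-kernel idea for counts: a latent Gaussian draw is thresholded into a count, and mixing several such components gives a flexible count distribution — one that, unlike a Poisson mixture, can have variance below the mean. The threshold convention and mixture parameters here are assumptions for illustration.

```python
import math

def norm_cdf(x, mu, sigma):
    """Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def rounded_gaussian_pmf(j, mu, sigma):
    """P(Y = j) from rounding a latent N(mu, sigma^2) draw into a count.

    Threshold convention (an assumption for this sketch):
    Y = 0 if y* < 0, and Y = j if j - 1 <= y* < j for j >= 1.
    """
    if j == 0:
        return norm_cdf(0.0, mu, sigma)
    return norm_cdf(float(j), mu, sigma) - norm_cdf(float(j - 1), mu, sigma)

def mixture_pmf(j, weights, mus, sigmas):
    """Finite mixture of rounded Gaussian kernels."""
    return sum(w * rounded_gaussian_pmf(j, m, s)
               for w, m, s in zip(weights, mus, sigmas))

# Two-component toy mixture over counts 0..29.
pmf = [mixture_pmf(j, [0.6, 0.4], [2.0, 7.0], [0.5, 1.0]) for j in range(30)]
```

    A tight component (sigma well below sqrt(mu)) is exactly the underdispersed case a Poisson mixture cannot represent.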

  14. Variable Kernel Density Estimation

    Terrell, George R.; Scott, David W.

    1992-01-01

    We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
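    The second approach mentioned above — varying the window width by point of sample observation — can be sketched as a sample-point (Abramson-style) estimator. The pilot bandwidth and sensitivity exponent below are illustrative choices.

```python
import numpy as np

def adaptive_kde(x_grid, samples, pilot_h=0.5, alpha=0.5):
    """Sample-point (variable-bandwidth) Gaussian KDE.

    Each observation gets its own bandwidth
    h_i = pilot_h * (pilot(x_i) / g)^(-alpha),
    where pilot() is a fixed-bandwidth estimate at the sample points and
    g its geometric mean (Abramson's alpha = 1/2 convention).
    """
    n = len(samples)
    # Pilot (fixed-bandwidth) density estimate at the sample points.
    d = samples[:, None] - samples[None, :]
    pilot = np.exp(-0.5 * (d / pilot_h) ** 2).sum(axis=1) / (n * pilot_h * np.sqrt(2 * np.pi))
    g = np.exp(np.mean(np.log(pilot)))
    h_i = pilot_h * (pilot / g) ** (-alpha)
    # Variable-bandwidth estimate on the evaluation grid.
    u = (x_grid[:, None] - samples[None, :]) / h_i[None, :]
    return (np.exp(-0.5 * u ** 2) / (h_i * np.sqrt(2 * np.pi))).sum(axis=1) / n
```

    Observations in sparse regions receive wider windows, which smooths the tails without oversmoothing the mode.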

  15. Steerability of Hermite Kernel

    Yang, Bo; Flusser, Jan; Suk, Tomáš

    2013-01-01

    Roč. 27, č. 4 (2013), 1354006-1-1354006-25 ISSN 0218-0014 R&D Projects: GA ČR GAP103/11/1552 Institutional support: RVO:67985556 Keywords: Hermite polynomials * Hermite kernel * steerability * adaptive filtering Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.558, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/yang-0394387.pdf

  16. Intelligent Design of Metal Oxide Gas Sensor Arrays Using Reciprocal Kernel Support Vector Regression

    Dougherty, Andrew W.

    Metal oxides are a staple of the sensor industry. The combination of their sensitivity to a number of gases and the electrical nature of their sensing mechanism makes them particularly attractive in solid state devices. The high temperature stability of the ceramic material also makes them ideal for detecting combustion byproducts where exhaust temperatures can be high. However, problems do exist with metal oxide sensors. They are not very selective, as they all tend to be sensitive to a number of reduction and oxidation reactions on the oxide's surface. This makes arrays with large numbers of sensors interesting to study as a method for introducing orthogonality to the system. Also, the sensors tend to suffer from long term drift for a number of reasons. In this thesis I will develop a system for intelligently modeling metal oxide sensors and determining their suitability for use in large arrays designed to analyze exhaust gas streams. It will introduce prior knowledge of the metal oxide sensors' response mechanisms in order to produce a response function for each sensor from sparse training data. The system will use the same technique to model and remove any long term drift from the sensor response. It will also provide an efficient means for determining the orthogonality of the sensors to determine whether they are useful in gas sensing arrays. The system is based on least squares support vector regression using the reciprocal kernel. The reciprocal kernel is introduced along with a method of optimizing the free parameters of the reciprocal kernel support vector machine. The reciprocal kernel is shown to be simpler and to perform better than an earlier kernel, the modified reciprocal kernel. Least squares support vector regression is chosen as it uses all of the training points, and an emphasis was placed throughout this research on extracting the maximum information from very sparse data. The reciprocal kernel is shown to be effective in modeling the sensor...

  17. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Hailun Wang

    2017-01-01

    Full Text Available Support vector regression is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of a mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined as the state vector, so that the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function's weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
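    A hedged sketch of the mixed-kernel idea: a convex combination of a local RBF kernel and a global polynomial kernel. Kernel ridge regression stands in for support vector regression here so the example stays closed-form and self-contained; the weight, kernel parameters, and data are illustrative, not the filter-estimated values of the paper.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly(X, Y, degree=2, c=1.0):
    """Inhomogeneous polynomial kernel matrix."""
    return (X @ Y.T + c) ** degree

def mixed_kernel(X, Y, w=0.7, gamma=1.0, degree=2):
    """Convex combination of a local (RBF) and a global (polynomial) kernel."""
    return w * rbf(X, Y, gamma) + (1.0 - w) * poly(X, Y, degree)

# Kernel ridge regression with the mixed kernel (stand-in for SVR, which
# would require a QP solver).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(40)
lam = 1e-3
K = mixed_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
pred = mixed_kernel(np.array([[0.2]]), X) @ alpha
```

    Any convex combination of valid kernels is again a valid (positive semi-definite) kernel, which is what makes the fusion coefficient a legitimate tunable parameter.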

  18. Integration of multi-criteria and nearest neighbour analysis with kernel density functions for improving sinkhole susceptibility models: the case study of Enemonzo (NE Italy)

    Chiara Calligaris

    2017-06-01

    Full Text Available The significance of intra-mountain valleys to infrastructure and human settlements and the need to mitigate the geo-hazards affecting these assets are fundamental to the economy of Italian alpine regions. Therefore, there is a real need to recognize and assess the geo-hazards affecting them. This study proposes the use of GIS-based analyses to construct a sinkhole susceptibility model based on conditioning factors such as land use, geomorphology, thickness of shallow deposits, distance to drainage network and distance to faults. Thirty-two models, applied to a test site (Enemonzo municipality, NE Italy), were produced using a method based on the Likelihood Ratio (λ) function, nine with only one variable and 23 applying different combinations. The sinkhole susceptibility model with the best forecast performance, with an Area Under the Prediction Rate Curve (AUPRC) of 0.88, was that combining the following parameters: Nearest Sinkhole Distance (NSD), land use and thickness of the surficial deposits. The introduction of NSD as a continuous variable in the computation represents an important upgrade in the prediction capability of the model. Additionally, the model was refined using a kernel density estimation that produced a significant improvement in the forecast performance.

  19. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  20. 3-D waveform tomography sensitivity kernels for anisotropic media

    Djebbi, Ramzi

    2014-01-01

    The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to efficiently handle the expensive computational cost for 3-D anisotropic models. Ray tracing also provides the ray direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed maximum sensitivity for diving waves, which makes them a relevant choice of parameters in wave equation tomography. The δ parameter kernel showed zero sensitivity; it can therefore serve as a secondary parameter to fit the amplitude in acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration velocity analysis based kernels are introduced to fix the depth ambiguity with reflections and compute sensitivity maps in the deeper parts of the model.

  1. A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control

    Li, Lin; Brockmeier, Austin J.; Choi, John S.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2014-01-01

    Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings including spike trains and local field potentials (LFPs) brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity in data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates the model integration of multiscale neural activity. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569
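    The core construction — a joint kernel on heterogeneous multiscale inputs built as the product of per-modality kernels — can be sketched as follows. The feature matrices below are random stand-ins for spike and LFP features; the product of Gram matrices is again a valid (positive semi-definite) Gram matrix by the Schur product theorem.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """RBF Gram matrix over the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Tensor-product kernel over heterogeneous inputs: the joint kernel on
# (spike-feature, LFP-feature) pairs is the elementwise product of the
# per-modality Gram matrices. The feature choices are illustrative.
rng = np.random.default_rng(1)
spike_feats = rng.standard_normal((5, 3))   # stand-in for binned spike features
lfp_feats = rng.standard_normal((5, 4))     # stand-in for LFP band powers
K_joint = rbf_gram(spike_feats) * rbf_gram(lfp_feats)
```

    Any kernel machine (e.g. a kernel adaptive filter, as in the paper) can then consume K_joint without caring that the two modalities live on different scales.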

  2. System identification via sparse multiple kernel-based regularization using sequential convex optimization techniques

    Chen, Tianshi; Andersen, Martin Skovgaard; Ljung, Lennart

    2014-01-01

    Model estimation and structure detection with short data records are two issues that receive increasing interest in system identification. In this paper, a multiple kernel-based regularization method is proposed to handle those issues. Multiple kernels are conic combinations of fixed kernels...

  3. Interaction between UO2 kernel and pyrocarbon coating in irradiated and unirradiated HTR fuel particles

    Drago, A.; Klersy, R.; Simoni, O.; Schrader, K.H.

    1975-08-01

    Experimental observations on unidirectional UO2 kernel migration in TRISO-type coated particle fuels are reported, together with an analysis of the experimental results on the basis of data and models from the literature. The stoichiometric composition of the kernel is considered the main parameter that, in association with a temperature gradient, controls the unidirectional kernel migration.

  4. Anatomically-aided PET reconstruction using the kernel method.

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method, proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction on a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region-of-interest quantification. Additionally, the kernel method is applied to a 3D patient data set, where it results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
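    A toy sketch of the kernelized representation: the image is written as x = Kα, with K built from anatomical features, and the ML-EM multiplicative update is applied to the coefficients α. The system matrix, anatomical features, and problem sizes are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_det = 16, 24
A = rng.uniform(0.0, 1.0, (n_det, n_pix))   # toy system matrix (assumption)
anat = rng.standard_normal((n_pix, 2))      # anatomical feature per pixel
d2 = ((anat[:, None, :] - anat[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * d2)                       # kernel matrix from anatomy
x_true = rng.uniform(0.5, 2.0, n_pix)
y = rng.poisson(A @ x_true).astype(float)   # simulated sinogram counts

# Kernelized ML-EM: the image is x = K @ alpha; the EM update acts on alpha.
alpha = np.ones(n_pix)
sens = K.T @ (A.T @ np.ones(n_det))         # sensitivity in coefficient space
for _ in range(50):
    proj = A @ (K @ alpha)
    alpha *= (K.T @ (A.T @ (y / np.maximum(proj, 1e-12)))) / sens
x_hat = K @ alpha
```

    The multiplicative update keeps the reconstruction nonnegative and, as with ordinary ML-EM, matches the total projected counts to the measured counts after the first iteration.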

  5. Randomized Item Response Theory Models

    Fox, Gerardus J.A.

    2005-01-01

    The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique, because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by...

  6. 7 CFR 981.7 - Edible kernel.

    2010-01-01

    § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976]

  7. 7 CFR 981.408 - Inedible kernel.

    2010-01-01

    § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  8. 7 CFR 981.8 - Inedible kernel.

    2010-01-01

    § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  9. Multivariate realised kernels

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite, is robust to measurement error of certain types, and can also handle non-synchronous trading. It is the first estimator which has these three properties, all of which are essential for empirical work in this area. We derive the large sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results to previous work which has used...
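    The weighted-autocovariance construction behind realised kernels can be sketched as K(X) = Σ_h k(h/(H+1)) Γ_h, with Γ_h the lag-h autocovariance of the high-frequency return vectors and k a Parzen weight function. This is a bare-bones illustration; it omits the end-point treatment and bandwidth selection of the paper.

```python
import numpy as np

def parzen(x):
    """Parzen weight function on [-1, 1]."""
    x = abs(x)
    if x <= 0.5:
        return 1.0 - 6.0 * x**2 + 6.0 * x**3
    if x <= 1.0:
        return 2.0 * (1.0 - x) ** 3
    return 0.0

def realised_kernel(returns, H):
    """K(X) = sum_h k(h/(H+1)) * Gamma_h over a (T, d) matrix of returns."""
    T, d = returns.shape
    K = np.zeros((d, d))
    for h in range(-H, H + 1):
        w = parzen(h / (H + 1))
        if h >= 0:
            g = returns[h:].T @ returns[:T - h]   # Gamma_h
        else:
            g = returns[:T + h].T @ returns[-h:]  # Gamma_h = Gamma_{-h}'
        K += w * g
    return K
```

    With H = 0 the estimator collapses to the plain realised covariance; increasing H down-weights the noisy high-order autocovariances.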

  10. Clustering via Kernel Decomposition

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

    Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods, as is demonstrated on a number of diverse data sets.

  11. Covariant Spectator Theory of heavy–light and heavy mesons and the predictive power of covariant interaction kernels

    Leitão, Sofia, E-mail: sofia.leitao@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Stadler, Alfred, E-mail: stadler@uevora.pt [Departamento de Física, Universidade de Évora, 7000-671 Évora (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Peña, M.T., E-mail: teresa.pena@tecnico.ulisboa.pt [Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Biernat, Elmar P., E-mail: elmar.biernat@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2017-01-10

    The Covariant Spectator Theory (CST) is used to calculate the mass spectrum and vertex functions of heavy–light and heavy mesons in Minkowski space. The covariant kernel contains Lorentz scalar, pseudoscalar, and vector contributions. The numerical calculations are performed in momentum space, where special care is taken to treat the strong singularities present in the confining kernel. The observed meson spectrum is very well reproduced after fitting a small number of model parameters. Remarkably, a fit to a few pseudoscalar meson states only, which are insensitive to spin–orbit and tensor forces and do not allow one to separate the spin–spin from the central interaction, leads to essentially the same model parameters as a more general fit. This demonstrates that the covariance of the chosen interaction kernel is responsible for the very accurate prediction of the spin-dependent quark–antiquark interactions.

  12. Global Polynomial Kernel Hazard Estimation

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  13. Structural Damage Detection using Frequency Response Function Index and Surrogate Model Based on Optimized Extreme Learning Machine Algorithm

    R. Ghiasi

    2017-09-01

    Full Text Available Utilizing surrogate models based on artificial intelligence methods for detecting structural damage has attracted the attention of many researchers in recent decades. In this study, a new kernel based on the Littlewood-Paley Wavelet (LPW) is proposed for the Extreme Learning Machine (ELM) algorithm to improve the accuracy of detecting multiple damages in structural systems. ELM is used as a metamodel (surrogate model) of exact finite element analysis of structures in order to efficiently reduce the computational cost of the updating process. In the proposed two-step method, first a damage index based on the Frequency Response Function (FRF) of the structure is used to identify the location of damages. In the second step, the severity of damage in the identified elements is detected using ELM. In order to evaluate the efficacy of the proposed kernel, the results obtained with it were compared with those of other kernels proposed for ELM, as well as with the Least Square Support Vector Machine algorithm. The solved numerical problems indicated that the accuracy of the ELM algorithm in detecting structural damages increases drastically when the LPW kernel is used.
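    A minimal ELM sketch of the surrogate idea: a random hidden layer followed by a single least-squares solve for the output weights, which is what makes ELM cheap enough to serve as a metamodel inside an updating loop. A tanh activation is used here instead of the Littlewood-Paley wavelet kernel of the study, and the response surface is a toy function.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]       # toy response surface (assumption)

# Extreme Learning Machine: random hidden layer, output weights by least squares.
n_hidden = 60
W = rng.standard_normal((2, n_hidden))      # random, never trained
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                      # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

def elm_predict(X_new):
    return np.tanh(X_new @ W + b) @ beta
```

    Because only beta is fitted, retraining the surrogate after each model update costs one linear solve rather than an iterative optimization.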

  14. Robotic intelligence kernel

    Bruemmer, David J [Idaho Falls, ID

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between operator intervention and robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize operator intervention and minimize robot initiative, and an autonomous mode configured to minimize operator intervention and maximize robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.

  15. Deep kernel learning method for SAR image target recognition

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  16. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections and deep learning extensions as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKM are obtained by coupling the RKMs. The method is illustrated for deep RKM, consisting of three levels with a least squares support vector machine regression level and two kernel PCA levels. In its primal form also deep feedforward neural networks can be trained within this framework.

  17. Training Lp norm multiple kernel learning in the primal.

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved with the alternating optimization method, where one alternately solves SVMs in the dual and updates the kernel weights. Since dual and primal optimization achieve the same aim, it is valuable to explore how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal, where we resort to the alternating optimization method: one cycle solves SVMs in the primal using the preconditioned conjugate gradient method, and the other cycle learns the kernel weights. It is interesting to note that the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited to the manifold regularization framework in the primal, since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we carry out a theoretical analysis of multiple kernel learning in the primal in terms of the empirical Rademacher complexity, and find that optimizing the empirical Rademacher complexity yields one type of kernel weights. Experiments on several datasets demonstrate the feasibility and effectiveness of the proposed method.

  18. Radiogenomics and radiotherapy response modeling

    El Naqa, Issam; Kerns, Sarah L.; Coates, James; Luo, Yi; Speers, Corey; West, Catharine M. L.; Rosenstein, Barry S.; Ten Haken, Randall K.

    2017-08-01

    Advances in patient-specific information and biotechnology have contributed to a new era of computational medicine. Radiogenomics has emerged as a new field that investigates the role of genetics in treatment response to radiation therapy. Radiation oncology is currently attempting to embrace these recent advances and add to its rich history by maintaining its prominent role as a quantitative leader in oncologic response modeling. Here, we provide an overview of radiogenomics starting with genotyping, data aggregation, and application of different modeling approaches based on modifying traditional radiobiological methods or application of advanced machine learning techniques. We highlight the current status and potential for this new field to reshape the landscape of outcome modeling in radiotherapy and drive future advances in computational oncology.

  19. Analytic scattering kernels for neutron thermalization studies

    Sears, V.F.

    1990-01-01

    Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H2/D2 mixtures. The total scattering cross sections are also calculated and compared with available experimental results.

  20. Pavement Aging Model by Response Surface Modeling

    Manzano-Ramírez A.

    2011-10-01

    Full Text Available In this work, surface course aging was modeled by Response Surface Methodology (RSM). The Marshall specimens were placed in a conventional oven for time and temperature conditions established on the basis of the environmental factors of the region where the surface course is constructed with AC-20 from the Ing. Antonio M. Amor refinery. Volatilized material (VM), load resistance increment (ΔL), and flow resistance increment (ΔF) models were developed by the RSM. Cylindrical specimens with real aging were extracted from the surface course pilot to evaluate the error of the models. The VM model was adequate; in contrast, the ΔL and ΔF models were almost adequate, with an error of 20% that was associated with other environmental factors not considered at the beginning of the research.

  1. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
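    The continuization step discussed above can be sketched by smoothing a discrete score distribution with an Epanechnikov kernel. This simplified version omits the mean- and variance-preserving rescaling used in kernel equating proper, and the toy score distribution is an assumption.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, support [-1, 1], integrates to 1."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def continuize(x, scores, probs, h):
    """Continuized density f(x) = sum_j p_j * k((x - x_j) / h) / h.

    Simplified sketch: kernel equating proper also rescales so the
    continuized distribution keeps the discrete mean and variance.
    """
    u = (x[:, None] - scores[None, :]) / h
    return (probs[None, :] * epanechnikov(u) / h).sum(axis=1)

scores = np.arange(0, 11)             # 0..10 point test
probs = np.ones(11) / 11.0            # toy uniform score distribution
x = np.linspace(-2.0, 12.0, 1401)
f = continuize(x, scores, probs, h=0.8)
```

    The compact support of the Epanechnikov kernel is precisely why it is attractive near the score boundaries: unlike the Gaussian, it places no mass far outside the score range.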

  2. 7 CFR 981.9 - Kernel weight.

    2010-01-01

    § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  3. 7 CFR 51.2295 - Half kernel.

    2010-01-01

    § 51.2295 Half kernel (United States Standards for Shelled English Walnuts). Half kernel means the separated half of a kernel with not more than one-eighth broken off.

  4. A kernel version of spatial factor analysis

    Nielsen, Allan Aasbjerg

    2009-01-01

    ...Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
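    Kernel PCA as described above — an implicit nonlinear mapping followed by linear analysis in feature space — reduces to an eigendecomposition of the doubly centered Gram matrix. The RBF kernel and parameters below are illustrative.

```python
import numpy as np

def kernel_pca(X, gamma=1.0, n_components=2):
    """Kernel PCA with an RBF kernel: eigendecompose the centered Gram matrix."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                           # double-centering in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Projections of the training points onto the leading principal axes.
    return vecs * np.sqrt(np.maximum(vals, 0.0))
```

    The centering matrix J plays the role of subtracting the (inaccessible) feature-space mean, so no explicit mapping is ever computed.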

  5. kernel oil by lipolytic organisms

    USER

    2010-08-02

    Aug 2, 2010 ... Rancidity of extracted cashew oil was observed with cashew kernel stored at 70, 80 and 90% .... method of American Oil Chemist Society AOCS (1978) using glacial ..... changes occur and volatile products are formed that are.

  6. Benchmarking CCMI models' top-of-atmosphere flux in the 9.6-µm ozone band using AURA TES Instantaneous Radiative Kernel

    Kuai, L.; Bowman, K. W.; Worden, H. M.; Paulot, F.; Paynter, D.; Oman, L.; Strode, S. A.; Rozanov, E.; Stenke, A.; Revell, L. E.; Plummer, D. A.

    2017-12-01

    Estimates of ozone radiative forcing (RF) from chemical-climate models range widely, from +0.2 to +0.6 Wm-2, and the reason has never been well understood. Since ozone absorption in the 9.6 μm band contributes 97% of the O3 longwave RF, the variation of outgoing longwave radiation (OLR) due to ozone is dominated by this band. The TOA flux over the 9.6 µm ozone band observed by the Tropospheric Emission Spectrometer (TES) shows unique spatial patterns in its global distribution. In addition, the TOA fluxes over the 9.6 µm ozone band simulated by different models have never been evaluated against observations. Model biases in TOA flux arise primarily from biases in temperature, water vapor and ozone. Furthermore, the sensitivity of TOA flux to tropospheric ozone (the instantaneous radiative kernel, IRK) may also be affected by these biases (Kuai et al., 2017). The bias in TOA flux eventually propagates into model calculations of ozone RF and causes divergence among model predictions of future climate. In this study, we applied the observation-based IRK product from AURA TES to attribute the CCMI model bias in TOA flux over the 9.6 µm ozone band to ozone, water vapor, air temperature, and surface temperature. The comparisons of the three CCMI models (AM3, SOCOL3 and GEOSCCM) to TES observations suggest that 1) all models underestimate the TOA flux in the tropics and subtropics. 2) The TOA flux biases of AM3 and GEOSCCM are comparable (-0.2 to -0.3 W/m2), whereas the bias of the relatively young model SOCOL3 is larger (-0.4 to -0.6 W/m2). 3) The contributions from surface temperature are similarly moderate (-0.2 W/m2). 4) The contribution of ozone is largest for SOCOL3 (-0.3 W/m2), smallest for GEOSCCM (less than 0.1 W/m2) and moderate for AM3 (-0.2 W/m2). 5) Overall, the contributions from atmospheric temperature are all small (less than 0.1 W/m2). 6) The contribution of water vapor is negative and small for both SOCOL3 and GEOSCCM (0.1 W/m2) but large and positive for AM3 (0

  7. An Ensemble Approach to Building Mercer Kernels with Prior Information

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite, dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
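The core construction behind such density-based kernels can be sketched as follows: fit a mixture model, then define the kernel between two points from their posterior membership probabilities. The result is positive semi-definite by construction, since it is an inner product of responsibility vectors. This is a minimal single-model sketch (the paper's Mixture Density Mercer Kernels ensemble over many bootstrapped mixture fits; the two-component Gaussian mixture and sample points here are made up):

```python
import numpy as np

def responsibilities(x, means, sigmas, weights):
    # posterior membership P(k | x) under a 1-D Gaussian mixture
    # (mixture parameters are assumed to have been fitted already)
    dens = weights * np.exp(-0.5 * ((x[:, None] - means) / sigmas) ** 2) / sigmas
    return dens / dens.sum(axis=1, keepdims=True)

def mixture_density_kernel(R1, R2):
    # K(x, y) = sum_k P(k|x) P(k|y): an inner product of responsibility
    # vectors, hence symmetric positive semi-definite (a valid Mercer kernel)
    return R1 @ R2.T

# made-up two-component mixture and sample points
means = np.array([-2.0, 2.0])
sigmas = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])
x = np.array([-2.5, -1.8, 1.9, 2.4])
R = responsibilities(x, means, sigmas, weights)
K = mixture_density_kernel(R, R)
```

Points that fall in the same mixture component get kernel values near one, points in different components near zero, which is how the fitted density encodes prior structure in the kernel.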

  8. MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX

    Brooks, E.D. III

    1988-01-01

    1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every cpu instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel

  9. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method

  10. Intra-individual gait patterns across different time-scales as revealed by means of a supervised learning model using kernel-based discriminant regression.

    Fabian Horst

    Full Text Available Traditionally, gait analysis has been centered on the idea of average behavior and normality. On one hand, clinical diagnoses and therapeutic interventions typically assume that average gait patterns remain constant over time. On the other hand, it is well known that all our movements are accompanied by a certain amount of variability, which does not allow us to make two identical steps. The purpose of this study was to examine changes in the intra-individual gait patterns across different time-scales (i.e., tens-of-mins, tens-of-hours). Nine healthy subjects performed 15 gait trials at a self-selected speed on 6 sessions within one day (duration between two subsequent sessions from 10 to 90 mins). For each trial, time-continuous ground reaction forces and lower body joint angles were measured. A supervised learning model using a kernel-based discriminant regression was applied for classifying sessions within individual gait patterns. Discernable characteristics of intra-individual gait patterns could be distinguished between repeated sessions by classification rates of 67.8 ± 8.8% and 86.3 ± 7.9% for the six-session-classification of ground reaction forces and lower body joint angles, respectively. Furthermore, the one-on-one-classification showed that increasing classification rates go along with increasing time durations between two sessions and indicate that changes of gait patterns appear at different time-scales. Discernable characteristics between repeated sessions indicate continuous intrinsic changes in intra-individual gait patterns and suggest a predominant role of deterministic processes in human motor control and learning. Natural changes of gait patterns without any externally induced injury or intervention may reflect continuous adaptations of the motor system over several time-scales.
    Accordingly, the modelling of walking by means of average gait patterns that are assumed to be near constant over time needs to be reconsidered in the

  11. Intra-individual gait patterns across different time-scales as revealed by means of a supervised learning model using kernel-based discriminant regression.

    Horst, Fabian; Eekhoff, Alexander; Newell, Karl M; Schöllhorn, Wolfgang I

    2017-01-01

    Traditionally, gait analysis has been centered on the idea of average behavior and normality. On one hand, clinical diagnoses and therapeutic interventions typically assume that average gait patterns remain constant over time. On the other hand, it is well known that all our movements are accompanied by a certain amount of variability, which does not allow us to make two identical steps. The purpose of this study was to examine changes in the intra-individual gait patterns across different time-scales (i.e., tens-of-mins, tens-of-hours). Nine healthy subjects performed 15 gait trials at a self-selected speed on 6 sessions within one day (duration between two subsequent sessions from 10 to 90 mins). For each trial, time-continuous ground reaction forces and lower body joint angles were measured. A supervised learning model using a kernel-based discriminant regression was applied for classifying sessions within individual gait patterns. Discernable characteristics of intra-individual gait patterns could be distinguished between repeated sessions by classification rates of 67.8 ± 8.8% and 86.3 ± 7.9% for the six-session-classification of ground reaction forces and lower body joint angles, respectively. Furthermore, the one-on-one-classification showed that increasing classification rates go along with increasing time durations between two sessions and indicate that changes of gait patterns appear at different time-scales. Discernable characteristics between repeated sessions indicate continuous intrinsic changes in intra-individual gait patterns and suggest a predominant role of deterministic processes in human motor control and learning. Natural changes of gait patterns without any externally induced injury or intervention may reflect continuous adaptations of the motor system over several time-scales. Accordingly, the modelling of walking by means of average gait patterns that are assumed to be near constant over time needs to be reconsidered in the context of

  12. Influence Function and Robust Variant of Kernel Canonical Correlation Analysis

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2017-01-01

    Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...

  13. Kernel versions of some orthogonal transformations

    Nielsen, Allan Aasbjerg

    Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced...... by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel...... function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...
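The dual (Q-mode) formulation described above can be sketched in a few lines: every computation goes through the Gram matrix of kernel values, which is centered in feature space and eigendecomposed, so the nonlinear mapping itself never appears explicitly. A minimal kernel PCA sketch (the RBF kernel choice, its gamma, and the toy data are assumptions, not taken from the paper):

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Gram matrix of inner products in feature space, via the kernel trick
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def kernel_pca(K, n_components=2):
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering in feature space
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    # score of each training point on each kernel principal axis
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))                 # toy data
Z = kernel_pca(rbf_kernel(X))
```

Swapping `rbf_kernel` for the identity Gram matrix `X @ X.T` recovers ordinary (linear) PCA scores, which is exactly the sense in which these are kernel "versions" of the classical transformations.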

  14. An Approximate Approach to Automatic Kernel Selection.

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  15. Resummed memory kernels in generalized system-bath master equations

    Mavros, Michael G.; Van Voorhis, Troy

    2014-01-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics
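The idea of resumming a truncated perturbation series can be illustrated with the simplest case, a [1/1] Padé approximant built from the first three series coefficients. This toy example (not the spin-boson memory kernels of the paper) shows how resummation can recover behavior the bare truncated series misses entirely:

```python
def pade_1_1(c0, c1, c2):
    # [1/1] Padé approximant matching c0 + c1*x + c2*x**2 + O(x**3):
    # P(x) = (c0 + (c1 - c0*c2/c1)*x) / (1 - (c2/c1)*x)
    b1 = c2 / c1
    a1 = c1 - c0 * b1
    return lambda x: (c0 + a1 * x) / (1.0 - b1 * x)

# geometric series 1/(1-x) = 1 + x + x**2 + ...; the [1/1] Padé resums it exactly
p = pade_1_1(1.0, 1.0, 1.0)
truncated = lambda x: 1.0 + x + x**2
```

At x = 0.9 the truncated series gives 2.71 while the Padé form recovers the exact value 10. Note, as the abstract warns, that a Padé resummation can also introduce spurious singularities (here a pole at x = 1), which is the motivation for the non-divergent exponential (Landau-Zener) resummation.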

  16. Integral equations with contrasting kernels

    Theodore Burton

    2008-01-01

    Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well-behaved function.
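A direct numerical experiment makes the comparison of the two kernels concrete: discretize $x(t)=a(t)-\int_0^t C(t,s)x(s)\,ds$ with the trapezoidal rule and march forward in $t$. A sketch under assumed choices of forcing $a$, horizon, and step count:

```python
import numpy as np

def solve_volterra(a, C, T=5.0, n=500):
    # x(t) = a(t) - int_0^t C(t,s) x(s) ds, trapezoidal rule, marching in t
    t = np.linspace(0.0, T, n + 1)
    h = T / n
    x = np.empty(n + 1)
    x[0] = a(t[0])
    for i in range(1, n + 1):
        # known part of the trapezoid sum (weight h/2 at s=0, h at interior nodes)
        s = h * (0.5 * C(t[i], t[0]) * x[0] + np.dot(C(t[i], t[1:i]), x[1:i]))
        # x_i also appears inside the integral with weight h/2: solve for it
        x[i] = (a(t[i]) - s) / (1.0 + 0.5 * h * C(t[i], t[i]))
    return t, x

# the two contrasting kernels from the abstract, with an assumed forcing a(t)
C_star = lambda t, s: np.log(np.e + (t - s))
D_star = lambda t, s: 1.0 / (1.0 + (t - s))
a = lambda t: np.cos(t)

t, x1 = solve_volterra(a, C_star)
_, x2 = solve_volterra(a, D_star)
```

With a modest forcing such as cos(t), the two solution curves can then be plotted side by side to visualize the "largely indistinguishable" behavior the abstract describes.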

  17. Kernel learning algorithms for face recognition

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. It also focuses on the theoretical derivation, the system framework and experiments involving kernel-based face recognition. Included within are algorithms of kernel-based face recognition, and also the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new

  18. Modelling of HTR (High Temperature Reactor Pebble-Bed 10 MW to Determine Criticality as A Variations of Enrichment and Radius of the Fuel (Kernel With the Monte Carlo Code MCNP4C

    Hammam Oktajianto

    2014-12-01

    Full Text Available Gas-cooled nuclear reactors are Generation IV reactors which have been receiving significant attention due to many desired characteristics such as inherent safety, modularity, relatively low cost, short construction period, and easy financing. The high temperature reactor (HTR) pebble-bed, one type of gas-cooled reactor concept, is getting attention. In HTR pebble-bed design, the radius and enrichment of the fuel kernel are the key parameters that can be chosen freely to reach the desired value of criticality. This paper models the HTR pebble-bed 10 MW and determines the effective enrichments and radii of the fuel kernel that yield a critical reactor. The TRISO coated fuel particles were modelled explicitly and distributed in the fuelled region of the fuel pebbles using a simple-cubic (SC) lattice. The pebble-bed balls and moderator balls were distributed in the core zone using a body-centred cubic lattice, assuming fresh fuel with enrichments of 7-17% in 1% steps and fuel kernel radii of 175-300 µm in 25 µm steps. The geometrical model of the full reactor was obtained using the lattice and universe facilities provided by MCNP4C. The details of the model are discussed with the necessary simplifications. Criticality calculations were conducted with the Monte Carlo transport code MCNP4C and the continuous-energy nuclear data library ENDF/B-VI. From the calculation results it can be concluded that the combinations of enrichment and fuel kernel radius achieving a critical condition were enrichments of 15-17% at a radius of 200 µm, 13-17% at 225 µm, 12-15% at 250 µm, 11-14% at 275 µm and 10-13% at 300 µm, so these effective enrichments and radii can be considered for the HTR 10 MW. Keywords—MCNP4C, HTR, enrichment, radius, criticality

  19. RTOS kernel in portable electrocardiograph

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  20. RTOS kernel in portable electrocardiograph

    Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A

    2011-01-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  1. Protein fold recognition using geometric kernel data fusion.

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
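The contrast with convex linear combinations can be sketched with the simplest geometry-inspired mean of symmetric positive definite matrices, the log-Euclidean mean (the paper explores several such matrix means; the toy base kernels below are made up):

```python
import numpy as np

def _sym_fun(K, fun):
    # apply a scalar function to the spectrum of a symmetric matrix
    vals, vecs = np.linalg.eigh(K)
    return (vecs * fun(vals)) @ vecs.T

def log_euclidean_mean(kernels, eps=1e-10):
    # expm of the average of logm(K_i): one simple geometry-inspired mean
    # of SPD kernel matrices (a stand-in for the means studied in the paper)
    logs = [_sym_fun(K + eps * np.eye(K.shape[0]), np.log) for K in kernels]
    return _sym_fun(np.mean(logs, axis=0), np.exp)

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 5))
B = rng.normal(size=(20, 5))
K1 = A @ A.T + np.eye(20)    # two toy SPD base kernels
K2 = B @ B.T + np.eye(20)
Kg = log_euclidean_mean([K1, K2])
```

Unlike a weighted sum, this mean multiplies information geometrically (for commuting matrices it reduces to the entrywise-spectral geometric mean), which is the kind of "more involved" combination the abstract refers to.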

  2. Semi-Supervised Kernel PCA

    Walder, Christian; Henao, Ricardo; Mørup, Morten

    We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within class variances similar to Fisher discriminant analysis. The second, LSKPCA is a hybrid of least...... squares regression and kernel PCA. The final LR-KPCA is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets....

  3. COMPARISON OF PARTIAL LEAST SQUARES REGRESSION METHOD ALGORITHMS: NIPALS AND PLS-KERNEL AND AN APPLICATION

    ELİF BULUT

    2013-06-01

    Full Text Available Partial Least Squares Regression (PLSR) is a multivariate statistical method that consists of partial least squares and multiple linear regression analysis. Explanatory variables, X, exhibiting multicollinearity are reduced to components which explain a great amount of the covariance between the explanatory and response variables. These components are few in number and do not suffer from the multicollinearity problem. Multiple linear regression analysis is then applied to those components to model the response variable Y. There are various PLSR algorithms. In this study the NIPALS and PLS-Kernel algorithms will be studied and illustrated on a real data set.
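The NIPALS algorithm for a single response variable follows directly from the description above: each component's weight vector is the covariance direction between X and y, scores are extracted, and both X and y are deflated before the next component. A minimal PLS1 sketch on made-up collinear data (not the paper's data set):

```python
import numpy as np

def pls1_nipals(X, y, n_components=2):
    # NIPALS for PLS regression with a single response (PLS1 sketch)
    X = X - X.mean(axis=0)
    y = y - y.mean()
    T, W, P, q = [], [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w = w / np.linalg.norm(w)        # weight: direction of max covariance
        t = X @ w                        # component scores
        p = X.T @ t / (t @ t)            # X loadings
        c = (y @ t) / (t @ t)            # y loading
        X = X - np.outer(t, p)           # deflate X and y before next component
        y = y - c * t
        T.append(t); W.append(w); P.append(p); q.append(c)
    return np.array(T).T, np.array(W).T, np.array(P).T, np.array(q)

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 6))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=40)   # deliberately collinear columns
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=40)
T, W, P, q = pls1_nipals(X, y, n_components=3)
```

The extracted score vectors are mutually orthogonal by construction, which is why the subsequent regression on them is free of the multicollinearity problem.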

  4. Option Valuation with Volatility Components, Fat Tails, and Non-Monotonic Pricing Kernels

    Babaoglu, Kadir; Christoffersen, Peter; Heston, Steven L.

    We nest multiple volatility components, fat tails and a U-shaped pricing kernel in a single option model and compare their contribution to describing returns and option data. All three features lead to statistically significant model improvements. A U-shaped pricing kernel is economically most im...

  5. Kernel machine methods for integrative analysis of genome-wide methylation and genotyping studies.

    Zhao, Ni; Zhan, Xiang; Huang, Yen-Tsung; Almli, Lynn M; Smith, Alicia; Epstein, Michael P; Conneely, Karen; Wu, Michael C

    2018-03-01

    Many large GWAS consortia are expanding to simultaneously examine the joint role of DNA methylation in addition to genotype in the same subjects. However, integrating information from both data types is challenging. In this paper, we propose a composite kernel machine regression model to test the joint epigenetic and genetic effect. Our approach works at the gene level, which allows for a common unit of analysis across different data types. The model compares the pairwise similarities in the phenotype to the pairwise similarities in the genotype and methylation values; and high correspondence is suggestive of association. A composite kernel is constructed to measure the similarities in the genotype and methylation values between pairs of samples. We demonstrate through simulations and real data applications that the proposed approach can correctly control type I error, and is more robust and powerful than using only the genotype or methylation data in detecting trait-associated genes. We applied our method to investigate the genetic and epigenetic regulation of gene expression in response to stressful life events using data that are collected from the Grady Trauma Project. Within the kernel machine testing framework, our methods allow for heterogeneity in effect sizes, nonlinear, and interactive effects, as well as rapid P-value computation. © 2017 WILEY PERIODICALS, INC.
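The composite-kernel idea, one similarity matrix per data type combined into a single kernel whose alignment with phenotype similarity is then tested, can be sketched as follows. The linear kernels, the fixed weight rho, and the score numerator Q = y'Ky are simplifications of the paper's method (which also accommodates nonlinear kernels, interactions, and proper P-value computation):

```python
import numpy as np

def linear_kernel(Z):
    # pairwise similarity between subjects from one data type (rows = subjects)
    Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)
    return Z @ Z.T / Z.shape[1]

def composite_kernel(K_geno, K_meth, rho=0.5):
    # convex combination of the genotype and methylation kernels
    return rho * K_geno + (1.0 - rho) * K_meth

def score_numerator(y, K):
    # numerator of a variance-component score statistic: large when phenotype
    # similarity lines up with composite genomic/epigenomic similarity
    y = y - y.mean()
    return y @ K @ y

rng = np.random.default_rng(5)
geno = rng.integers(0, 3, size=(60, 20)).astype(float)   # toy gene-level genotypes
meth = rng.uniform(size=(60, 8))                         # toy CpG methylation values
y = geno[:, 0] - geno[:, 1] + 0.5 * rng.normal(size=60)  # phenotype driven by genotype
K = composite_kernel(linear_kernel(geno), linear_kernel(meth), rho=0.5)
Q = score_numerator(y, K)
```

Because both base kernels are positive semi-definite, any convex combination is as well, so the composite remains a valid kernel for the machine-regression framework.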

  6. Multiple Kernel Learning with Data Augmentation

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:49–64, 2016, ACML 2016. Multiple Kernel Learning with Data Augmentation. Khanh Nguyen (nkhanh@deakin.edu.au), ...University, Australia. Editors: Robert J. Durrant and Kee-Eung Kim. Abstract: The motivations of the multiple kernel learning (MKL) approach are to increase kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to

  7. A kernel version of multivariate alteration detection

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

    Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.

  8. A novel adaptive kernel method with kernel centers determined by a support vector regression approach

    Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2012-01-01

    The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an

  9. Complex use of cottonseed kernels

    Glushenkova, A I

    1977-01-01

    A review with 41 references is made on the manufacture of oil, protein, and other products from cottonseed, the effects of gossypol on protein yield and quality and technology of gossypol removal. A process eliminating thermal treatment of the kernels and permitting the production of oil, proteins, phytin, gossypol, sugar, sterols, phosphatides, tocopherols, and residual shells and baggase is described.

  10. GRIM : Leveraging GPUs for Kernel integrity monitoring

    Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris

    2016-01-01

    Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious

  11. Paramecium: An Extensible Object-Based Kernel

    van Doorn, L.; Homburg, P.; Tanenbaum, A.S.

    1995-01-01

    In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection

  12. Local Observed-Score Kernel Equating

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  13. Veto-Consensus Multiple Kernel Learning

    Zhou, Y.; Hu, N.; Spanos, C.J.

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The

  14. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    2013-01-01

    Background: Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results: We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions: It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
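
    The kernel ridge regression that this record compares kernels within can be sketched in a few lines. Only the Gaussian kernel variant is shown (the diffusion kernel, a discretization of it, is not reproduced), and the genotype matrix, phenotype model, bandwidth theta, and regularisation lam are synthetic placeholders, not the paper's data or fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(80, 50)).astype(float)   # synthetic SNP genotypes coded 0/1/2
    beta_true = rng.normal(size=50)
    y = X @ beta_true + rng.normal(scale=0.5, size=80)    # additive phenotype plus noise

    def gaussian_kernel(A, B, theta=50.0):
        """Gaussian kernel on genotype vectors; the relationship matrix of the record."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * theta))

    Xtr, Xte, ytr = X[:60], X[60:], y[:60]
    lam = 1.0                                             # ridge regularisation strength
    K = gaussian_kernel(Xtr, Xtr)
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    pred = gaussian_kernel(Xte, Xtr) @ alpha              # predicted phenotypic values
    ```

    Swapping in a diffusion kernel would change only `gaussian_kernel`; the ridge solve and prediction steps are identical, which is why the two kernels are directly comparable.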

  15. On flame kernel formation and propagation in premixed gases

    Eisazadeh-Far, Kian; Metghalchi, Hameed [Northeastern University, Mechanical and Industrial Engineering Department, Boston, MA 02115 (United States); Parsinejad, Farzan [Chevron Oronite Company LLC, Richmond, CA 94801 (United States); Keck, James C. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2010-12-15

    Flame kernel formation and propagation in premixed gases have been studied experimentally and theoretically. The experiments have been carried out at constant pressure and temperature in a constant volume vessel located in a high speed shadowgraph system. The formation and propagation of the hot plasma kernel has been simulated for inert gas mixtures using a thermodynamic model. The effects of various parameters, including the discharge energy, radiation losses, initial temperature and initial volume of the plasma, have been studied in detail; it is concluded that these are the most important parameters affecting plasma kernel growth. The experiments have been extended to flame kernel formation and propagation in methane/air mixtures, and the effect of energy terms including spark energy, chemical energy and energy losses on flame kernel formation and propagation has been investigated. The inputs for this model are the initial conditions of the mixture and experimental data for flame radii. The results for laminar burning speeds have been compared with previously published results and are in good agreement. (author)

  16. Scientific Computing Kernels on the Cell Processor

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  17. Novel images extraction model using improved delay vector variance feature extraction and multi-kernel neural network for EEG detection and prediction.

    Ge, Jing; Zhang, Guoping

    2015-01-01

    Advanced intelligent methodologies could help detect and predict diseases from EEG signals in cases where manual analysis is inefficient or unavailable, for instance in epileptic seizure detection and prediction. This is because the diversity and evolution of epileptic seizures make them very difficult to detect and identify. Fortunately, the determinism and nonlinearity in a time series can characterize state changes. The literature indicates that Delay Vector Variance (DVV) can examine the nonlinearity of EEG signals to gain insight into them, but very limited work has been done on a quantitative DVV approach, whose outcomes should be evaluated for detecting epileptic seizures. The aim was therefore to develop a new epileptic seizure detection method based on quantitative DVV. This method employed an improved delay vector variance (IDVV) to extract the nonlinearity value as a distinct feature. Then a multi-kernel strategy was proposed for the extreme learning machine (ELM) network to provide precise disease detection and prediction. The nonlinearity proved more sensitive than energy and entropy: 87.5% overall recognition accuracy and 75.0% overall forecasting accuracy were achieved. The proposed IDVV and multi-kernel ELM based method was thus feasible and effective for epileptic EEG detection, and has importance for practical applications.

  18. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Senyue Zhang

    2016-01-01

    Full Text Available Given that the kernel function of an extreme learning machine (ELM) is strongly correlated with its performance, a novel extreme learning machine based on a generalized triangle Hermitian kernel function is proposed in this paper. First, the generalized triangle Hermitian kernel function was constructed as the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and was proved to be a valid kernel function for an extreme learning machine. Then, the learning methodology of the extreme learning machine based on the proposed kernel function was presented. The biggest advantage of the proposed kernel is that its kernel parameter takes values only in the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the structural information of the sample data. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The results demonstrate that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
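
    The kernel ELM training step that the record relies on can be sketched as follows. The paper's triangular x generalized Hermite Dirichlet kernel is not reproduced here; an RBF x polynomial product stands in, using only the general fact that a product of two valid kernels is itself a valid kernel. Data, labels, and the constant C are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 2))
    y = np.where(X[:, 0] ** 2 + X[:, 1] > 1.0, 1.0, -1.0)   # toy binary labels

    def mixed_kernel(A, B, gamma=0.5, degree=2):
        """Product of two valid kernels (RBF x polynomial) is itself a valid kernel;
        the paper combines a triangular and a generalized Hermite Dirichlet kernel
        in the same way."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2) * (A @ B.T + 1.0) ** degree

    Xtr, Xte, ytr, yte = X[:70], X[70:], y[:70], y[70:]
    C = 10.0                                                 # ELM regularisation constant
    K = mixed_kernel(Xtr, Xtr)
    beta = np.linalg.solve(np.eye(len(Xtr)) / C + K, ytr)    # kernel ELM output weights
    pred = np.sign(mixed_kernel(Xte, Xtr) @ beta)
    accuracy = float((pred == yte).mean())
    ```

    Note that training is a single linear solve (I/C + K) beta = t, which is the source of the learning-speed advantage over SVM training that the record reports.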

  19. Bayesian Frequency Domain Identification of LTI Systems with OBFs kernels

    Darwish, M.A.H.; Lataire, J.P.G.; Tóth, R.

    2017-01-01

    Regularised Frequency Response Function (FRF) estimation based on Gaussian process regression formulated directly in the frequency domain has been introduced recently. The underlying approach largely depends on the utilised kernel function, which encodes the relevant prior knowledge on the system

  20. Matrix kernels for MEG and EEG source localization and imaging

    Mosher, J.C.; Lewis, P.S.; Leahy, R.M.

    1994-01-01

    The most widely used model for electroencephalography (EEG) and magnetoencephalography (MEG) assumes a quasi-static approximation of Maxwell's equations and a piecewise homogeneous conductor model. Both models contain an incremental field element that linearly relates an incremental source element (current dipole) to the field or voltage at a distant point. The explicit form of the field element is dependent on the head modeling assumptions and sensor configuration. Proper characterization of this incremental element is crucial to the inverse problem. The field element can be partitioned into the product of a vector dependent on sensor characteristics and a matrix kernel dependent only on head modeling assumptions. We present here the matrix kernels for the general boundary element model (BEM) and for MEG spherical models. We show how these kernels are easily interchanged in a linear algebraic framework that includes sensor specifics such as orientation and gradiometer configuration. We then describe how this kernel is easily applied to "gain" or "transfer" matrices used in multiple dipole and source imaging models

  1. Participation of cob tissue in the transport of medium components into maize kernels cultured in vitro

    Felker, F.C.

    1990-01-01

    Maize (Zea mays L.) kernels cultured in vitro while still attached to cob pieces have been used as a model system to study the physiology of kernel development. In this study, the role of the cob tissue in the uptake of medium components into kernels was examined. Cob tissue was essential for in vitro kernel growth, and better growth occurred with larger cob/kernel ratios. A symplastically transported fluorescent dye readily permeated the endosperm when supplied in the medium, while an apoplastic dye did not. Slicing the cob tissue to disrupt vascular connections, but not apoplastic continuity, greatly reduced [14C]sucrose uptake into kernels. [14C]Sucrose uptake by cob and kernel tissue was reduced 31% and 68%, respectively, by 5 mM PCMBS. L-[14C]glucose was absorbed much more slowly than D-[14C]glucose. These and other results indicate that phloem loading of sugars occurs in the cob tissue. Passage of medium components through the symplast of the cob tissue may be a prerequisite for uptake into the kernel. Simple diffusion from the medium to the kernels is unlikely. Therefore, the ability of substances to be transported into cob tissue cells should be considered in formulating culture medium.

  2. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
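
    The superposition step that the record describes can be sketched directly: the scatter dose at each calculation point is the strength-weighted sum of an isotropic point kernel evaluated at the distance to each source, and the record notes that such kernels can be parametrized as biexponential functions. The coefficients below are illustrative placeholders, not the fitted values from the paper.

    ```python
    import numpy as np

    def biexp_kernel(r, A1=1.0, m1=0.2, A2=0.3, m2=0.05):
        """Hypothetical biexponential scatter dose point kernel (illustrative
        coefficients, not the paper's parametrization)."""
        r = np.maximum(r, 1e-6)            # guard the 1/r^2 singularity at the source
        return (A1 * np.exp(-m1 * r) + A2 * np.exp(-m2 * r)) / (4.0 * np.pi * r ** 2)

    # two monoenergetic point sources with different strengths
    sources = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
    strengths = np.array([1.0, 0.5])

    # calculation points on a plane through the sources
    ax = np.linspace(-5.0, 5.0, 21)
    grid = np.stack(np.meshgrid(ax, ax, [0.0]), axis=-1).reshape(-1, 3)

    # superposition: total scatter dose is the strength-weighted sum of kernels
    r = np.linalg.norm(grid[:, None, :] - sources[None, :, :], axis=-1)
    dose = (biexp_kernel(r) * strengths).sum(axis=1)
    ```

    The isotropic-kernel assumption used here is exactly the simplification the record validates for energies at and below 100 keV.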

  3. Viscozyme L pretreatment on palm kernels improved the aroma of palm kernel oil after kernel roasting.

    Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan

    2018-05-01

    With an interest to enhance the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and simple sugar profile was estimated by using partial least square regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from that of control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Wigner functions defined with Laplace transform kernels.

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels, the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polariton. © 2011 Optical Society of America

  5. Consistent Valuation across Curves Using Pricing Kernels

    Andrea Macrina

    2018-03-01

    Full Text Available The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets. As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised and single-curve models extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.

  6. Ensemble-based forecasting at Horns Rev: Ensemble conversion and kernel dressing

    Pinson, Pierre; Madsen, Henrik

    The obtained ensemble forecasts of wind power are then converted into predictive distributions with an original adaptive kernel dressing method. The shape of the kernels is driven by a mean-variance model, the parameters of which are recursively estimated in order to maximize the overall skill of obtained...
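
    Kernel dressing as described in the record can be sketched as follows: each ensemble member is "dressed" with a Gaussian kernel, and the predictive CDF is the average of the member CDFs. The mean-variance model linking kernel width to ensemble spread is reduced here to a fixed assumed form with placeholder coefficients a and b (the paper estimates these recursively).

    ```python
    import numpy as np
    from math import erf, sqrt

    def dressed_cdf(x, members, a=0.1, b=0.5):
        """Predictive CDF from Gaussian kernel dressing of an ensemble.
        Kernel variance follows an assumed mean-variance model
        sigma^2 = a + b * Var(ensemble); a and b are illustrative placeholders."""
        sigma = sqrt(a + b * float(np.var(members)))
        return float(np.mean([0.5 * (1.0 + erf((x - m) / (sigma * sqrt(2.0))))
                              for m in members]))

    members = [3.1, 2.7, 3.6, 2.9, 3.3]      # hypothetical wind power ensemble (MW)
    p_below = dressed_cdf(3.0, members)      # probability of power below 3.0 MW
    ```

    The resulting predictive distribution is a Gaussian mixture with one component per member, so it inherits the ensemble's location while the dressing controls its sharpness.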

  7. Analysis and regularization of the thin-wire integral equation with reduced kernel

    Beurden, van M.C.; Tijhuis, A.G.

    2007-01-01

    For the straight wire, modeled as a hollow tube, we establish a conditional equivalence relation between the integral equations with exact and reduced kernel. This relation allows us to examine the existence and uniqueness conditions for the integral equation with reduced kernel, based on a local

  8. Genetic, Genomic, and Breeding Approaches to Further Explore Kernel Composition Traits and Grain Yield in Maize

    Da Silva, Helena Sofia Pereira

    2009-01-01

    Maize ("Zea mays L.") is a model species well suited for the dissection of complex traits which are often of commercial value. The purpose of this research was to gain a deeper understanding of the genetic control of maize kernel composition traits starch, protein, and oil concentration, and also kernel weight and grain yield. Germplasm with…

  9. Testing Infrastructure for Operating System Kernel Development

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel.

  10. Kernel parameter dependence in spatial factor analysis

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence of the results on the kernel width. The 2,097 samples, each covering on average 5 km2, are analyzed chemically for the content of 41 elements.
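
    The kernel PCA machinery underlying the record can be sketched as follows, including the kernel-width dependence it stresses. Plain kernel PCA is shown as a stand-in: kernel MAF would additionally involve a spatially shifted covariance, which is omitted here, and the data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(50, 3))                    # stand-in for geochemistry samples

    def rbf_kernel(A, width):
        d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    def kernel_pca_scores(K, n_comp=2):
        """Project onto the leading kernel principal components."""
        n = K.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        Kc = J @ K @ J                              # double-centre the kernel matrix
        vals, vecs = np.linalg.eigh(Kc)
        vals, vecs = vals[::-1], vecs[:, ::-1]      # descending eigenvalue order
        return vecs[:, :n_comp] * np.sqrt(np.maximum(vals[:n_comp], 0.0))

    # the extracted factors depend on the kernel width, as the record stresses
    scores_narrow = kernel_pca_scores(rbf_kernel(X, width=0.5))
    scores_wide = kernel_pca_scores(rbf_kernel(X, width=5.0))
    ```

    A narrow kernel emphasises local structure while a wide kernel approaches linear PCA, which is exactly the width dependence the paper examines.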

  11. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.
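
    The record's key device, computing neighbourhood information of unlabelled data in the feature space induced by the kernel, rests on the kernel trick for distances, which can be sketched in a few lines (the RBF kernel and the sample points are illustrative):

    ```python
    import numpy as np

    def rbf(x, y, gamma=0.5):
        return float(np.exp(-gamma * float(np.sum((x - y) ** 2))))

    def feature_space_dist2(x, y, k):
        """Squared distance in the kernel-induced feature space, without the
        explicit mapping: ||phi(x) - phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y)."""
        return k(x, x) - 2.0 * k(x, y) + k(y, y)

    x, y = np.array([0.0, 0.0]), np.array([1.0, 1.0])
    d2 = feature_space_dist2(x, y, rbf)   # usable for neighbourhood graphs over unlabelled points
    ```

    Any valid kernel can be plugged in for `k`, which is what lets the semi-supervised neighbourhood computation follow the kernel being optimised.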

  12. STRESS RESPONSE STUDIES USING ANIMAL MODELS

    This presentation will provide the evidence that ozone exposure in animal models induce neuroendocrine stress response and this stress response modulates lung injury and inflammation through adrenergic and glucocorticoid receptors.

  13. Delimiting areas of endemism through kernel interpolation.

    Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

    2015-01-01

    We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distribution of species through a kernel interpolation of centroids of species distribution and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE was shown to be effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
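
    The ingredients of GIE described in the record, species centroids, influence radii from the farthest occurrence, and kernel interpolation over centroids, can be sketched as below. The Gaussian-kernel surface and the tying of bandwidth to the influence radius are simplifying assumptions for illustration, not the published GIE protocol.

    ```python
    import numpy as np

    def centroid_and_radius(occurrences):
        """Species centroid and its area of influence: the distance from the
        centroid to the farthest occurrence point."""
        c = occurrences.mean(axis=0)
        r = float(np.linalg.norm(occurrences - c, axis=1).max())
        return c, r

    # toy occurrence records (lon, lat) for three hypothetical species
    species = [np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]),
               np.array([[0.5, 0.5], [1.0, 1.0]]),
               np.array([[5.0, 5.0], [6.0, 5.0]])]
    cents = [centroid_and_radius(occ) for occ in species]

    def endemism_surface(pt, cents):
        """Kernel interpolation over centroids: Gaussian kernels with bandwidths
        tied to each species' influence radius (a simplification of GIE)."""
        return sum(np.exp(-float(np.sum((pt - c) ** 2)) / (2.0 * max(r, 1e-6) ** 2))
                   for c, r in cents)

    inside = endemism_surface(np.array([0.5, 0.5]), cents)   # overlapping ranges
    outside = endemism_surface(np.array([10.0, 10.0]), cents)
    ```

    Peaks of such a surface mark places where several species' distributions overlap, which is how grid-free areas of endemism with fuzzy edges emerge.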

  15. Multiscale asymmetric orthogonal wavelet kernel for linear programming support vector learning and nonlinear dynamic systems identification.

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2014-05-01

    Support vector regression for approximating nonlinear dynamic systems is more delicate than the approximation of indicator functions in support vector classification, particularly for systems that involve multitudes of time scales in their sampled data. The kernel used for support vector learning determines the class of functions from which a support vector machine can draw its solution, and the choice of kernel significantly influences the performance of a support vector machine. In this paper, to bridge the gap between wavelet multiresolution analysis and kernel learning, the closed-form orthogonal wavelet is exploited to construct new multiscale asymmetric orthogonal wavelet kernels for linear programming support vector learning. The closed-form multiscale orthogonal wavelet kernel provides a systematic framework to implement multiscale kernel learning via dyadic dilations and also enables us to represent complex nonlinear dynamics effectively. To demonstrate the superiority of the proposed multiscale wavelet kernel in identifying complex nonlinear dynamic systems, two case studies are presented that aim at building parallel models on benchmark datasets. The development of parallel models that address the long-term/mid-term prediction issue is more intricate and challenging than the identification of series-parallel models where only one-step ahead prediction is required. Simulation results illustrate the effectiveness of the proposed multiscale kernel learning.

  16. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.

  17. Rational kernels for Arabic Root Extraction and Text Classification

    Attia Nehar

    2016-04-01

    Full Text Available In this paper, we address the problems of Arabic text classification and root extraction using transducers and rational kernels. We introduce a new root extraction approach based on the use of Arabic patterns (Pattern Based Stemmer). Transducers are used to model these patterns, and root extraction is done without relying on any dictionary. Using transducers for extracting roots, documents are transformed into finite state transducers. This document representation allows us to use and explore rational kernels as a framework for Arabic text classification. Root extraction experiments conducted on three word collections yield an accuracy of 75.6%. Classification experiments are done on the Saudi Press Agency dataset, and N-gram kernels are tested with different values of N. Accuracy and F1 reach 90.79% and 62.93%, respectively. These results show that our approach, when compared with other approaches, is promising, especially in terms of accuracy and F1.
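
    The N-gram kernels tested in the record are, in their simplest string form, spectrum kernels: inner products of n-gram count vectors. A minimal sketch follows; rational kernels generalise this by computing the same kind of sum over the outputs of weighted transducers, which is not reproduced here.

    ```python
    from collections import Counter

    def ngram_counts(s, n):
        """Multiset of character n-grams of a string."""
        return Counter(s[i:i + n] for i in range(len(s) - n + 1))

    def ngram_kernel(s, t, n=3):
        """N-gram (spectrum) kernel: inner product of n-gram count vectors."""
        cs, ct = ngram_counts(s, n), ngram_counts(t, n)
        return sum(c * ct[g] for g, c in cs.items())

    k = ngram_kernel("kernel methods", "kernel machines", n=3)
    ```

    The two example strings share six trigrams ("ker", "ern", "rne", "nel", "el ", "l m"), so the kernel value is 6; such kernels plug directly into any kernel classifier.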

  18. Response Styles in the Partial Credit Model

    Tutz, Gerhard; Schauberger, Gunther; Berger, Moritz

    2016-01-01

    In the modelling of ordinal responses in psychological measurement and survey-based research, response styles that represent specific answering patterns of respondents are typically ignored. One consequence is that estimates of item parameters can be poor and considerably biased. The focus here is on the modelling of a tendency to extreme or middle categories. An extension of the Partial Credit Model is proposed that explicitly accounts for this specific response style. In contrast to exi...

  19. Theoretical developments for interpreting kernel spectral clustering from alternative viewpoints

    Diego Peluffo-Ordóñez

    2017-08-01

    Full Text Available To perform an exploration process over complex structured data in unsupervised settings, the so-called kernel spectral clustering (KSC) is one of the most recommended and appealing approaches, given its versatility and elegant formulation. In this work, we explore the relationship between KSC and other well-known approaches, namely normalized cut clustering and kernel k-means. To do so, we first deduce a generic KSC model from a primal-dual formulation based on least-squares support-vector machines (LS-SVM). For experiments, KSC as well as the other considered methods are assessed on image segmentation tasks to prove their usability.

  20. Validation of Born Traveltime Kernels

    Baig, A. M.; Dahlen, F. A.; Hung, S.

    2001-12-01

    Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first order dependence of heterogeneity on the cross-correlated traveltimes (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of the traveltime compare to various theoretical predictions in a given regime.

  1. RKRD: Runtime Kernel Rootkit Detection

    Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.

    In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.

  2. A survey of kernel-type estimators for copula and their applications

    Sumarjaya, I. W.

    2017-10-01

    Copulas have been widely used to model nonlinear dependence structure. Main applications of copulas include areas such as finance, insurance, hydrology, and rainfall, to name but a few. The flexibility of copulas allows researchers to model dependence structure beyond the Gaussian distribution. Basically, a copula is a function that couples multivariate distribution functions to their one-dimensional marginal distribution functions. In general, there are three methods to estimate copulas: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas such as the mirror-reflection kernel, the beta kernel, the transformation method, and the local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, albeit variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
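
The mirror-reflection estimator named in the abstract corrects the boundary bias of an ordinary kernel density estimate on the unit square by reflecting each pseudo-observation about the edges. A minimal sketch, with assumed parameter names and a toy dependent sample:

```python
import numpy as np

def mirror_reflection_copula_kde(u, v, pts, h=0.1):
    # Mirror-reflection (boundary-corrected) Gaussian kernel estimator of
    # a copula density at (u, v): each pseudo-observation in [0, 1]^2 is
    # reflected about the edges of the unit square (9 copies in 2-D), and
    # an ordinary product-kernel KDE is evaluated on the reflected sample.
    n = len(pts)
    total = 0.0
    for x, y in pts:
        for rx in (x, -x, 2.0 - x):
            for ry in (y, -y, 2.0 - y):
                total += np.exp(-0.5 * ((u - rx) ** 2 + (v - ry) ** 2) / h ** 2)
    return total / (n * 2.0 * np.pi * h ** 2)

# toy pseudo-observations with strong positive dependence
rng = np.random.default_rng(0)
z = rng.uniform(size=(200, 1))
pts = np.clip(np.hstack([z, 0.8 * z + 0.2 * rng.uniform(size=(200, 1))]), 0.0, 1.0)
on_diag = mirror_reflection_copula_kde(0.5, 0.5, pts)
off_diag = mirror_reflection_copula_kde(0.9, 0.1, pts)
```

With positively dependent data the estimated density concentrates along the diagonal, so the estimate at (0.5, 0.5) exceeds the one at (0.9, 0.1).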

  3. A multi-resolution approach to heat kernels on discrete surfaces

    Vaxman, Amir

    2010-07-26

    Studying the behavior of the heat diffusion process on a manifold is emerging as an important tool for analyzing the geometry of the manifold. Unfortunately, the high complexity of the computation of the heat kernel - the key to the diffusion process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel approximation method for the heat kernel at short times results in an efficient and robust algorithm for computing the heat kernels of detailed models. We show experimentally that our method can achieve good approximations in a fraction of the time required by traditional algorithms. Finally, we demonstrate how these heat kernels can be used to improve a diffusion-based feature extraction algorithm. © 2010 ACM.
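
For a discrete surface represented as a graph, the heat kernel the abstract refers to is the matrix exponential of the Laplacian. The spectral construction below is the standard (expensive) baseline that multi-resolution methods accelerate; it is a sketch, not the paper's algorithm:

```python
import numpy as np

def heat_kernel(W, t):
    # Heat kernel H_t = exp(-t L) of a graph with symmetric adjacency
    # matrix W, via eigendecomposition of the combinatorial Laplacian
    # L = D - W. H_t[i, j] is the heat at vertex i after time t from a
    # unit source at vertex j.
    L = np.diag(W.sum(axis=1)) - W
    lam, U = np.linalg.eigh(L)               # L is symmetric PSD
    return U @ np.diag(np.exp(-t * lam)) @ U.T

# a 4-cycle graph as a tiny "mesh"
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = heat_kernel(W, t=0.5)
```

The kernel is symmetric, conserves heat (rows sum to 1), and at short times keeps most heat near the source, which is the regime the paper's short-time approximation exploits.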

  4. On Geodesic Exponential Kernels

    Feragen, Aasa; Lauze, François; Hauberg, Søren

    2015-01-01

    This extended abstract summarizes work presented at CVPR 2015 [1]. Standard statistics and machine learning tools require input data residing in a Euclidean space. However, many types of data are more faithfully represented in general nonlinear metric spaces or Riemannian manifolds, e.g. shapes, symmetric positive definite matrices, human poses or graphs. The underlying metric space captures domain-specific knowledge, e.g. non-linear constraints, which is available a priori. The intrinsic geodesic metric encodes this knowledge, often leading to improved statistical models.

  5. The effect of STDP temporal kernel structure on the learning dynamics of single excitatory and inhibitory synapses.

    Yotam Luz

    Full Text Available Spike-Timing Dependent Plasticity (STDP) is characterized by a wide range of temporal kernels. However, much of the theoretical work has focused on a specific kernel: the "temporally asymmetric Hebbian" learning rules. Previous studies linked excitatory STDP to positive feedback that can account for the emergence of response selectivity. Inhibitory plasticity was associated with negative feedback that can balance the excitatory and inhibitory inputs. Here we study the possible computational role of the temporal structure of the STDP. We represent the STDP as a superposition of two processes: potentiation and depression. This allows us to model a wide range of experimentally observed STDP kernels, from Hebbian to anti-Hebbian, by varying a single parameter. We investigate the STDP dynamics of a single excitatory or inhibitory synapse in a purely feed-forward architecture. We derive a mean-field Fokker-Planck dynamics for the synaptic weight and analyze the effect of the STDP structure on the fixed points of the mean-field dynamics. We find a phase transition along the Hebbian to anti-Hebbian parameter from a phase that is characterized by a unimodal distribution of the synaptic weight, in which the STDP dynamics is governed by negative feedback, to a phase with positive feedback characterized by a bimodal distribution. The critical point of this transition depends on general properties of the STDP dynamics and not on the fine details. Namely, the dynamics is affected by the pre-post correlations only via a single number that quantifies its overlap with the STDP kernel. We find that by manipulating the STDP temporal kernel, negative feedback can be induced in excitatory synapses and positive feedback in inhibitory ones. Moreover, there is an exact symmetry between inhibitory and excitatory plasticity, i.e., for every STDP rule of an inhibitory synapse there exists an STDP rule for an excitatory synapse such that their dynamics are identical.
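
The "superposition of potentiation and depression with a single sweep parameter" can be sketched as follows. The parametrization is illustrative (exponential lobes, one mixing parameter alpha), not necessarily the paper's exact form:

```python
import numpy as np

def stdp_kernel(dt, alpha, tau=20.0):
    # STDP temporal kernel as a superposition of a potentiation and a
    # depression process, with one mixing parameter alpha in [0, 1]
    # sweeping from Hebbian (alpha = 0) to anti-Hebbian (alpha = 1).
    # dt > 0 means the presynaptic spike precedes the postsynaptic one.
    k_pre_post = np.where(dt >= 0, np.exp(-np.abs(dt) / tau), 0.0)
    k_post_pre = np.where(dt < 0, np.exp(-np.abs(dt) / tau), 0.0)
    potentiation = (1 - alpha) * k_pre_post + alpha * k_post_pre
    depression = alpha * k_pre_post + (1 - alpha) * k_post_pre
    return potentiation - depression

dts = np.array([10.0, -10.0])        # pre-before-post, post-before-pre
hebb = stdp_kernel(dts, alpha=0.0)   # classic asymmetric Hebbian
anti = stdp_kernel(dts, alpha=1.0)   # its anti-Hebbian mirror image
```

At alpha = 0 pre-before-post pairings potentiate and the reverse ordering depresses; alpha = 1 flips both signs, and alpha = 0.5 cancels exactly, illustrating the continuous Hebbian-to-anti-Hebbian sweep.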

  6. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies

    Manitz, Juliane; Burger, Patricia; Amos, Christopher I.; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility. PMID:28785300
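
The boosting scheme described (iteratively selecting the pathway kernel that most improves the fit) can be sketched as generic componentwise L2-boosting with kernel ridge base-learners. This is an illustrative stand-in, not the LKMT-based implementation, and the toy "pathway" kernels are made up:

```python
import numpy as np

def kernel_boost(y, kernels, n_iter=50, nu=0.1, ridge=1.0):
    # Componentwise boosting with kernel base-learners: at each iteration,
    # fit a kernel ridge smoother to the current residuals for every
    # candidate kernel (one per pathway) and add a shrunken copy (step
    # size nu) of the best-fitting one.
    n = len(y)
    f = np.zeros(n)
    selected = []
    for _ in range(n_iter):
        r = y - f
        best, best_fit, best_err = None, None, np.inf
        for j, K in enumerate(kernels):
            fit = K @ np.linalg.solve(K + ridge * np.eye(n), r)
            err = np.sum((r - fit) ** 2)
            if err < best_err:
                best, best_fit, best_err = j, fit, err
        f += nu * best_fit
        selected.append(best)
    return f, selected

rng = np.random.default_rng(4)
X1 = rng.normal(size=(60, 3))    # features of "pathway" 1
X2 = rng.normal(size=(60, 3))    # features of "pathway" 2 (irrelevant)
K1 = np.exp(-0.5 * np.sum((X1[:, None] - X1[None]) ** 2, axis=2))
K2 = np.exp(-0.5 * np.sum((X2[:, None] - X2[None]) ** 2, axis=2))
y = np.sin(X1[:, 0])             # signal lives in pathway 1 only
f, selected = kernel_boost(y, [K1, K2])
```

Because the signal depends only on the first feature set, the boosting path selects the first kernel far more often, yielding the sparse, interpretable pathway selection the abstract describes.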

  7. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies.

    Friedrichs, Stefanie; Manitz, Juliane; Burger, Patricia; Amos, Christopher I; Risch, Angela; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike; Hofner, Benjamin

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility.

  8. Corruption clubs: empirical evidence from kernel density estimates

    Herzfeld, T.; Weiss, Ch.

    2007-01-01

    A common finding of many analytical models is the existence of multiple equilibria of corruption. Countries characterized by the same economic, social and cultural background do not necessarily experience the same levels of corruption. In this article, we use Kernel Density Estimation techniques to

  9. The Kernel Estimation in Biosystems Engineering

    Esperanza Ayuga Téllez

    2008-04-01

    Full Text Available In many fields of biosystems engineering, it is common to find works in which statistical information is analysed that violates the basic hypotheses necessary for the conventional forecasting methods. For those situations, it is necessary to find alternative methods that allow the statistical analysis considering those infringements. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are used rather than an "a priori" assumption of the global target function shape (e.g., linear or quadratic). In this paper a few basic rules of decision are enunciated for the application of the non-parametric estimation method. These statistical rules set up the first step to build a user-method interface for the consistent application of kernel estimation by non-expert users. To reach this aim, univariate and multivariate estimation methods and density functions were analysed, as well as regression estimators. In some cases the models to be applied in different situations, based on simulations, were defined. Different biosystems engineering applications of kernel estimation are also analysed in this review.
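
A canonical example of the locally fitted, weak-assumption estimators the review surveys is Nadaraya-Watson kernel regression. A minimal sketch with assumed names and a noiseless toy target:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    # Nadaraya-Watson kernel regression: estimate E[y | x = x0] as a
    # locally weighted average of y, with Gaussian weights centred at x0
    # and bandwidth h. No global shape (linear, quadratic, ...) assumed.
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

x = np.linspace(0.0, 10.0, 201)
y = np.sin(x)                        # noiseless target, for illustration
est = nadaraya_watson(3.0, x, y, h=0.3)
```

Only continuity of the target is needed; the bandwidth h plays the role of the neighbourhood size and is the main tuning decision the paper's rules address.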

  10. Formal truncations of connected kernel equations

    Dixon, R.M.

    1977-01-01

    The Connected Kernel Equations (CKE) of Alt, Grassberger and Sandhas (AGS); Kouri, Levin and Tobocman (KLT); and Bencze, Redish and Sloan (BRS) are compared against reaction theory criteria after formal channel-space and/or operator truncations have been introduced. The Channel Coupling Class concept is used to study the structure of these CKEs. The related wave-function formalisms of Sandhas, of L'Huillier, Redish and Tandy, and of Kouri, Krueger and Levin are also presented. New N-body connected kernel equations which are generalizations of the Lovelace three-body equations are derived. A method for systematically constructing fewer-body models from the N-body BRS and generalized Lovelace (GL) equations is developed. The formally truncated AGS, BRS, KLT and GL equations are analyzed by employing the criteria of reciprocity and two-cluster unitarity. Reciprocity considerations suggest that formal truncations of the BRS, KLT and GL equations can lead to reciprocity-violating results. This study suggests that atomic problems should employ three-cluster connected truncations and that two-cluster connected truncations should be a useful starting point for nuclear systems

  11. Cold moderator scattering kernels

    MacFarlane, R.E.

    1989-01-01

    New thermal-scattering-law files in ENDF format have been developed for solid methane, liquid methane, liquid ortho- and para-hydrogen, and liquid ortho- and para-deuterium using up-to-date models that include such effects as incoherent elastic scattering in the solid, diffusion and hindered vibrations and rotations in the liquids, and spin correlations for the hydrogen and deuterium. These files were generated with the new LEAPR module of the NJOY Nuclear Data Processing System. Other modules of this system were used to produce cross sections for these moderators in the correct format for the continuous-energy Monte Carlo code (MCNP) being used for cold-moderator-design calculations at the Los Alamos Neutron Scattering Center (LANSCE). 20 refs., 14 figs

  12. Stochastic Still Water Response Model

    Friis-Hansen, Peter; Ditlevsen, Ove Dalager

    2002-01-01

    In this study a stochastic field model for the still water loading is formulated where the statistics (mean value, standard deviation, and correlation) of the sectional forces are obtained by integration of the load field over the relevant part of the ship structure. The objective of the model is to establish the stochastic load field conditional on a given draft and trim of the vessel. The model contributes to a realistic modelling of the stochastic load processes to be used in a reliability evaluation of the ship hull. Emphasis is given to container vessels. It turns out that an important parameter of the stochastic cargo field model is the mean number of containers delivered by each customer.

  13. Theory of reproducing kernels and applications

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  14. Convergence of barycentric coordinates to barycentric kernels

    Kosinka, Jiří

    2016-02-12

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  15. Convergence of barycentric coordinates to barycentric kernels

    Kosinka, Jiří; Barton, Michael

    2016-01-01

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  16. Kernel principal component analysis for change detection

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially.
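
Kernel PCA with a Gaussian kernel, as used in the record above, can be sketched generically: build the kernel matrix, double-centre it in feature space, and eigendecompose. This is the textbook construction, not the authors' change-detection pipeline:

```python
import numpy as np

def kernel_pca(X, gamma, n_components=2):
    # Kernel PCA with a Gaussian (RBF) kernel: form the kernel matrix,
    # centre it in feature space, eigendecompose, and return the training
    # points' coordinates along the leading nonlinear components.
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                         # centring in feature space
    lam, V = np.linalg.eigh(Kc)            # ascending eigenvalues
    lam = lam[::-1][:n_components]
    V = V[:, ::-1][:, :n_components]
    return V * np.sqrt(np.maximum(lam, 0.0))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))     # stand-in for two co-registered bands
Z = kernel_pca(X, gamma=0.5)
```

For bitemporal imagery, each row of X would hold the two acquisitions of one pixel; change pixels then separate along a minor component even when the no-change relation is nonlinear.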

  17. Composite population kernels in ytterbium-buffer collisions studied by means of laser-saturated absorption

    Zhu, X.

    1986-01-01

    We present a systematic study of composite population kernels for ¹⁷⁴Yb collisions with He, Ar, and Xe buffer gases, using laser-saturation spectroscopy. ¹⁷⁴Yb is chosen as the active species because of the simple structure of its ¹S₀-³P₁ resonance transition (λ = 556 nm). Elastic collisions are modeled by means of a composite collision kernel, an expression of which is explicitly derived based on arguments of a hard-sphere potential and two-category collisions. The corresponding coupled population-rate equations are solved by iteration to obtain an expression for the saturated-absorption line shape. This expression is fit to the data to obtain information about the composite kernel, along with reasonable values for other parameters. The results confirm that a composite kernel is more general and realistic than a single-component kernel, and the generality in principle and the practical necessity of the former are discussed

  18. Learning molecular energies using localized graph kernels

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
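
The random-walk graph kernel underlying approaches like GRAPE compares two adjacency matrices by counting matching walks in their direct-product graph. The geometric variant below is a generic sketch; the paper's localized formulation and feature labelling are omitted:

```python
import numpy as np

def random_walk_graph_kernel(A1, A2, lam=0.1):
    # Geometric random-walk kernel between two graphs: counts matching
    # walks of all lengths in the direct-product graph,
    #   K(G1, G2) = 1^T (I - lam * kron(A1, A2))^{-1} 1,
    # convergent when lam < 1 / spectral radius of kron(A1, A2).
    Ax = np.kron(A1, A2)
    n = Ax.shape[0]
    one = np.ones(n)
    return one @ np.linalg.solve(np.eye(n) - lam * Ax, one)

tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)    # triangle
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # 3-node path
k_tt = random_walk_graph_kernel(tri, tri)
k_tp = random_walk_graph_kernel(tri, path)
```

Because walk counts are invariant to vertex relabelling, the similarity is automatically permutation invariant, which is the symmetry property the abstract emphasizes.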

  19. A Geographic Weighted Regression for Rural Highways Crashes Modelling Using the Gaussian and Tricube Kernels: A Case Study of USA Rural Highways

    Aghayari, M.; Pahlavani, P.; Bigdeli, B.

    2017-09-01

    Based on the World Health Organization (WHO) report, driving incidents are counted as one of the eight leading causes of death in the world. The purpose of this paper is to develop a method for regression on effective parameters of highway crashes. In the traditional methods, it was assumed that the data are completely independent and the environment is homogeneous, whereas crashes are spatial events occurring in geographic space and carry spatial data. Spatial data have features such as spatial autocorrelation and spatial non-stationarity that make working with them difficult. The proposed method has been implemented on a set of records of fatal crashes that occurred on highways connecting eight eastern states of the US. These data were recorded between the years 2007 and 2009. In this study, we have used the GWR method with two kernels, Gaussian and Tricube. The number of casualties has been considered as the dependent variable, and number of persons in crash, road alignment, number of lanes, pavement type, surface condition, road fence, light condition, vehicle type, weather, drunk driver, speed limitation, harmful event, road profile, and junction type have been considered as explanatory variables, following previous studies using the GWR method. We have compared the results of the implementation with the OLS method. Results showed that R2 for the OLS method is 0.0654 and for the proposed method is 0.9196, which implies the proposed GWR is a better method for regression on rural highway crashes.
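
Geographically weighted regression fits a separate weighted least-squares model at each location, with kernel weights that decay with distance from the regression point. A minimal sketch with synthetic data (variable names and the toy spatial trend are assumptions, not the paper's dataset):

```python
import numpy as np

def gwr_fit_at(point, coords, X, y, bandwidth):
    # GWR at one location: weighted least squares with Gaussian kernel
    # weights w_i = exp(-0.5 * (d_i / bandwidth)**2), d_i the distance of
    # observation i from the regression point. (The Tricube kernel would
    # instead use (1 - (d/b)**3)**3 for d < b, else 0.)
    d = np.linalg.norm(coords - point, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    XtWX = X.T @ (X * w[:, None])
    XtWy = X.T @ (w * y)
    return np.linalg.solve(XtWX, XtWy)

# Synthetic data whose slope drifts from west to east, so local fits differ.
rng = np.random.default_rng(2)
coords = rng.uniform(0.0, 1.0, size=(200, 2))   # (x, y) locations
x1 = rng.normal(size=200)                       # one explanatory variable
X = np.column_stack([np.ones(200), x1])         # intercept + x1
y = (1.0 + 2.0 * coords[:, 0]) * x1             # slope grows eastward
beta_west = gwr_fit_at(np.array([0.0, 0.5]), coords, X, y, bandwidth=0.2)
beta_east = gwr_fit_at(np.array([1.0, 0.5]), coords, X, y, bandwidth=0.2)
```

The locally varying slope estimates are exactly the spatial non-stationarity that a single global OLS fit averages away, which is why GWR's R2 can far exceed OLS's on crash data.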

  20. A generalized linear model for estimating spectrotemporal receptive fields from responses to natural sounds.

    Ana Calabrese

    2011-01-01

    Full Text Available In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: (1) a stimulus filter (STRF); and (2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation-limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons.
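
The two-filter GLM structure (stimulus filter plus post-spike filter) can be sketched as a discrete-time simulator. The paper's model uses a 2-D spectrotemporal STRF and a point-process likelihood; the 1-D Bernoulli version below, with assumed filter values, is only an illustration:

```python
import numpy as np

def glm_simulate(stim, k, h, b, rng):
    # Spiking GLM: the spike probability at time t depends on
    # (1) a stimulus filter k over recent stimulus frames and
    # (2) a post-spike filter h over the cell's own recent spikes,
    # plus a bias b, passed through a logistic nonlinearity.
    T, dk, dh = len(stim), len(k), len(h)
    spikes = np.zeros(T)
    for t in range(T):
        s_hist = stim[max(0, t - dk + 1): t + 1][::-1]   # newest first
        r_hist = spikes[max(0, t - dh): t][::-1]         # previous spikes
        drive = b + k[:len(s_hist)] @ s_hist + h[:len(r_hist)] @ r_hist
        p = 1.0 / (1.0 + np.exp(-drive))
        spikes[t] = float(rng.uniform() < p)
    return spikes

rng = np.random.default_rng(3)
stim = rng.normal(size=2000)
k = np.array([1.5, 0.8, 0.3])    # stimulus filter (STRF analogue, assumed)
h = np.array([-5.0, -2.0])       # post-spike filter: strong refractoriness
spikes = glm_simulate(stim, k, h, b=-1.0, rng=rng)
p_after_spike = spikes[1:][spikes[:-1] > 0].mean()
p_after_silence = spikes[1:][spikes[:-1] == 0].mean()
```

The negative post-spike filter reproduces refractoriness: the empirical spike probability immediately after a spike is much lower than after silence, a spike-history dependency that pure STRF models such as NRC cannot capture.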

  1. Process for producing metal oxide kernels and kernels so obtained

    Lelievre, Bernard; Feugier, Andre.

    1974-01-01

    The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high temperature nuclear reactors. This process consists of adding to an aqueous solution of at least one metallic salt, particularly actinide nitrates, at least one chemical compound capable of releasing ammonia, then dispersing the solution thus obtained drop by drop into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gel reaction is a mixture of two organic liquids, one acting as solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably an amine is used as the product capable of extracting the anions. Additionally, an alcohol that causes a partial dehydration of the drops can be employed as solvent, thus helping to increase the resistance of the particles [fr

  2. Hilbertian kernels and spline functions

    Atteia, M

    1992-01-01

    In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline function theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from the minimization of a quadratic functional but, more generally, for splines considered as piecewise-defined functions. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  3. Notes on a storage manager for the Clouds kernel

    Pitts, David V.; Spafford, Eugene H.

    1986-01-01

    The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.

  4. Structured Kernel Subspace Learning for Autonomous Robot Navigation.

    Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai

    2018-02-14

    This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to safely navigate in a dynamic environment due to challenges, such as the varying quality and complexity of training data with unwanted noises. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on the recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning in various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.
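
The symmetric, positive semi-definite low-rank kernel approximation the abstract describes can be illustrated with its plainest baseline, eigenvalue truncation. The paper instead learns the factorization via nuclear-/l1-norm minimization; this sketch only shows the structure being approximated:

```python
import numpy as np

def low_rank_kernel_approx(K, rank):
    # Best rank-r approximation of a symmetric PSD kernel matrix via
    # truncated eigendecomposition; the result stays symmetric and
    # positive semi-definite, as required for Gaussian process regression.
    lam, U = np.linalg.eigh(K)
    idx = np.argsort(lam)[::-1][:rank]
    lam_r = np.clip(lam[idx], 0.0, None)
    U_r = U[:, idx]
    return (U_r * lam_r) @ U_r.T

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 2))
D2 = np.sum((X[:, None] - X[None]) ** 2, axis=2)
K = np.exp(-D2)                              # RBF kernel matrix
K5 = low_rank_kernel_approx(K, rank=5)
err5 = np.linalg.norm(K - K5)
err10 = np.linalg.norm(K - low_rank_kernel_approx(K, rank=10))
```

Because RBF kernel spectra decay quickly, a small rank already captures most of the matrix, and the approximation error shrinks monotonically as the rank grows.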

  5. Accuracy of approximations of solutions to Fredholm equations by kernel methods

    Gnecco, G.; Kůrková, Věra; Sanguineti, M.

    2012-01-01

    Roč. 218, č. 14 (2012), s. 7481-7497 ISSN 0096-3003 R&D Projects: GA ČR GAP202/11/1368; GA MŠk OC10047 Grant - others:CNR-AV ČR(CZ-IT) Project 2010–2012 “Complexity of Neural -Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords : approximate solutions to integral equations * radial and kernel-based networks * Gaussian kernels * model complexity * analysis of algorithms Subject RIV: IN - Informatics, Computer Science Impact factor: 1.349, year: 2012

  6. A GEOGRAPHIC WEIGHTED REGRESSION FOR RURAL HIGHWAYS CRASHES MODELLING USING THE GAUSSIAN AND TRICUBE KERNELS: A CASE STUDY OF USA RURAL HIGHWAYS

    M. Aghayari

    2017-09-01

    Full Text Available Based on the World Health Organization (WHO) report, driving incidents are counted as one of the eight leading causes of death in the world. The purpose of this paper is to develop a method for regression on effective parameters of highway crashes. In the traditional methods, it was assumed that the data are completely independent and the environment is homogeneous, whereas crashes are spatial events occurring in geographic space and carry spatial data. Spatial data have features such as spatial autocorrelation and spatial non-stationarity that make working with them difficult. The proposed method has been implemented on a set of records of fatal crashes that occurred on highways connecting eight eastern states of the US. These data were recorded between the years 2007 and 2009. In this study, we have used the GWR method with two kernels, Gaussian and Tricube. The number of casualties has been considered as the dependent variable, and number of persons in crash, road alignment, number of lanes, pavement type, surface condition, road fence, light condition, vehicle type, weather, drunk driver, speed limitation, harmful event, road profile, and junction type have been considered as explanatory variables, following previous studies using the GWR method. We have compared the results of the implementation with the OLS method. Results showed that R2 for the OLS method is 0.0654 and for the proposed method is 0.9196, which implies the proposed GWR is a better method for regression on rural highway crashes.

  7. Model for Managing Corporate Social Responsibility

    Tamara Vlastelica Bakić

    2015-05-01

    Full Text Available As a cross-functional process in the organization, effective management of corporate social responsibility requires a definition of strategies, programs and an action plan that structures this process from its initiation to the measurement of end effects. Academic literature on the topic of corporate social responsibility is mainly focused on the exploration of the business case for the concept, i.e., the determination of the effects of social responsibility on individual aspects of the business. Research so far has not formalized a management concept in this domain to a satisfactory extent; it is for this reason that this paper attempts to present one model for managing corporate social responsibility. The model represents a contribution to the theory and business practice of corporate social responsibility, as it offers a strategic framework for the systematic planning, implementation and evaluation of socially responsible activities and programs.

  8. Dense Medium Machine Processing Method for Palm Kernel/ Shell ...

    ADOWIE PERE

    Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.

  9. Mitigation of artifacts in rtm with migration kernel decomposition

    Zhan, Ge; Schuster, Gerard T.

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently

  10. The conceptual model of organization social responsibility

    LUO, Lan; WEI, Jingfu

    2014-01-01

    With the development of research on CSR, people have increasingly noticed that corporations should take responsibility. Should other organizations besides corporations also take responsibilities beyond their field? This paper puts forward the concept of organization social responsibility (OSR) on the basis of the concept of corporate social responsibility and other theories. Conceptual models are then built on this concept, introducing OSR from three angles: the types of organi...

  11. Hidden Markov Item Response Theory Models for Responses and Response Times.

    Molenaar, Dylan; Oberski, Daniel; Vermunt, Jeroen; De Boeck, Paul

    2016-01-01

    Current approaches to model responses and response times to psychometric tests solely focus on between-subject differences in speed and ability. Within subjects, speed and ability are assumed to be constants. Violations of this assumption are generally absorbed in the residual of the model. As a result, within-subject departures from the between-subject speed and ability level remain undetected. These departures may be of interest to the researcher as they reflect differences in the response processes adopted on the items of a test. In this article, we propose a dynamic approach for responses and response times based on hidden Markov modeling to account for within-subject differences in responses and response times. A simulation study is conducted to demonstrate acceptable parameter recovery and acceptable performance of various fit indices in distinguishing between different models. In addition, both a confirmatory and an exploratory application are presented to demonstrate the practical value of the modeling approach.
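
    The within-subject dynamics described above can be illustrated with a toy two-state hidden Markov model in which one state represents rapid guessing (fast responses, chance accuracy) and the other solution behaviour (slower responses, high accuracy). This is a hedged sketch with invented parameters, not the authors' estimation procedure; it computes only filtered state posteriors via the forward recursion:

```python
import numpy as np

def forward_posteriors(acc, logrt, A, pi, p_correct, mu, sigma):
    """Filtered state probabilities P(state_t | data up to t) for a
    two-state HMM with Bernoulli accuracy and Gaussian log-RT emissions."""
    def emit(s, a, t):
        p_a = p_correct[s] if a == 1 else 1.0 - p_correct[s]
        p_t = np.exp(-0.5 * ((t - mu[s]) / sigma[s]) ** 2) / (sigma[s] * np.sqrt(2 * np.pi))
        return p_a * p_t
    alpha = np.zeros((len(acc), 2))
    for i, (a, t) in enumerate(zip(acc, logrt)):
        prior = pi if i == 0 else alpha[i - 1] @ A   # predict next state
        alpha[i] = prior * np.array([emit(0, a, t), emit(1, a, t)])
        alpha[i] /= alpha[i].sum()                   # normalize (filtering)
    return alpha
```

    With well-separated states, fast incorrect responses load on the guessing state and slow correct responses on the solution state; in practice the parameters would be estimated, e.g. by EM.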

  12. PERI - auto-tuning memory-intensive kernels for multicore

    Williams, S; Carter, J; Oliker, L; Shalf, J; Yelick, K; Bailey, D; Datta, K

    2008-01-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to sparse matrix vector multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the high-performance computing literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4x improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
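
    The Roofline model referred to above bounds attainable performance by the lesser of peak compute throughput and the product of memory bandwidth and arithmetic intensity. A minimal sketch (the machine numbers below are illustrative, not measurements from the paper):

```python
def roofline_attainable_gflops(peak_gflops, peak_bw_gbs, arithmetic_intensity):
    """Roofline bound: attainable GFLOP/s is the lesser of peak compute
    and (memory bandwidth x arithmetic intensity in flops/byte)."""
    return min(peak_gflops, peak_bw_gbs * arithmetic_intensity)

# Hypothetical machine: 74.6 GFLOP/s peak, 21.3 GB/s stream bandwidth.
# An SpMV-like intensity (~0.25 flops/byte) is bandwidth-bound, while a
# kernel at 8 flops/byte hits the compute ceiling instead.
```

    The ridge point, peak_gflops / peak_bw_gbs, separates bandwidth-bound from compute-bound kernels on a given platform.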

  13. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.

  14. An Adaptive Genetic Association Test Using Double Kernel Machines.

    Zhan, Xiang; Epstein, Michael P; Ghosh, Debashis

    2015-10-01

    Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests considering all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within the pathway as well as test for the overall genetic pathway effect. This DKM procedure first uses the garrote kernel machines (GKM) test for the purposes of subset selection and then the least squares kernel machine (LSKM) test for testing the effect of the subset of genes. An appealing feature of the kernel machine framework is that it can provide a flexible and unified method for multi-dimensional modeling of the genetic pathway effect allowing for both parametric and nonparametric components. This DKM approach is illustrated with application to simulated data as well as to data from a neuroimaging genetics study.

  15. Sentiment classification with interpolated information diffusion kernels

    Raaijmakers, S.

    2007-01-01

    Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of

  16. Evolution kernel for the Dirac field

    Baaquie, B.E.

    1982-06-01

    The evolution kernel for the free Dirac field is calculated using the Wilson lattice fermions. We discuss the difficulties due to which this calculation has not been previously performed in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)

  17. Improving the Bandwidth Selection in Kernel Equating

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
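
    Silverman's rule of thumb, which the abstract proposes adapting to kernel equating, has the standard density-estimation form h = 0.9 · min(σ̂, IQR/1.34) · n^(−1/5). A sketch of that generic rule (not the authors' equating-specific variant):

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a Gaussian kernel density
    estimate: h = 0.9 * min(sample std, IQR/1.34) * n**(-1/5)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    sigma = x.std(ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))   # robust spread estimate
    return 0.9 * min(sigma, iqr / 1.34) * n ** (-0.2)
```

    Unlike penalty-function minimization, this closed-form rule needs no iterative search, which is the practical appeal noted in the abstract.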

  18. Kernel Korner : The Linux keyboard driver

    Brouwer, A.E.

    1995-01-01

    Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the

  19. Response moderation models for conditional dependence between response time and response accuracy.

    Bolsinova, Maria; Tijmstra, Jesper; Molenaar, Dylan

    2017-05-01

    It is becoming more feasible and common to register response times in the application of psychometric tests. Researchers thus have the opportunity to jointly model response accuracy and response time, which provides users with more relevant information. The most common choice is to use the hierarchical model (van der Linden, 2007, Psychometrika, 72, 287), which assumes conditional independence between response time and accuracy, given a person's speed and ability. However, this assumption may be violated in practice if, for example, persons vary their speed or differ in their response strategies, leading to conditional dependence between response time and accuracy and confounding measurement. We propose six nested hierarchical models for response time and accuracy that allow for conditional dependence, and discuss their relationship to existing models. Unlike existing approaches, the proposed hierarchical models allow for various forms of conditional dependence in the model and allow the effect of continuous residual response time on response accuracy to be item-specific, person-specific, or both. Estimation procedures for the models are proposed, as well as two information criteria that can be used for model selection. Parameter recovery and usefulness of the information criteria are investigated using simulation, indicating that the procedure works well and is likely to select the appropriate model. Two empirical applications are discussed to illustrate the different types of conditional dependence that may occur in practice and how these can be captured using the proposed hierarchical models. © 2016 The British Psychological Society.

  20. Overview of real-time kernels at the Superconducting Super Collider Laboratory

    Low, K.; Acharya, S.; Allen, M.; Faught, E.; Haenni, D.; Kalbfleisch, C.

    1991-05-01

    The Superconducting Super Collider Laboratory (SSCL) will have many subsystems that will require real-time microprocessor control. Examples of such sub-systems requiring real-time controls are power supply ramp generators and quench protection monitors for the superconducting magnets. We plan on using a commercial multitasking real-time kernel in these systems. These kernels must perform in a consistent, reliable and efficient manner. Actual performance measurements have been conducted on four different kernels, all running on the same hardware platform. The measurements fall into two categories. Throughput measurements covering the ''non-real-time'' aspects of the kernel include process creation/termination times, interprocess communication facilities involving messages, semaphores and shared memory and memory allocation/deallocation. Measurements concentrating on real-time response are context switch times, interrupt latencies and interrupt task response. 6 refs., 2 tabs

  1. Overview of real-time kernels at the Superconducting Super Collider Laboratory

    Low, K.; Acharya, S.; Allen, M.; Faught, E.; Haenni, D.; Kalbfleisch, C.

    1991-01-01

    The Superconducting Super Collider Laboratory (SSCL) will have many subsystems that will require real-time microprocessor control. Examples of such Sub-systems requiring real-time controls are power supply ramp generators and quench protection monitors for the superconducting magnets. The authors plan on using a commercial multitasking real-time kernel in these systems. These kernels must perform in a consistent, reliable and efficient manner. Actual performance measurements have been conducted on four different kernels, all running on the same hardware platform. The measurements fall into two categories. Throughput measurements covering the 'non-real-time' aspects of the kernel include process creation/termination times, interprocess communication facilities involving messages, semaphores and shared memory and memory allocation/deallocation. Measurements concentrating on real-time response are context switch times, interrupt latencies and interrupt task response

  2. Adaptive kernel regression for freehand 3D ultrasound reconstruction

    Alshalalfah, Abdel-Latif; Daoud, Mohammad I.; Al-Najar, Mahasen

    2017-03-01

    Freehand three-dimensional (3D) ultrasound imaging enables low-cost and flexible 3D scanning of arbitrary-shaped organs, where the operator can freely move a two-dimensional (2D) ultrasound probe to acquire a sequence of tracked cross-sectional images of the anatomy. Often, the acquired 2D ultrasound images are irregularly and sparsely distributed in the 3D space. Several 3D reconstruction algorithms have been proposed to synthesize 3D ultrasound volumes based on the acquired 2D images. A challenging task during the reconstruction process is to preserve the texture patterns in the synthesized volume and ensure that all gaps in the volume are correctly filled. This paper presents an adaptive kernel regression algorithm that can effectively reconstruct high-quality freehand 3D ultrasound volumes. The algorithm employs a kernel regression model that enables nonparametric interpolation of the voxel gray-level values. The kernel size of the regression model is adaptively adjusted based on the characteristics of the voxel that is being interpolated. In particular, when the algorithm is employed to interpolate a voxel located in a region with dense ultrasound data samples, the size of the kernel is reduced to preserve the texture patterns. On the other hand, the size of the kernel is increased in areas that include large gaps to enable effective gap filling. The performance of the proposed algorithm was compared with seven previous interpolation approaches by synthesizing freehand 3D ultrasound volumes of a benign breast tumor. The experimental results show that the proposed algorithm outperforms the other interpolation approaches.
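
    The adaptive kernel idea above, narrow kernels where ultrasound samples are dense and wide kernels over gaps, can be sketched with a Nadaraya-Watson estimator whose bandwidth is set per query from the distance to the k-th nearest sample. This is a simplified 2-D analogue with invented function names, not the paper's reconstruction algorithm:

```python
import numpy as np

def adaptive_nw_interpolate(query, points, values, k=8):
    """Kernel-regression interpolation at `query` with adaptive bandwidth:
    the Gaussian kernel widens to the distance of the k-th nearest sample,
    so dense regions use a narrow kernel (preserving detail) and sparse
    regions a wide one (filling gaps)."""
    d = np.linalg.norm(points - query, axis=1)
    h = np.partition(d, k - 1)[k - 1] + 1e-12   # adaptive bandwidth
    w = np.exp(-0.5 * (d / h) ** 2)
    return float(w @ values / w.sum())
```

    In the paper's 3-D setting the same idea applies per voxel, with the bandwidth adapted to the local density of the tracked 2-D slices.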

  3. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    Zhu, Xiaofeng; Xie, Qing; Zhu, Yonghua; Liu, Xingyi; Zhang, Shichao

    2015-01-01

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) representing with multiple

  4. Modeling silicon diode energy response factors for use in therapeutic photon beams.

    Eklund, Karin; Ahnesjö, Anders

    2009-10-21

    Silicon diodes have good spatial resolution, which makes them advantageous over ionization chambers for dosimetry in fields with high dose gradients. However, silicon diodes overrespond to low-energy photons, that are more abundant in scatter which increase with large fields and larger depths. We present a cavity-theory-based model for a general response function for silicon detectors at arbitrary positions within photon fields. The model uses photon and electron spectra calculated from fluence pencil kernels. The incident photons are treated according to their energy through a bipartition of the primary beam photon spectrum into low- and high-energy components. Primary electrons from the high-energy component are treated according to Spencer-Attix cavity theory. Low-energy primary photons together with all scattered photons are treated according to large cavity theory supplemented with an energy-dependent factor K(E) to compensate for energy variations in the electron equilibrium. The depth variation of the response for an unshielded silicon detector has been calculated for 5 x 5 cm(2), 10 x 10 cm(2) and 20 x 20 cm(2) fields in 6 and 15 MV beams and compared with measurements showing that our model calculates response factors with deviations less than 0.6%. An alternative method is also proposed, where we show that one can use a correlation with the scatter factor to determine the detector response of silicon diodes with an error of less than 3% in 6 MV and 15 MV photon beams.

  5. Modeling silicon diode energy response factors for use in therapeutic photon beams

    Eklund, Karin; Ahnesjoe, Anders

    2009-01-01

    Silicon diodes have good spatial resolution, which makes them advantageous over ionization chambers for dosimetry in fields with high dose gradients. However, silicon diodes overrespond to low-energy photons, that are more abundant in scatter which increase with large fields and larger depths. We present a cavity-theory-based model for a general response function for silicon detectors at arbitrary positions within photon fields. The model uses photon and electron spectra calculated from fluence pencil kernels. The incident photons are treated according to their energy through a bipartition of the primary beam photon spectrum into low- and high-energy components. Primary electrons from the high-energy component are treated according to Spencer-Attix cavity theory. Low-energy primary photons together with all scattered photons are treated according to large cavity theory supplemented with an energy-dependent factor K(E) to compensate for energy variations in the electron equilibrium. The depth variation of the response for an unshielded silicon detector has been calculated for 5 x 5 cm², 10 x 10 cm² and 20 x 20 cm² fields in 6 and 15 MV beams and compared with measurements showing that our model calculates response factors with deviations less than 0.6%. An alternative method is also proposed, where we show that one can use a correlation with the scatter factor to determine the detector response of silicon diodes with an error of less than 3% in 6 MV and 15 MV photon beams.

  6. Multiscale Support Vector Learning With Projection Operator Wavelet Kernel for Nonlinear Dynamical System Identification.

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2016-02-03

    A giant leap has been made in the past couple of decades with the introduction of kernel-based learning as a mainstay for designing effective nonlinear computational learning algorithms. In view of the geometric interpretation of conditional expectation and the ubiquity of multiscale characteristics in highly complex nonlinear dynamic systems [1]-[3], this paper presents a new orthogonal projection operator wavelet kernel, aiming at developing an efficient computational learning approach for nonlinear dynamical system identification. In the framework of multiresolution analysis, the proposed projection operator wavelet kernel can fulfill the multiscale, multidimensional learning to estimate complex dependencies. The special advantage of the projection operator wavelet kernel developed in this paper lies in the fact that it has a closed-form expression, which greatly facilitates its application in kernel learning. To the best of our knowledge, it is the first closed-form orthogonal projection wavelet kernel reported in the literature. It provides a link between grid-based wavelets and mesh-free kernel-based methods. Simulation studies for identifying the parallel models of two benchmark nonlinear dynamical systems confirm its superiority in model accuracy and sparsity.
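
    For comparison, a commonly used translation-invariant wavelet kernel (the product form of Zhang et al., with mother wavelet h(t) = cos(1.75t)·exp(−t²/2)) can be written down directly. Note this is not the paper's closed-form projection operator wavelet kernel, which is a different construction:

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """Gram matrix of the product-form wavelet kernel
    K(x, y) = prod_i h((x_i - y_i) / a),  h(t) = cos(1.75 t) * exp(-t**2 / 2),
    for row-wise sample matrices X (n, d) and Y (m, d)."""
    diff = (X[:, None, :] - Y[None, :, :]) / a      # pairwise differences
    return np.prod(np.cos(1.75 * diff) * np.exp(-diff ** 2 / 2.0), axis=-1)
```

    The resulting Gram matrix can be plugged into any kernel machine (e.g. an SVM with a precomputed kernel) for multiscale function estimation.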

  7. Pyrcca: regularized kernel canonical correlation analysis in Python and its applications to neuroimaging

    Natalia Y Bilenko

    2016-11-01

    Full Text Available In this article we introduce Pyrcca, an open-source Python package for performing canonical correlation analysis (CCA). CCA is a multivariate analysis method for identifying relationships between sets of variables. Pyrcca supports CCA with or without regularization, and with or without linear, polynomial, or Gaussian kernelization. We first use an abstract example to describe Pyrcca functionality. We then demonstrate how Pyrcca can be used to analyze neuroimaging data. Specifically, we use Pyrcca to implement cross-subject comparison in a natural movie functional magnetic resonance imaging (fMRI) experiment by finding a data-driven set of functional response patterns that are similar across individuals. We validate this cross-subject comparison method in Pyrcca by predicting responses to novel natural movies across subjects. Finally, we show how Pyrcca can reveal retinotopic organization in brain responses to natural movies without the need for an explicit model.

  8. Pyrcca: Regularized Kernel Canonical Correlation Analysis in Python and Its Applications to Neuroimaging.

    Bilenko, Natalia Y; Gallant, Jack L

    2016-01-01

    In this article we introduce Pyrcca, an open-source Python package for performing canonical correlation analysis (CCA). CCA is a multivariate analysis method for identifying relationships between sets of variables. Pyrcca supports CCA with or without regularization, and with or without linear, polynomial, or Gaussian kernelization. We first use an abstract example to describe Pyrcca functionality. We then demonstrate how Pyrcca can be used to analyze neuroimaging data. Specifically, we use Pyrcca to implement cross-subject comparison in a natural movie functional magnetic resonance imaging (fMRI) experiment by finding a data-driven set of functional response patterns that are similar across individuals. We validate this cross-subject comparison method in Pyrcca by predicting responses to novel natural movies across subjects. Finally, we show how Pyrcca can reveal retinotopic organization in brain responses to natural movies without the need for an explicit model.
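
    The canonical correlations that CCA (and hence Pyrcca) computes are the singular values of the whitened cross-covariance matrix. A minimal regularized-CCA sketch in plain NumPy (this is the standard algebra, not Pyrcca's API, whose interface is not shown here):

```python
import numpy as np

def cca_first_correlation(X, Y, reg=1e-6):
    """First canonical correlation between X (n, p) and Y (n, q),
    rows = samples, with a small ridge term as in regularized CCA."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # If inv(Cxx) = L @ L.T (Cholesky), then L.T @ Cxx @ L = I, so L whitens X.
    Lx = np.linalg.cholesky(np.linalg.inv(Cxx))
    Ly = np.linalg.cholesky(np.linalg.inv(Cyy))
    # Singular values of the whitened cross-covariance = canonical correlations.
    return np.linalg.svd(Lx.T @ Cxy @ Ly, compute_uv=False)[0]
```

    Kernelized CCA replaces X and Y by feature-space Gram matrices; the regularization term then also controls overfitting in the high-dimensional feature space.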

  9. Modeling the frequency response of photovoltaic inverters

    Ernauli Christine Aprilia, A.; Cuk, V.; Cobben, J.F.G.; Ribeiro, P.F.; Kling, W.L.

    2012-01-01

    The increased presence of photovoltaic (PV) systems inevitably affects the power quality in the grid. This new reality demands grid power quality studies involving PV inverters. This paper proposes several frequency response models in the form of equivalent circuits. Models are based on laboratory

  10. Corporate Social Responsibility Agreements Model for Community ...

    Corporate Social Responsibility Agreements Model for Community ... their host communities with concomitant adverse effect on mining operations. ... sustainable community development an integral part of the mining business. This paper presents the evolutionary strategic models, with differing principles and action plans, ...

  11. Experimental data and dose-response models

    Ullrich, R.L.

    1985-01-01

    Dose-response relationships for radiation carcinogenesis have been of interest to biologists, modelers, and statisticians for many years. Despite this interest there are few instances in which there are sufficient experimental data to allow the fitting of various dose-response models. In those experimental systems for which data are available, the dose-response curves for tumor induction cannot be described by a single model. Dose-response models which have been observed following acute exposures to gamma rays include threshold, quadratic, and linear models. Data on sex, age, and environmental influences of dose suggest a strong role of host factors on the dose response. With decreasing dose rate the effectiveness of gamma ray irradiation tends to decrease in essentially every instance. In those cases in which the high dose rate dose response could be described by a quadratic model, the effect of dose rate is consistent with predictions based on radiation effects on the induction of initial events. Whether the underlying reason for the observed dose-rate effect lies in effects on the induction of initial events or in effects on the subsequent steps in the carcinogenic process is unknown. Information on the dose response for tumor induction for high LET (linear energy transfer) radiations such as neutrons is even more limited. The observed dose and dose rate data for tumor induction following neutron exposure are complex and do not appear to be consistent with predictions based on models for the induction of initial events

  12. On defining and computing fuzzy kernels on L-valued simple graphs

    Bisdorff, R.; Roubens, M.

    1996-01-01

    In this paper we introduce the concept of fuzzy kernels defined on valued-finite simple graphs in a sense close to fuzzy preference modelling. First we recall the classic concept of kernel associated with a crisp binary relation defined on a finite set. In a second part, we introduce fuzzy binary relations. In a third part, we generalize the crisp kernel concept to such fuzzy binary relations and in a last part, we present an application to fuzzy choice functions on fuzzy outranking relations

  13. A kernel for open source drug discovery in tropical diseases.

    Ortí, Leticia; Carbajo, Rodrigo J; Pieper, Ursula; Eswar, Narayanan; Maurer, Stephen M; Rai, Arti K; Taylor, Ginger; Todd, Matthew H; Pineda-Lucena, Antonio; Sali, Andrej; Marti-Renom, Marc A

    2009-01-01

    Conventional patent-based drug development incentives work badly for the developing world, where commercial markets are usually small to non-existent. For this reason, the past decade has seen extensive experimentation with alternative R&D institutions ranging from private-public partnerships to development prizes. Despite extensive discussion, however, one of the most promising avenues-open source drug discovery-has remained elusive. We argue that the stumbling block has been the absence of a critical mass of preexisting work that volunteers can improve through a series of granular contributions. Historically, open source software collaborations have almost never succeeded without such "kernels". Here, we use a computational pipeline for: (i) comparative structure modeling of target proteins, (ii) predicting the localization of ligand binding sites on their surfaces, and (iii) assessing the similarity of the predicted ligands to known drugs. Our kernel currently contains 143 and 297 protein targets from ten pathogen genomes that are predicted to bind a known drug or a molecule similar to a known drug, respectively. The kernel provides a source of potential drug targets and drug candidates around which an online open source community can nucleate. Using NMR spectroscopy, we have experimentally tested our predictions for two of these targets, confirming one and invalidating the other. The TDI kernel, which is being offered under the Creative Commons attribution share-alike license for free and unrestricted use, can be accessed on the World Wide Web at http://www.tropicaldisease.org. We hope that the kernel will facilitate collaborative efforts towards the discovery of new drugs against parasites that cause tropical diseases.

  14. Introducing etch kernels for efficient pattern sampling and etch bias prediction

    Weisbuch, François; Lutich, Andrey; Schatz, Jirka

    2018-01-01

    Successful patterning requires good control of the photolithography and etch processes. While compact litho models, mainly based on rigorous physics, can predict very well the contours printed in photoresist, pure empirical etch models are less accurate and more unstable. Compact etch models are based on geometrical kernels to compute the litho-etch biases that measure the distance between litho and etch contours. The definition of the kernels, as well as the choice of calibration patterns, is critical to get a robust etch model. This work proposes to define a set of independent and anisotropic etch kernels ("internal," "external," "curvature," "Gaussian," "z_profile") designed to represent the finest details of the resist geometry to characterize precisely the etch bias at any point along a resist contour. By evaluating the etch kernels on various structures, it is possible to map their etch signatures in a multidimensional space and analyze them to find an optimal sampling of structures. The etch kernels evaluated on these structures were combined with experimental etch bias derived from scanning electron microscope contours to train artificial neural networks to predict etch bias. The method applied to contact and line/space layers shows an improvement in etch model prediction accuracy over the standard etch model. This work emphasizes the importance of the etch kernel definition to characterize and predict complex etch effects.

  15. Dose calculation methods in photon beam therapy using energy deposition kernels

    Ahnesjoe, A.

    1991-01-01

    The problem of calculating accurate dose distributions in treatment planning of megavoltage photon radiation therapy has been studied. New dose calculation algorithms using energy deposition kernels have been developed. The kernels describe the transfer of energy by secondary particles from a primary photon interaction site to its surroundings. Monte Carlo simulations of particle transport have been used for derivation of kernels for primary photon energies from 0.1 MeV to 50 MeV. The trade-off between accuracy and calculational speed has been addressed by the development of two algorithms; one point oriented with low computational overhead for interactive use and one for fast and accurate calculation of dose distributions in a 3-dimensional lattice. The latter algorithm models secondary particle transport in heterogeneous tissue by scaling energy deposition kernels with the electron density of the tissue. The accuracy of the methods has been tested using full Monte Carlo simulations for different geometries, and found to be superior to conventional algorithms based on scaling of broad beam dose distributions. Methods have also been developed for characterization of clinical photon beams in entities appropriate for kernel based calculation models. By approximating the spectrum as laterally invariant, an effective spectrum and dose distribution for contaminating charged particles are derived from depth dose distributions measured in water, using analytical constraints. The spectrum is used to calculate kernels by superposition of monoenergetic kernels. The lateral energy fluence distribution is determined by deconvolving measured lateral dose distributions by a corresponding pencil beam kernel. Dose distributions for contaminating photons are described using two different methods, one for estimation of the dose outside of the collimated beam, and the other for calibration of output factors derived from kernel based dose calculations. (au)
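
    In a homogeneous medium, the kernel-based dose calculation described above reduces to a convolution of the terma distribution (total energy released per unit mass) with an energy deposition kernel. A 1-D toy sketch with illustrative profiles, not the thesis's density-scaled 3-D algorithm:

```python
import numpy as np

def dose_by_kernel_superposition(terma, kernel):
    """Convolve a 1-D terma profile with a normalized energy-deposition
    kernel -- the homogeneous-medium core of convolution/superposition
    dose calculation. Normalizing the kernel conserves energy."""
    kernel = np.asarray(kernel, dtype=float)
    kernel = kernel / kernel.sum()
    return np.convolve(terma, kernel, mode="same")
```

    Heterogeneity corrections then scale the kernel along each ray by the radiological (electron-density-weighted) path length, as the abstract describes for the 3-D lattice algorithm.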

  16. The Conserved and Unique Genetic Architecture of Kernel Size and Weight in Maize and Rice.

    Liu, Jie; Huang, Juan; Guo, Huan; Lan, Liu; Wang, Hongze; Xu, Yuancheng; Yang, Xiaohong; Li, Wenqiang; Tong, Hao; Xiao, Yingjie; Pan, Qingchun; Qiao, Feng; Raihan, Mohammad Sharif; Liu, Haijun; Zhang, Xuehai; Yang, Ning; Wang, Xiaqing; Deng, Min; Jin, Minliang; Zhao, Lijun; Luo, Xin; Zhou, Yang; Li, Xiang; Zhan, Wei; Liu, Nannan; Wang, Hong; Chen, Gengshen; Li, Qing; Yan, Jianbing

    2017-10-01

    Maize (Zea mays) is a major staple crop. Maize kernel size and weight are important contributors to its yield. Here, we measured kernel length, kernel width, kernel thickness, hundred kernel weight, and kernel test weight in 10 recombinant inbred line populations and dissected their genetic architecture using three statistical models. In total, 729 quantitative trait loci (QTLs) were identified, many of which were identified in all three models, including 22 major QTLs that each can explain more than 10% of phenotypic variation. To provide candidate genes for these QTLs, we identified 30 maize genes that are orthologs of 18 rice (Oryza sativa) genes reported to affect rice seed size or weight. Interestingly, 24 of these 30 genes are located in the identified QTLs or within 1 Mb of the significant single-nucleotide polymorphisms. We further confirmed the effects of five genes on maize kernel size/weight in an independent association mapping panel with 540 lines by candidate gene association analysis. Lastly, the function of ZmINCW1, a homolog of rice GRAIN INCOMPLETE FILLING1 that affects seed size and weight, was characterized in detail. ZmINCW1 is close to QTL peaks for kernel size/weight (less than 1 Mb) and contains significant single-nucleotide polymorphisms affecting kernel size/weight in the association panel. Overexpression of this gene can rescue the reduced weight of the Arabidopsis (Arabidopsis thaliana) homozygous mutant line in the AtcwINV2 gene (Arabidopsis ortholog of ZmINCW1). These results indicate that the molecular mechanisms affecting seed development are conserved in maize, rice, and possibly Arabidopsis. © 2017 American Society of Plant Biologists. All Rights Reserved.

  17. Optimal kernel shape and bandwidth for atomistic support of continuum stress

    Ulz, Manfred H; Moran, Sean J

    2013-01-01

    The treatment of atomistic scale interactions via molecular dynamics simulations has recently found favour for multiscale modelling within engineering. The estimation of stress at a continuum point on the atomistic scale requires a pre-defined kernel function. This kernel function derives the stress at a continuum point by averaging the contribution from atoms within a region surrounding the continuum point. This averaging volume, and therefore the associated stress at a continuum point, is highly dependent on the bandwidth and shape of the kernel. In this paper we propose an effective and entirely data-driven strategy for simultaneously computing the optimal shape and bandwidth for the kernel. We thoroughly evaluate our proposed approach on copper using three classical elasticity problems. Our evaluation yields three key findings: firstly, our technique can provide a physically meaningful estimation of kernel bandwidth; secondly, we show that a uniform kernel is preferred, thereby justifying the default selection of this kernel shape in future work; and thirdly, we can reliably estimate both of these attributes in a data-driven manner, obtaining values that lead to an accurate estimation of the stress at a continuum point. (paper)
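
    The averaging the paper describes (stress at a continuum point as a kernel-weighted average of nearby atomic contributions) can be sketched in a few lines. This is a 1-D illustration with hypothetical positions and stresses; the paper's data-driven bandwidth selection is not reproduced.

```python
import numpy as np

def uniform_kernel(r, h):
    """Uniform (top-hat) kernel: constant inside the bandwidth h, zero outside."""
    return np.where(np.abs(r) <= h, 1.0 / (2.0 * h), 0.0)

def kernel_averaged_stress(x0, atom_positions, atom_stresses, h):
    """Estimate stress at continuum point x0 as a kernel-weighted average
    of per-atom stress contributions (1-D sketch)."""
    w = uniform_kernel(atom_positions - x0, h)
    if w.sum() == 0.0:
        return 0.0  # no atoms inside the averaging volume
    return float(np.sum(w * atom_stresses) / np.sum(w))

# Atoms 1, 2 and 3 fall inside the bandwidth around x0 = 2, so the
# uniform-kernel estimate is their arithmetic mean: 3.0.
stress = kernel_averaged_stress(
    2.0,
    np.array([0.0, 1.0, 2.0, 3.0, 4.0]),
    np.array([1.0, 2.0, 3.0, 4.0, 5.0]),
    h=1.1,
)
```

    With the uniform kernel the paper favours, the estimate reduces to the arithmetic mean of the atomic contributions inside the bandwidth, which is why the bandwidth choice dominates the result.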

  18. Anisotropic hydrodynamics with a scalar collisional kernel

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λϕ4 theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  19. Multiscale modeling of mucosal immune responses

    2015-01-01

    Computational modeling techniques are playing increasingly important roles in advancing a systems-level mechanistic understanding of biological processes. Computer simulations guide and underpin experimental and clinical efforts. This study presents the ENteric Immune Simulator (ENISI), a multiscale tool for modeling mucosal immune responses. ENISI's modeling environment can simulate in silico experiments from molecular signaling pathways to tissue-level events such as tissue lesion formation. ENISI's architecture integrates multiple modeling technologies, including ABM (agent-based modeling), ODE (ordinary differential equations), SDE (stochastic differential equations), and PDE (partial differential equations). This paper focuses on the implementation and developmental challenges of ENISI. A multiscale model of mucosal immune responses during colonic inflammation, including CD4+ T cell differentiation and tissue-level cell-cell interactions, was developed to illustrate the capabilities, power and scope of ENISI MSM. Background: Computational techniques are becoming increasingly powerful, and modeling tools for biological systems are in growing demand. Biological systems are inherently multiscale, from molecules to tissues and from nanoseconds to a lifespan of several years or decades. ENISI MSM integrates multiple modeling technologies to understand immunological processes from signaling pathways within cells to lesion formation at the tissue level. This paper examines and summarizes the technical details of ENISI, from its initial version to its latest cutting-edge implementation. Implementation: An object-oriented programming approach is adopted to develop a suite of tools based on ENISI. Multiple modeling technologies are integrated to visualize tissues, cells as well as proteins; furthermore, performance matching between the scales is addressed. Conclusion: We used ENISI MSM for developing predictive multiscale models of the mucosal immune system during gut

  20. Multiscale modeling of mucosal immune responses.

    Mei, Yongguo; Abedi, Vida; Carbo, Adria; Zhang, Xiaoying; Lu, Pinyi; Philipson, Casandra; Hontecillas, Raquel; Hoops, Stefan; Liles, Nathan; Bassaganya-Riera, Josep

    2015-01-01

    Computational techniques are becoming increasingly powerful, and modeling tools for biological systems are in growing demand. Biological systems are inherently multiscale, from molecules to tissues and from nanoseconds to a lifespan of several years or decades. ENISI MSM integrates multiple modeling technologies to understand immunological processes from signaling pathways within cells to lesion formation at the tissue level. This paper examines and summarizes the technical details of ENISI, from its initial version to its latest cutting-edge implementation. An object-oriented programming approach is adopted to develop a suite of tools based on ENISI. Multiple modeling technologies are integrated to visualize tissues, cells as well as proteins; furthermore, performance matching between the scales is addressed. We used ENISI MSM for developing predictive multiscale models of the mucosal immune system during gut inflammation. Our modeling predictions dissect the mechanisms by which effector CD4+ T cell responses contribute to tissue damage in the gut mucosa following immune dysregulation. Computational modeling techniques are playing increasingly important roles in advancing a systems-level mechanistic understanding of biological processes. Computer simulations guide and underpin experimental and clinical efforts. This study presents the ENteric Immune Simulator (ENISI), a multiscale tool for modeling mucosal immune responses. ENISI's modeling environment can simulate in silico experiments from molecular signaling pathways to tissue-level events such as tissue lesion formation. ENISI's architecture integrates multiple modeling technologies, including ABM (agent-based modeling), ODE (ordinary differential equations), SDE (stochastic differential equations), and PDE (partial differential equations). This paper focuses on the implementation and developmental challenges of ENISI. A multiscale model of mucosal immune responses during colonic inflammation, including CD4+ T

  1. Single toxin dose-response models revisited

    Demidenko, Eugene, E-mail: eugened@dartmouth.edu [Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH03756 (United States); Glaholt, SP, E-mail: sglaholt@indiana.edu [Indiana University, School of Public & Environmental Affairs, Bloomington, IN47405 (United States); Department of Biological Sciences, Dartmouth College, Hanover, NH03755 (United States); Kyker-Snowman, E, E-mail: ek2002@wildcats.unh.edu [Department of Natural Resources and the Environment, University of New Hampshire, Durham, NH03824 (United States); Shaw, JR, E-mail: joeshaw@indiana.edu [Indiana University, School of Public & Environmental Affairs, Bloomington, IN47405 (United States); Chen, CY, E-mail: Celia.Y.Chen@dartmouth.edu [Department of Biological Sciences, Dartmouth College, Hanover, NH03755 (United States)

    2017-01-01

    The goal of this paper is to offer a rigorous analysis of the sigmoid-shape single toxin dose-response relationship. The toxin efficacy function is introduced and four special points, including maximum toxin efficacy and inflection points, on the dose-response curve are defined. The special points define three phases of the toxin effect on mortality: (1) toxin concentrations smaller than the first inflection point or (2) larger than the second inflection point imply a low mortality rate, and (3) concentrations between the first and second inflection points imply a high mortality rate. Probabilistic interpretation and mathematical analysis for each of the four models, Hill, logit, probit, and Weibull, are provided. Two general model extensions are introduced: (1) the multi-target hit model, which accounts for the existence of several vital receptors affected by the toxin, and (2) a model with nonzero mortality at zero concentration to account for natural mortality. Special attention is given to statistical estimation in the framework of the generalized linear model with the binomial dependent variable as the mortality count in each experiment, contrary to the widespread nonlinear regression treating the mortality rate as a continuous variable. The models are illustrated using standard EPA Daphnia acute (48 h) toxicity tests with mortality as a function of NiCl or CuSO4 toxin. - Highlights: • The paper offers a rigorous study of a sigmoid dose-response relationship. • The concentration with the highest mortality rate is rigorously defined. • A table with four special points for five mortality curves is presented. • Two new sigmoid dose-response models have been introduced. • The generalized linear model is advocated for estimation of the sigmoid dose-response relationship.
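
    As a rough illustration of the advocated GLM approach, the binomial logit model can be fit by maximizing the binomial likelihood directly. This is a minimal sketch: the dose-mortality numbers below are invented for illustration and are not the paper's EPA Daphnia data.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical dose-mortality data: concentration, number exposed, number dead.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n    = np.array([20, 20, 20, 20, 20])
dead = np.array([1, 4, 10, 16, 19])

def neg_log_lik(theta):
    """Binomial negative log-likelihood for a logit dose-response model,
    linear in log-concentration: p = 1 / (1 + exp(-(a + b*log(c))))."""
    a, b = theta
    p = 1.0 / (1.0 + np.exp(-(a + b * np.log(conc))))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)  # guard against log(0)
    return -np.sum(dead * np.log(p) + (n - dead) * np.log(1.0 - p))

res = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = res.x
lc50 = np.exp(-a_hat / b_hat)  # concentration at 50% mortality
```

    Fitting mortality counts with a binomial likelihood, rather than regressing the mortality rate as a continuous variable, is exactly the distinction the paper emphasizes.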

  2. Autonomous journaling response using data model LUTS

    Jaenisch, Holger; Handley, James; Albritton, Nathaniel; Whitener, David; Burnett, Randel; Caspers, Robert; Moren, Stephen; Alexander, Thomas; Maddox, William, III; Albritton, William, Jr.

    2009-04-01

    Matching journal entries to appropriate context responses can be a daunting problem, especially when there are no salient keyword matches between the entry and the proposed library of appropriate responses. We examine a real-world application for matching interactive journaling requests for guidance to an a priori established archive of sufficient multimedia responses. We show the analysis required to enable a Data Model based algorithm to group journaling entries according to intrinsic context information and type. We demonstrate a new lookup table (LUT) classifier that exploits all available data in LUT form.

  3. Adiabatic-connection fluctuation-dissipation DFT for the structural properties of solids - The renormalized ALDA and electron gas kernels

    Patrick, Christopher E.; Thygesen, Kristian Sommer

    2015-01-01

    the atomization energy of the H2 molecule. We compare the results calculated with different kernels to those obtained from the random-phase approximation (RPA) and to experimental measurements. We demonstrate that the model kernels correct the RPA's tendency to overestimate the magnitude of the correlation energy...

  4. Migration of the ThO2 kernels under the influence of a temperature gradient

    Smith, C.L.

    1977-01-01

    BISO-coated ThO2 fertile fuel kernels will migrate up the thermal gradients imposed across coated particles during high-temperature gas-cooled reactor (HTGR) operation. Thorium dioxide kernel migration has been studied as a function of temperature (1290 to 1705°C; 1563 to 1978 K) and ThO2 kernel burnup (0.9 to 5.8 percent FIMA) in out-of-pile postirradiation thermal gradient heating experiments. The studies were conducted to obtain descriptions of migration rates that will be used in core design studies to evaluate the impact of ThO2 migration on fertile fuel performance in an operating HTGR and to define characteristics needed by any comprehensive model describing ThO2 kernel migration. The kinetics data generated in these postirradiation studies are consistent with in-pile data collected by investigators at Oak Ridge National Laboratory, which supports use of the more precise postirradiation heating results in HTGR core design studies. Observations of intergranular carbon deposits on the cool side of migrating kernels support the assumption that the kinetics of kernel migration are controlled by solid-state diffusion within irradiated ThO2 kernels. The migration is characterized by a period of no migration (incubation period), followed by migration at the equilibrium rate for ThO2. The incubation period decreases with increasing temperature and kernel burnup. The improved understanding of the kinetics of ThO2 kernel migration provided by this work will contribute to an optimization of HTGR core design and an increased confidence in fuel performance predictions.

  5. Migration of ThO2 kernels under the influence of a temperature gradient

    Smith, C.L.

    1976-11-01

    BISO coated ThO2 fertile fuel kernels will migrate up the thermal gradients imposed across coated particles during HTGR operation. Thorium dioxide kernel migration has been studied as a function of temperature (1300 to 1700°C) and ThO2 kernel burnup (0.9 to 5.8 percent FIMA) in out-of-pile, postirradiation thermal gradient heating experiments. The studies were conducted to obtain descriptions of migration rates that will be used in core design studies to evaluate the impact of ThO2 migration on fertile fuel performance in an operating HTGR and to define characteristics needed by any comprehensive model describing ThO2 kernel migration. The kinetics data generated in these postirradiation studies are consistent with in-pile data collected by investigators at Oak Ridge National Laboratory, which supports use of the more precise postirradiation heating results in HTGR core design studies. Observations of intergranular carbon deposits on the cool side of migrating kernels support the assumption that the kinetics of kernel migration are controlled by solid-state diffusion within irradiated ThO2 kernels. The migration is characterized by a period of no migration (incubation period) followed by migration at the equilibrium rate for ThO2. The incubation period decreases with increasing temperature and kernel burnup. The improved understanding of the kinetics of ThO2 kernel migration provided by this work will contribute to an optimization of HTGR core design and an increased confidence in fuel performance predictions.

  6. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost- performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine- grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc. (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
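
    The kind of kernel characterization described above can be sketched with a toy benchmark. A five-point stencil is used here as a hypothetical stand-in for an NWP computational kernel; the actual WRF kernels are not reproduced.

```python
import time
import numpy as np

def five_point_stencil(u):
    """A 2-D five-point averaging stencil, a typical NWP-style kernel."""
    return 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])

def benchmark(kernel, u, repeats=5):
    """Time repeated kernel invocations and keep the best wall-clock time."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        kernel(u)
        best = min(best, time.perf_counter() - t0)
    return best

u = np.random.rand(512, 512)
t = benchmark(five_point_stencil, u)
# Rough computational-intensity bookkeeping: 3 adds + 1 multiply per point.
flops = 4 * (u.shape[0] - 2) * (u.shape[1] - 2)
gflops_per_s = flops / t / 1e9
```

    Comparing such per-kernel throughput numbers across processors is the essence of the characterization step described in goal (1).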

  7. Learning with Generalization Capability by Kernel Methods of Bounded Complexity

    Kůrková, Věra; Sanguineti, M.

    2005-01-01

    Roč. 21, č. 3 (2005), s. 350-367 ISSN 0885-064X R&D Projects: GA AV ČR 1ET100300419 Institutional research plan: CEZ:AV0Z10300504 Keywords : supervised learning * generalization * model complexity * kernel methods * minimization of regularized empirical errors * upper bounds on rates of approximate optimization Subject RIV: BA - General Mathematics Impact factor: 1.186, year: 2005

  8. Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm

    In this paper, we shall use higher-order hybrid Gaussian kernel in a meshsize boosting algorithm in kernel density estimation. Bias reduction is guaranteed in this scheme like other existing schemes but uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...
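
    For context, the baseline fixed-kernel density estimate that such boosting schemes modify looks like the following sketch; the higher-order hybrid Gaussian kernel itself is not reproduced here, and the data and bandwidth are invented.

```python
import numpy as np

def gaussian_kde(x_grid, data, h):
    """Fixed-kernel Gaussian KDE: average of Gaussian bumps of bandwidth h
    centered at the data points, evaluated on x_grid."""
    z = (x_grid[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return k.mean(axis=1) / h

data = np.array([-1.0, 0.0, 0.5, 1.0])
grid = np.linspace(-6.0, 6.0, 1201)
dens = gaussian_kde(grid, data, h=0.5)
mass = dens.sum() * (grid[1] - grid[0])  # numerical integral, close to 1
```

    Boosting and higher-order kernels both target the O(h²) bias of this plain estimator; the meshsize scheme above is the starting point they improve on.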

  9. NLO corrections to the Kernel of the BKP-equations

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.

  10. Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...

    This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  11. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    This paper proposes the use of adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  12. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    Nielsen, Allan Aasbjerg

    2010-01-01

    in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...

  13. 7 CFR 51.1441 - Half-kernel.

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  14. 7 CFR 51.2296 - Three-fourths half kernel.

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  15. 7 CFR 981.401 - Adjusted kernel weight.

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...

  16. 7 CFR 51.1403 - Kernel color classification.

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  17. The Linux kernel as flexible product-line architecture

    M. de Jonge (Merijn)

    2002-01-01

    The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select what

  18. The Kernel Mixture Network: A Nonparametric Method for Conditional Density Estimation of Continuous Random Variables

    Ambrogioni, Luca; Güçlü, Umut; van Gerven, Marcel A. J.; Maris, Eric

    2017-01-01

    This paper introduces the kernel mixture network, a new method for nonparametric estimation of conditional probability densities using neural networks. We model arbitrarily complex conditional densities as linear combinations of a family of kernel functions centered at a subset of training points. The weights are determined by the outer layer of a deep neural network, trained by minimizing the negative log likelihood. This generalizes the popular quantized softmax approach, which can be seen ...
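
    The density construction can be sketched as follows, with arbitrary logits standing in for the outer layer of the trained network. This is a simplification of the paper's method: the centers, logits and bandwidth below are invented.

```python
import numpy as np

def softmax(a):
    """Numerically stable softmax producing convex combination weights."""
    e = np.exp(a - a.max())
    return e / e.sum()

def kernel_mixture_density(y, centers, logits, sigma):
    """Density as a convex combination of Gaussian kernels centered at
    training points; the weights come from a softmax over network outputs."""
    w = softmax(logits)
    k = np.exp(-0.5 * ((y[:, None] - centers[None, :]) / sigma) ** 2)
    k /= sigma * np.sqrt(2.0 * np.pi)
    return k @ w

centers = np.array([-1.0, 0.0, 2.0])   # subset of training points
logits  = np.array([0.2, 1.0, -0.5])   # stand-in for network outputs
y = np.linspace(-8.0, 10.0, 1801)
p = kernel_mixture_density(y, centers, logits, sigma=0.7)
mass = p.sum() * (y[1] - y[0])  # numerical integral, close to 1
```

    Because the weights are a softmax, the mixture is automatically a valid density, which is what distinguishes this construction from an unconstrained quantized softmax over bins.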

  19. On the solutions of electrohydrodynamic flow with fractional differential equations by reproducing kernel method

    Akgül Ali

    2016-01-01

    Full Text Available In this manuscript we investigate electrohydrodynamic flow. For several values of the relevant parameters, we show that the approximate solution depends on a reproducing kernel model. The obtained results prove that the reproducing kernel method (RKM) is very effective. We obtain good results without any transformation or discretization. Numerical experiments on test examples show that our proposed schemes are of high accuracy and strongly support the theoretical results.

  20. Diagnostics for Linear Models With Functional Responses

    Xu, Hongquan; Shen, Qing

    2005-01-01

    Linear models where the response is a function and the predictors are vectors are useful in analyzing data from designed experiments and other situations with functional observations. Residual analysis and diagnostics are considered for such models. Studentized residuals are defined and their properties are studied. Chi-square quantile-quantile plots are proposed to check the assumption of Gaussian error process and outliers. Jackknife residuals and an associated test are proposed to det...

  1. Parsimonious Wavelet Kernel Extreme Learning Machine

    Wang Qin

    2015-11-01

    Full Text Available In this study, a parsimonious scheme for the wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into the kernel extreme learning machine (KELM). In the wavelet analysis, bases that are localized in time and frequency to represent various signals effectively were used. The wavelet kernel extreme learning machine (WELM) maximized its capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm also incorporated significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results achieved from a synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
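
    The KELM core that such schemes build on is a closed-form regularized least-squares solve. This sketch uses a plain RBF kernel in place of the wavelet kernel and omits the parsimonious pruning step; the data and hyperparameters are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row-wise sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, C=1e3, gamma=1.0):
    """Kernel ELM training: closed-form regularized least squares,
    beta = (K + I/C)^(-1) y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)

def kelm_predict(X_train, beta, X_new, gamma=1.0):
    """Prediction is a kernel expansion over the training points."""
    return rbf_kernel(X_new, X_train, gamma) @ beta

X = np.linspace(0.0, 2.0 * np.pi, 40)[:, None]
y = np.sin(X).ravel()
beta = kelm_fit(X, y)
y_hat = kelm_predict(X, beta, X)
```

    The parsimonious algorithm of the paper would then prune columns of K (via Householder transformations) so that `beta` becomes sparse, trading a little accuracy for speed and numerical stability.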

  2. Ensemble Approach to Building Mercer Kernels

    National Aeronautics and Space Administration — This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive...

  3. Mesoscale Modelling of the Response of Aluminas

    Bourne, N. K.

    2006-01-01

    The response of polycrystalline alumina to shock is not well addressed. Several operating mechanisms are only hypothesized, which results in models that are empirical. A similar state of affairs in reactive flow modelling led to the development of mesoscale representations of the flow to illuminate operating mechanisms. In this spirit, a similar effort is undertaken for a polycrystalline alumina. Simulations are conducted to observe operating mechanisms at the micron scale. A method is then developed to extend the simulations to meet the response at the continuum level, where measurements are made. The approach is validated by comparison with continuum experiments. The method and results are presented, and some of the operating mechanisms are illuminated by the observed response.

  4. Lawyer Proliferation and the Social Responsibility Model.

    Wines, William A.

    1989-01-01

    Drawing on the model of social responsibility that colleges of business have been teaching, the boom in lawyer education is examined. It is argued that law schools are irresponsible in overselling the benefits of law school graduation, creating a surplus of lawyers whose abilities could be used as well elsewhere. (MSE)

  5. Multiple kernel learning using single stage function approximation for binary classification problems

    Shiju, S.; Sumitra, S.

    2017-12-01

    In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries: one related to kernel learning and the other to the input data. In our approach, they are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and searching for that function from the global RKHS which can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, where the decision functions of kernel learning and input data are found separately using two different cost functions. This is because the single-stage representation enables knowledge transfer between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.
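
    The kernel-combination idea underlying MKL can be sketched as follows, with a fixed convex weight `mu` rather than the paper's joint single-stage optimization; the data, weight and ridge "classifier" are invented for illustration.

```python
import numpy as np

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def linear(A, B):
    return A @ B.T

def combined_kernel(A, B, mu, gamma=0.5):
    """Convex combination of base kernels; in MKL proper, mu is learned
    jointly with the classifier rather than fixed as here."""
    return mu * rbf(A, B, gamma) + (1.0 - mu) * linear(A, B)

# Tiny two-class problem (hypothetical data).
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.2, 0.9]])
y = np.array([-1.0, -1.0, 1.0, 1.0])

mu = 0.5
K = combined_kernel(X, X, mu)
# Kernel ridge stand-in for a classifier trained on the combined kernel.
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)
pred = np.sign(combined_kernel(X, X, mu) @ alpha)
```

    In the single-stage formulation of the paper, the weight over base kernels and the expansion coefficients would both fall out of one cost function over the direct-sum RKHS.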

  6. Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction

    Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc

    2018-02-01

    Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from a GP depend on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly inherited Alzheimer's disease, and use it to predict the time to clinical onset of subjects carrying the genetic mutation.
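
    The kernel-scoring idea can be sketched by comparing candidate kernels with a BIC-style criterion built from the GP log marginal likelihood. The kernels, lengthscales and noise level below are illustrative assumptions, not the paper's CKL setup or its energy function.

```python
import numpy as np

def rbf(X, ell):
    """1-D RBF kernel matrix with lengthscale ell."""
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

def log_marginal_likelihood(K, y, noise=1e-2):
    """GP log marginal likelihood with fixed noise variance,
    computed via a Cholesky factorization of K + noise*I."""
    Ky = K + noise * np.eye(len(y))
    L = np.linalg.cholesky(Ky)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2.0 * np.pi))

X = np.linspace(0.0, 6.0, 30)
y = np.sin(X)  # smooth toy signal

# BIC-style score: -2*logML + (number of kernel parameters)*log(n).
candidates = {"rbf_short": (rbf(X, 0.1), 1), "rbf_long": (rbf(X, 1.0), 1)}
scores = {name: -2.0 * log_marginal_likelihood(K, y) + k * np.log(len(y))
          for name, (K, k) in candidates.items()}
best = min(scores, key=scores.get)  # the smoother kernel explains sin(x) better
```

    Compositional search would extend `candidates` with sums and products of base kernels, penalizing each added parameter through the `k*log(n)` term.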

  7. Control Transfer in Operating System Kernels

    1994-05-13

    microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the...review how I modified the Mach 3.0 kernel to use continuations. Because of Mach's message-passing microkernel structure, interprocess communication was...critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating

  8. Modelling of demand response and market power

    Kristoffersen, B.B.; Donslund, B.; Boerre Eriksen, P.

    2004-01-01

    Demand-side flexibility and demand response to high prices are prerequisites for the proper functioning of the Nordic power market. If consumers are unwilling to respond to high prices, the market may fail to clear, and this may result in unwanted forced demand disconnections. As the TSO of Western Denmark, Eltra is responsible for both security of supply and the design of the power market within its area. On this basis, Eltra has developed a new mathematical modelling tool for analysing the Nordic wholesale market. The model is named MARS (MARket Simulation). The model is able to handle hydropower and thermal production, nuclear power and wind power. Production, demand and exchanges modelled on an hourly basis are important new features of the model. The model uses the same principles as Nord Pool (The Nordic Power Exchange), including the division of the Nordic countries into price areas. On the demand side, price elasticity is taken into account and described by a Cobb-Douglas function. Apart from simulating perfectly competitive markets, particular attention has been given to modelling imperfect market conditions, i.e. the exercise of market power on the supply side. Market power is simulated using game theory, including the Nash equilibrium concept. The paper gives a short description of the MARS model. Besides, focus is on the application of the model in order to illustrate the importance of demand response in the Nordic market. Simulations with different values of demand elasticity are compared. Calculations are carried out for perfect competition and for the situation in which market power is exercised by the large power producers in the Nordic countries (oligopoly). (au)
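
    A Cobb-Douglas description of demand corresponds to a constant-elasticity curve. A minimal sketch, with invented numbers rather than Eltra's calibration:

```python
def cobb_douglas_demand(p, d0, p0, elasticity):
    """Constant-elasticity (Cobb-Douglas form) demand curve:
    D(p) = d0 * (p / p0) ** (-elasticity), anchored at (p0, d0)."""
    return d0 * (p / p0) ** (-elasticity)

# At an elasticity of 0.1, a tenfold price spike reduces demand
# only by a factor of 10**0.1 (about 1.26) -- nearly inelastic demand.
base  = cobb_douglas_demand(30.0, d0=1000.0, p0=30.0, elasticity=0.1)
spike = cobb_douglas_demand(300.0, d0=1000.0, p0=30.0, elasticity=0.1)
```

    This illustrates the paper's point: with low elasticity, even extreme prices barely reduce consumption, so market clearing relies heavily on what little demand response there is.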

  9. Uranium kernel formation via internal gelation

    Hunt, R.D.; Collins, J.L.

    2004-01-01

    In the 1970s and 1980s, U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tri-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation as well as small changes to the feed composition increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)

  10. Quantum tomography, phase-space observables and generalized Markov kernels

    Pellonpää, Juha-Pekka

    2009-01-01

    We construct a generalized Markov kernel which transforms the observable associated with homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given for a 'Schroedinger cat' kernel state and for the Cahill-Glauber s-parametrized distributions. We also consider an example of a kernel state for which the generalized Markov kernel cannot be constructed.

  11. Vis-NIR hyperspectral imaging and multivariate analysis for prediction of the moisture content and hardness of Pistachio kernels roasted in different conditions

    T Mohammadi Moghaddam

    2015-09-01

    of determination (R2), the root mean square error of prediction (RMSEP) and the ratio of the standard deviation of the response variable to the RMSEP (known as the relative performance determinant, RPD) were calculated. Results and discussion: Interpretation of hyperspectral data: The results showed that the spectra of the shell, the whole kernel and the internal part of the kernel have different patterns. The internal part of the kernel had two peaks, at 630 nm and 690 nm, while the shell and the whole kernel each had one peak, at 670 nm and 720 nm, respectively, and the peak of the whole kernel was sharper than that of the shell. The highest and lowest intensities were for the internal part of the kernel and the whole kernel, respectively. The spectral slope of the internal part is higher than that of the shell and the whole kernel at 500-700 nm. The effect of different pre-processing techniques and analysis on the prediction of pistachio kernel properties: In the absence of pre-processing techniques, low correlation coefficients were observed for the prediction of moisture content and hardness. However, with the use of pre-processing techniques, in some models the correlation coefficient and RPD increased and the RMSEP decreased. The results revealed that ANN models predict the moisture content and textural characteristics of roasted pistachio kernels better than PLSR models. Moisture content: ANN models can predict the moisture content of roasted pistachio kernels better than PLSR models. In total, PLSR models showed low RPD and R2. For all samples, RPD was lower than 1.5, indicating that the developed models do not give an accurate prediction of moisture content. The best results with the ANN method were achieved using a combination of SNV, wavelet and D1 for predicting moisture content, with R2 = 0.907 and RMSEP = 0.179. Hardness: The results indicated that ANN models can predict hardness better than PLSR models. The best results with PLSR models were achieved using a combination of SNV, wavelet and
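
    The three figures of merit quoted above (R2, RMSEP, RPD) can be computed directly; a minimal sketch with made-up numbers:

```python
import numpy as np

def prediction_metrics(y_true, y_pred):
    """R^2, RMSEP, and RPD = SD(y_true) / RMSEP, as used to rate calibration models."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    rmsep = np.sqrt(np.mean((y_true - y_pred) ** 2))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rpd = y_true.std(ddof=1) / rmsep
    return r2, rmsep, rpd
```

    An RPD below 1.5, as reported for the PLSR moisture models, is conventionally read as a model too imprecise for accurate prediction.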

  12. Development of Cold Neutron Scattering Kernels for Advanced Moderators

    Granada, J. R.; Cantargi, F.

    2010-01-01

    The development of scattering kernels for a number of molecular systems was performed, including a set of hydrogenous methylated aromatics such as toluene, mesitylene, and mixtures of those. In order to partially validate those new libraries, we compared predicted total cross sections with experimental data obtained in our laboratory. In addition, we have introduced a new model to describe the interaction of slow neutrons with solid methane in phase II (the stable phase below T = 20.4 K at atmospheric pressure). Very recently, a new scattering kernel to describe the interaction of slow neutrons with solid deuterium was also developed. The main dynamical characteristics of that system are contained in the formalism; the elastic processes involving coherent and incoherent contributions are fully described, as well as the spin-correlation effects.

  13. A motivational model for environmentally responsible behavior.

    Tabernero, Carmen; Hernández, Bernardo

    2012-07-01

    This paper presents a study examining whether self-efficacy and intrinsic motivation are related to environmentally responsible behavior (ERB). The study analysed past environmental behavior, self-regulatory mechanisms (self-efficacy, satisfaction, goals), and intrinsic and extrinsic motivation in relation to ERBs in a sample of 156 university students. Results show that all the motivational variables studied are linked to ERB. The effects of self-efficacy on ERB are mediated by the intrinsic motivation responses of the participants. A theoretical model was created by means of path analysis, revealing the power of motivational variables to predict ERB. Structural equation modeling was used to test and fit the research model. The role of motivational variables is discussed with a view to creating adequate learning contexts and experiences to generate interest and new sensations in which self-efficacy and affective reactions play an important role.

  14. Determination of the Iodine Value of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO)

    Sitompul, Monica Angelina

    2015-01-01

    The iodine value of several samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO) was determined by titration. The analysis gave iodine values of 0.16 g I2/100 g for Hydrogenated Palm Kernel Oil (A), 0.20 g I2/100 g for Hydrogenated Palm Kernel Oil (B), and 0.24 g I2/100 g for Hydrogenated Palm Kernel Oil (C), and of 17.51 g I2/100 g for Refined Bleached Deodorized Palm Kernel Oil (A), Refined Bleached Deodorized Palm Kernel ...
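
    The iodine value itself follows from the standard back-titration formula IV = (V_blank - V_sample) x N x 12.69 / m; the sketch below applies it with hypothetical titration volumes, not the values from this work:

```python
def iodine_value(blank_ml, sample_ml, normality, sample_mass_g):
    """Iodine value (g I2 absorbed per 100 g oil) from a thiosulfate back-titration.
    The factor 12.69 converts milliequivalents of thiosulfate to g I2 per 100 g."""
    return (blank_ml - sample_ml) * normality * 12.69 / sample_mass_g
```

    Highly hydrogenated fats consume almost as much titrant as the blank, which is why the HPKO values above are two orders of magnitude below the RBDPKO value.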

  15. Nonlinear Knowledge in Kernel-Based Multiple Criteria Programming Classifier

    Zhang, Dongling; Tian, Yingjie; Shi, Yong

    Kernel-based Multiple Criteria Linear Programming (KMCLP) models are used as classification methods that can learn from training examples. In traditional machine learning, by contrast, data sets are sometimes classified using only prior knowledge. Some works combine these two classification principles to overcome the shortcomings of each approach. In this paper, we propose a model that incorporates nonlinear knowledge into KMCLP, in order to solve problems in which the input consists not only of training examples but also of nonlinear prior knowledge. On a real-world breast cancer diagnosis case, the model performs better than a model based solely on training data.

  16. Fine-mapping of qGW4.05, a major QTL for kernel weight and size in maize.

    Chen, Lin; Li, Yong-xiang; Li, Chunhui; Wu, Xun; Qin, Weiwei; Li, Xin; Jiao, Fuchao; Zhang, Xiaojing; Zhang, Dengfeng; Shi, Yunsu; Song, Yanchun; Li, Yu; Wang, Tianyu

    2016-04-12

    Kernel weight and size are important components of grain yield in cereals. Although some information is available concerning the map positions of quantitative trait loci (QTL) for kernel weight and size in maize, little is known about the molecular mechanisms of these QTLs. qGW4.05 is a major QTL that is associated with kernel weight and size in maize. We combined linkage analysis and association mapping to fine-map and identify candidate gene(s) at qGW4.05. QTL qGW4.05 was fine-mapped to a 279.6-kb interval in a segregating population derived from a cross of Huangzaosi with LV28. By combining the results of regional association mapping and linkage analysis, we identified GRMZM2G039934 as a candidate gene responsible for qGW4.05. Candidate gene-based association mapping was conducted using a panel of 184 inbred lines with variable kernel weights and kernel sizes. Six polymorphic sites in the gene GRMZM2G039934 were significantly associated with kernel weight and kernel size. The results of linkage analysis and association mapping revealed that GRMZM2G039934 is the most likely candidate gene for qGW4.05. These results will improve our understanding of the genetic architecture and molecular mechanisms underlying kernel development in maize.

  17. A kernel plus method for quantifying wind turbine performance upgrades

    Lee, Giwhyun

    2014-04-21

    Power curves are commonly estimated using the binning method recommended by the International Electrotechnical Commission, which primarily incorporates wind speed information. When such power curves are used to quantify a turbine's upgrade, the results may not be accurate because many other environmental factors in addition to wind speed, such as temperature, air pressure, turbulence intensity, wind shear and humidity, all potentially affect the turbine's power output. Wind industry practitioners are aware of the need to filter out effects from environmental conditions. Toward that objective, we developed a kernel plus method that allows incorporation of multivariate environmental factors in a power curve model, thereby controlling the effects from environmental factors while comparing power outputs. We demonstrate that the kernel plus method can serve as a useful tool for quantifying a turbine's upgrade because it is sensitive to small and moderate changes caused by certain turbine upgrades. Although we demonstrate the utility of the kernel plus method in this specific application, the resulting method is a general, multivariate model that can connect other physical factors, as long as their measurements are available, with a turbine's power output, which may allow us to explore new physical properties associated with wind turbine performance. © 2014 John Wiley & Sons, Ltd.
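
    The kernel plus method itself is not reproduced here, but the underlying idea of a kernel-weighted power curve over several environmental covariates can be sketched with a plain Nadaraya-Watson estimator; the covariates, bandwidths, and synthetic power curve below are illustrative assumptions:

```python
import numpy as np

def kernel_power_estimate(X_train, y_train, x_query, bandwidths):
    """Nadaraya-Watson kernel regression with a product Gaussian kernel.
    X_train: (n, d) environmental covariates (e.g. wind speed, temperature);
    y_train: (n,) observed power outputs; x_query: (d,) conditions to evaluate at."""
    z = (X_train - x_query) / bandwidths       # scaled offsets per covariate
    w = np.exp(-0.5 * np.sum(z ** 2, axis=1))  # product Gaussian weights
    return np.sum(w * y_train) / np.sum(w)
```

    Evaluating two turbines' data at the same environmental query point is one way to compare power outputs while holding conditions fixed, which is the role the kernel plus method plays in the upgrade quantification above.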

  18. Modeling the mechanical response of PBX 9501

    Ragaswamy, Partha [Los Alamos National Laboratory; Lewis, Matthew W [Los Alamos National Laboratory; Liu, Cheng [Los Alamos National Laboratory; Thompson, Darla G [Los Alamos National Laboratory

    2010-01-01

    An engineering overview of the mechanical response of Plastic-Bonded eXplosives (PBXs), specifically PBX 9501, will be provided with emphasis on observed mechanisms associated with different types of mechanical testing. Mechanical tests in the form of uniaxial tension, compression, cyclic loading, creep (compression and tension), and Hopkinson bar show strain rate and temperature dependence. A range of mechanical behavior is observed which includes small-strain recoverable response in the form of viscoelasticity; change in stiffness and softening beyond peak strength due to damage in the form of microcracks, debonding, void formation and the growth of existing voids; inelastic response in the form of irrecoverable strain as shown in cyclic tests; and viscoelastic creep combined with plastic response as demonstrated in creep and recovery tests. The main focus of this paper is to elucidate the challenges and issues involved in modeling the mechanical behavior of PBXs for simulating thermo-mechanical responses in engineering components. Examples of validation of a constitutive material model based on a few of the observed mechanisms will be demonstrated against three-point bending, split Hopkinson pressure bar and Brazilian disk geometries.

  19. The BUMP model of response planning: intermittent predictive control accounts for 10 Hz physiological tremor.

    Bye, Robin T; Neilson, Peter D

    2010-10-01

    Physiological tremor during movement is characterized by ∼10 Hz oscillation observed both in the electromyogram activity and in the velocity profile. We propose that this particular rhythm occurs as the direct consequence of a movement response planning system that acts as an intermittent predictive controller operating at discrete intervals of ∼100 ms. The BUMP model of response planning describes such a system. It forms the kernel of Adaptive Model Theory which defines, in computational terms, a basic unit of motor production or BUMP. Each BUMP consists of three processes: (1) analyzing sensory information, (2) planning a desired optimal response, and (3) execution of that response. These processes operate in parallel across successive sequential BUMPs. The response planning process requires a discrete-time interval in which to generate a minimum acceleration trajectory to connect the actual response with the predicted future state of the target and compensate for executional error. We have shown previously that a response planning time of 100 ms accounts for the intermittency observed experimentally in visual tracking studies and for the psychological refractory period observed in double stimulation reaction time studies. We have also shown that simulations of aimed movement, using this same planning interval, reproduce experimentally observed speed-accuracy tradeoffs and movement velocity profiles. Here we show, by means of a simulation study of constant velocity tracking movements, that employing a 100 ms planning interval closely reproduces the measurement discontinuities and power spectra of electromyograms, joint-angles, and angular velocities of physiological tremor reported experimentally. We conclude that intermittent predictive control through sequential operation of BUMPs is a fundamental mechanism of 10 Hz physiological tremor in movement. Copyright © 2010 Elsevier B.V. All rights reserved.
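
    A deliberately crude sketch of the intermittency argument: if corrective commands are issued only every 100 ms while a disturbance acts continuously, the velocity trace becomes periodic at 10 Hz. This is not the BUMP model's minimum-acceleration planner, only the spectral consequence of its 100 ms planning interval; all numbers are invented:

```python
import numpy as np

DT = 0.001           # simulation step (s)
PLAN_INTERVAL = 0.1  # planning interval: one correction every 100 ms
T_TOTAL = 2.0

def simulate_tracking(v_target=1.0, drift=2.0, gain=1.0):
    """Between plans the velocity drifts away from the target under a constant
    disturbance; every 100 ms the planner applies a corrective reset."""
    n = int(round(T_TOTAL / DT))
    plan_every = int(round(PLAN_INTERVAL / DT))
    v = np.empty(n)
    vel = v_target
    for i in range(n):
        if i % plan_every == 0:
            vel += gain * (v_target - vel)   # intermittent correction
        vel -= drift * DT                    # continuous disturbance
        v[i] = vel
    return v

def dominant_frequency(v):
    """Frequency (Hz) of the largest non-DC component of the velocity spectrum."""
    spec = np.abs(np.fft.rfft(v - v.mean()))
    freqs = np.fft.rfftfreq(len(v), d=DT)
    return freqs[1 + np.argmax(spec[1:])]
```

    The resulting sawtooth-like velocity has its fundamental spectral peak at exactly 1 / 0.1 s = 10 Hz, the tremor frequency the paper attributes to the planning interval.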

  20. A framework for multiple kernel support vector regression and its applications to siRNA efficacy prediction.

    Qiu, Shibin; Lane, Terran

    2009-01-01

    The cell defense mechanism of RNA interference has applications in gene function analysis and promising potential in human disease therapy. To effectively silence a target gene, it is desirable to select appropriate initiator siRNA molecules having satisfactory silencing capabilities. Computational prediction of the silencing efficacy of siRNAs can assist this screening process before using them in biological experiments. String kernel functions, which operate directly on the string objects representing siRNAs and target mRNAs, have been applied to support vector regression for the prediction and improved accuracy over numerical kernels in multidimensional vector spaces constructed from descriptors of siRNA design rules. To fully utilize the information provided by string and numerical data, we propose to unify the two in a kernel feature space by devising a multiple kernel regression framework in which a linear combination of the kernels is used. We formulate the multiple kernel learning as a quadratically constrained quadratic programming (QCQP) problem, which, although it yields a globally optimal solution, is computationally demanding and requires a commercial solver package. We further propose three heuristics based on the principle of kernel-target alignment and predictive accuracy. Empirical results demonstrate that multiple kernel regression can improve accuracy, decrease model complexity by reducing the number of support vectors, and speed up computational performance dramatically. In addition, multiple kernel regression evaluates the importance of the constituent kernels, which, for the siRNA efficacy prediction problem, compares the relative significance of the design rules. Finally, we give insights into the multiple kernel regression mechanism and point out possible extensions.
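
    The combination step and the kernel-target alignment heuristic mentioned above can be sketched as follows. Kernel ridge regression stands in for support vector regression to keep the example dependency-free, and the two RBF kernels and the data are synthetic stand-ins for the paper's string and numerical kernels:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF Gram matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def alignment(K, y):
    """Kernel-target alignment <K, yy^T>_F / (||K||_F ||yy^T||_F)."""
    yyT = np.outer(y, y)
    return (K * yyT).sum() / (np.linalg.norm(K) * np.linalg.norm(yyT))

def combine_kernels(kernels, weights):
    """Convex combination of base Gram matrices."""
    w = np.asarray(weights, float)
    return sum(wi * Ki for wi, Ki in zip(w / w.sum(), kernels))

def kernel_ridge_predict(K_train, y, K_query, lam=1e-3):
    """Kernel ridge regression as a dependency-free stand-in for SVR."""
    alpha = np.linalg.solve(K_train + lam * np.eye(len(y)), y)
    return K_query @ alpha
```

    Using alignment scores as combination weights is one of the cheap heuristics the paper contrasts with solving the full QCQP.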

  1. Finite frequency traveltime sensitivity kernels for acoustic anisotropic media: Angle dependent bananas

    Djebbi, Ramzi

    2013-08-19

    Anisotropy is an inherent character of the Earth's subsurface. It should be considered for modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, and especially its kinematic characteristics. To analyze which parts of the model would affect the traveltime for anisotropic traveltime inversion methods, especially for wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation based kernels. Focusing on diving waves, which are used more often, especially recently in waveform inversion, we show sensitivity kernels in anisotropic media for this case.

  2. Finite frequency traveltime sensitivity kernels for acoustic anisotropic media: Angle dependent bananas

    Djebbi, Ramzi; Alkhalifah, Tariq Ali

    2013-01-01

    Anisotropy is an inherent character of the Earth's subsurface. It should be considered for modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, and especially its kinematic characteristics. To analyze which parts of the model would affect the traveltime for anisotropic traveltime inversion methods, especially for wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation based kernels. Focusing on diving waves, which are used more often, especially recently in waveform inversion, we show sensitivity kernels in anisotropic media for this case.

  3. CLAss-Specific Subspace Kernel Representations and Adaptive Margin Slack Minimization for Large Scale Classification.

    Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan

    2018-02-01

    In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization to iteratively improve the classification accuracy by adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing, and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.

  4. Soft and hard classification by reproducing kernel Hilbert space methods.

    Wahba, Grace

    2002-12-24

    Reproducing kernel Hilbert space (RKHS) methods provide a unified context for solving a wide variety of statistical modelling and function estimation problems. We consider two such problems: We are given a training set {y_i, t_i, i = 1, ..., n}, where y_i is the response for the ith subject, and t_i is a vector of attributes for this subject. The value of y_i is a label that indicates which category it came from. For the first problem, we wish to build a model from the training set that assigns to each t in an attribute domain of interest an estimate of the probability p_j(t) that a (future) subject with attribute vector t is in category j. The second problem is in some sense less ambitious; it is to build a model that assigns to each t a label, which classifies a future subject with that t into one of the categories or possibly "none of the above." The approach to the first of these two problems discussed here is a special case of what is known as penalized likelihood estimation. The approach to the second problem is known as the support vector machine. We also note some alternate but closely related approaches to the second problem. These approaches are all obtained as solutions to optimization problems in RKHS. Many other problems, in particular the solution of ill-posed inverse problems, can be obtained as solutions to optimization problems in RKHS and are mentioned in passing. We caution the reader that although a large literature exists in all of these topics, in this inaugural article we are selectively highlighting work of the author, former students, and other collaborators.

  5. NGC1300 dynamics - II. The response models

    Kalapotharakos, C.; Patsis, P. A.; Grosbøl, P.

    2010-10-01

    We study the stellar response in a spectrum of potentials describing the barred spiral galaxy NGC1300. These potentials have been presented in a previous paper and correspond to three different assumptions as regards the geometry of the galaxy. For each potential we consider a wide range of Ωp pattern speed values. Our goal is to discover the geometries and the Ωp supporting specific morphological features of NGC1300. For this purpose we use the method of response models. In order to compare the images of NGC1300 with the density maps of our models, we define a new index which is a generalization of the Hausdorff distance. This index helps us to find out quantitatively which cases reproduce specific features of NGC1300 in an objective way. Furthermore, we construct alternative models following a Schwarzschild-type technique. By this method we vary the weights of the various energy levels, and thus the orbital contribution of each energy, in order to minimize the differences between the response density and that deduced from the surface density of the galaxy, under certain assumptions. We find that the models corresponding to Ωp ~ 16 and 22 km s^-1 kpc^-1 are able to reproduce efficiently certain morphological features of NGC1300, with each one having its advantages and drawbacks. Based on observations collected at the European Southern Observatory, Chile: programme ESO 69.A-0021.

  6. Moisture Adsorption Isotherm and Storability of Hazelnut Inshells and Kernels Produced in Oregon, USA.

    Jung, Jooyeoun; Wang, Wenjie; McGorrin, Robert J; Zhao, Yanyun

    2018-02-01

    Moisture adsorption isotherms and the storability of dried hazelnut inshells and kernels produced in Oregon were evaluated and compared among cultivars, including Barcelona, Yamhill, and Jefferson. Experimental moisture adsorption data were fitted to the Guggenheim-Anderson-de Boer (GAB) model, showing less hygroscopic behavior in Yamhill than in the other cultivars of inshells and kernels due to its lower content of carbohydrate and protein but higher content of fat. The safe levels of moisture content (MC, dry basis) of dried inshells and kernels for keeping kernel water activity (a_w) ≤ 0.65 were estimated using the GAB model as 11.3% and 5.0% for Barcelona, 9.4% and 4.2% for Yamhill, and 10.7% and 4.9% for Jefferson, respectively. Storage conditions (2 °C at 85% to 95% relative humidity [RH], 10 °C at 65% to 75% RH, and 27 °C at 35% to 45% RH), times (0, 4, 8, or 12 mo), and packaging methods (atmosphere vs. vacuum) affected the MC, a_w, bioactive compounds, lipid oxidation, and enzyme activity of dried hazelnut inshells or kernels. For inshells packaged in a woven polypropylene bag, the MC and a_w of inshells and kernels (inside shells) increased at 2 and 10 °C, but decreased at 27 °C during storage. For kernels, lipid oxidation and polyphenol oxidase activity also increased with extended storage time (P < 0.05), while vacuum packaging helped limit moisture adsorption and preserve physicochemical and enzymatic stability during storage. Moisture adsorption isotherms of hazelnut inshells and kernels are useful for predicting the storability of nuts. This study found that water adsorption and storability varied among the different cultivars of nuts, in which Yamhill was less hygroscopic than Barcelona and Jefferson, and thus more stable during storage. For ensuring the food safety and quality of nuts during storage, each cultivar of kernels should be dried to a certain level of MC. Lipid oxidation and enzyme activity of kernels could increase with extended storage time. Vacuum packaging was recommended for kernels to reduce moisture adsorption
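
    The GAB model used to estimate these safe moisture levels has a closed form, M(a_w) = m0 C K a_w / ((1 - K a_w)(1 - K a_w + C K a_w)); the sketch below evaluates it with invented parameters (m0, C, K), not the fitted hazelnut values:

```python
def gab_moisture(aw, m0, C, K):
    """GAB isotherm: equilibrium moisture content (dry basis) at water activity aw.
    m0: monolayer moisture content; C, K: energy constants of the fit."""
    x = K * aw
    return m0 * C * x / ((1.0 - x) * (1.0 - x + C * x))

def safe_moisture_limit(m0, C, K, aw_max=0.65):
    """Highest moisture content keeping water activity at or below aw_max."""
    return gab_moisture(aw_max, m0, C, K)
```

    Evaluating the fitted isotherm at a_w = 0.65 is how cultivar-specific safe MC levels like those quoted above are obtained.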

  7. Population-expression models of immune response

    Stromberg, Sean P; Antia, Rustom; Nemenman, Ilya

    2013-01-01

    The immune response to a pathogen has two basic features. The first is the expansion of a few pathogen-specific cells to form a population large enough to control the pathogen. The second is the process of differentiation of cells from an initial naive phenotype to an effector phenotype which controls the pathogen, and subsequently to a memory phenotype that is maintained and responsible for long-term protection. The expansion and the differentiation have been considered largely independently. Changes in cell populations are typically described using ecologically based ordinary differential equation models. In contrast, differentiation of single cells is studied within systems biology and is frequently modeled by considering changes in gene and protein expression in individual cells. Recent advances in experimental systems biology make available for the first time data to allow the coupling of population and high dimensional expression data of immune cells during infections. Here we describe and develop population-expression models which integrate these two processes into systems biology on the multicellular level. When translated into mathematical equations, these models result in non-conservative, non-local advection-diffusion equations. We describe situations where the population-expression approach can make correct inference from data while previous modeling approaches based on common simplifying assumptions would fail. We also explore how model reduction techniques can be used to build population-expression models, minimizing the complexity of the model while keeping the essential features of the system. While we consider problems in immunology in this paper, we expect population-expression models to be more broadly applicable. (paper)

  8. Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM

    Chenchao Zhao

    2018-01-01

    Full Text Available Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
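
    The exact hyperspherical heat kernel referred to above is an absolutely convergent series over angular momentum eigenmodes. Specializing to the 2-sphere, where the eigenmodes reduce to Legendre polynomials, a truncated evaluation can be sketched as follows (the truncation order is an assumption chosen for the diffusion time used here):

```python
import numpy as np

def legendre_values(lmax, x):
    """P_0(x) .. P_lmax(x) via the three-term recurrence."""
    P = np.empty(lmax + 1)
    P[0] = 1.0
    if lmax >= 1:
        P[1] = x
    for l in range(1, lmax):
        P[l + 1] = ((2 * l + 1) * x * P[l] - l * P[l - 1]) / (l + 1)
    return P

def heat_kernel_s2(theta, t, lmax=80):
    """Heat kernel on the unit 2-sphere at geodesic angle theta and time t:
    K = sum_l (2l+1)/(4 pi) exp(-l(l+1) t) P_l(cos theta)."""
    P = legendre_values(lmax, np.cos(theta))
    l = np.arange(lmax + 1)
    return np.sum((2 * l + 1) / (4 * np.pi) * np.exp(-l * (l + 1) * t) * P)
```

    The exp(-l(l+1)t) factor makes the series converge uniformly and absolutely for t > 0, which is the property the paper exploits for the general hypersphere.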

  9. Modelling structural systems for transient response analysis

    Melosh, R.J.

    1975-01-01

    This paper introduces and reports the success of a direct means of determining the time periods in which a structural system behaves as a linear system. Numerical results are based on post-fracture transient analyses of simplified nuclear piping systems. Knowledge of the linear response ranges will lead to improved analysis-test correlation and more efficient analyses. It permits direct use of data from physical tests in analysis, and simplification of the analytical model and the interpretation of its behavior. The paper presents a procedure for deducing linearity based on transient responses. Given the forcing functions and responses of discrete points of the system at various times, the process produces evidence of linearity and quantifies an adequate set of equations of motion. Results of use of the process with linear and nonlinear analyses of piping systems with damping illustrate its success. Results cover the application to data from mathematical system responses. The process is successful with mathematical models. In loading ranges in which all modes are excited, eight-digit accuracy of predictions is obtained from the equations of motion deduced. Small changes (less than 0.01%) in the norm of the transfer matrices are produced by manipulation errors for linear systems, yielding evidence that nonlinearity is easily distinguished. Significant changes (greater than 5%) are coincident with relatively large norms of the equilibrium correction vector in nonlinear analyses. The paper shows that deducing linearity and, when admissible, quantifying linear equations of motion from transient response data for piping systems can be achieved with accuracy comparable to that of the response data.

  10. Modeling of Dynamic Responses in Building Insulation

    Anna Antonyová

    2015-10-01

    Full Text Available In this research a measurement system was developed for monitoring humidity and temperature in the cavity between the wall and the insulating material in the building envelope. This new technology does not disturb the insulating material during testing. The measurement system can also be applied to insulation fixed ten or twenty years earlier and sufficiently reveals the quality of the insulation. A mathematical model is proposed to characterize the dynamic responses in the cavity between the wall and the building insulation as influenced by weather conditions. These dynamic responses are manifested as a delay of both humidity and temperature changes in the cavity when compared with the changes in the ambient surroundings of the building. The process is then modeled through numerical methods and statistical analysis of the experimental data obtained using the new measurement system.
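
    The delay-and-attenuation behavior described above is captured by the simplest first-order lag model, tau * dTc/dt = Ta(t) - Tc(t); the time constant and forcing period below are arbitrary assumptions, not fitted values from this study:

```python
import numpy as np

def first_order_response(tau, period):
    """Amplitude gain and time lag of the steady-state solution of
    tau * dTc/dt = Ta(t) - Tc(t) for a unit sinusoidal ambient forcing."""
    w = 2.0 * np.pi / period
    gain = 1.0 / np.sqrt(1.0 + (w * tau) ** 2)
    lag = np.arctan(w * tau) / w   # time delay, same units as period
    return gain, lag
```

    A cavity with a 6 h time constant driven by a 24 h ambient cycle responds with roughly half the ambient amplitude and a lag of a few hours, the kind of delayed response the measurement system is designed to reveal.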

  11. Modeling response variation for radiometric calorimeters

    Mayer, R.L. II.

    1986-01-01

    Radiometric calorimeters are widely used in the DOE complex for accountability measurements of plutonium and tritium. Proper characterization of response variation for these instruments is, therefore, vital for accurate assessment of measurement control as well as for propagation of error calculations. This is not difficult for instruments used to measure items within a narrow range of power values; however, when a single instrument is used to measure items over a wide range of power values, improper estimates of uncertainty can result since traditional error models for radiometric calorimeters assume that uncertainty is not a function of sample power. This paper describes methods which can be used to accurately estimate random response variation for calorimeters used to measure items over a wide range of sample powers. The model is applicable to the two most common modes of calorimeter operation: heater replacement and servo control. 5 refs., 4 figs., 1 tab
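
    One way to let measurement uncertainty depend on sample power, as the abstract argues it must for wide-range instruments, is a two-parameter variance model sigma(P)^2 = a^2 + (b P)^2 combining an additive and a power-proportional component. This form is an illustrative assumption, not necessarily the paper's model:

```python
import numpy as np

def fit_variance_model(powers, replicate_sds):
    """Fit sigma(P)^2 = a^2 + (b*P)^2 by least squares on the squared SDs,
    so uncertainty can be propagated across a wide range of sample powers."""
    powers = np.asarray(powers, float)
    A = np.column_stack([np.ones_like(powers), powers ** 2])
    coef, *_ = np.linalg.lstsq(A, np.asarray(replicate_sds, float) ** 2, rcond=None)
    a2, b2 = np.maximum(coef, 0.0)
    return np.sqrt(a2), np.sqrt(b2)

def sigma_at(P, a, b):
    """Predicted measurement SD at sample power P."""
    return np.hypot(a, b * P)
```

    At low power the additive term a dominates; at high power the relative term b*P does, which is why a single constant-variance model misstates uncertainty at one end of the range or the other.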

  12. It is only a banana-Traveltime sensitivity kernels using the unwrapped phase

    Djebbi, Ramzi; Alkhalifah, Tariq Ali

    2012-01-01

    Traveltime sensitivity kernels for finite-frequency traveltimes computed using the Born or Rytov approximations admit hollow banana-shaped responses in the plane of propagation and circular doughnut-shaped responses in the cross-section. This suggests that finite-frequency traveltimes are insensitive to velocity information along the infinite-frequency ray path, which is obviously inaccurate and creates a disconnect in the traveltime dependency on frequency. Using the instantaneous traveltime of the wavefield, which is capable of unwrapping the phase function, we obtain traveltime sensitivity kernels that have plain banana-shaped responses, with the thickness of the banana governed by the investigated frequency. This result confirms that the hollow banana shape is simply a result of the wrapping of the phase of the wavefield, which neither the Born nor the Rytov approximation can properly handle. The instantaneous traveltime can thus mitigate the nonlinearity problem encountered in finite-frequency traveltime inversions that may arise from these hollow banana sensitivity kernels.

  13. Predicting Footbridge Response using Stochastic Load Models

    Pedersen, Lars; Frier, Christian

    2013-01-01

Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adopt deterministic models for these parameters. The present paper considers a stochastic approach to modelling the action of pedestrians, but when doing so, decisions need to be made in terms of the statistical distributions of walking parameters and in terms of the parameters describing those distributions. The paper explores how sensitive computations of bridge response are to some of the decisions to be made in this respect…

  14. Responsibility modelling for civil emergency planning

    Sommerville, Ian; Storer, Timothy; Lock, Russell

    2009-01-01

This paper presents a new approach to analysing and understanding civil emergency planning based on the notion of responsibility modelling combined with HAZOPS-style analysis of information requirements. Our goal is to represent complex contingency plans so that they can be more readily understood, so that inconsistencies can be highlighted and vulnerabilities discovered. In this paper, we outline the framework for contingency planning in the United Kingdom and introduce the notion of responsibility modelling.

  15. Application of Image Texture Analysis for Evaluation of X-Ray Images of Fungal-Infected Maize Kernels

    Orina, Irene; Manley, Marena; Kucheryavskiy, Sergey V.

    2018-01-01

The feasibility of image texture analysis to evaluate X-ray images of fungal-infected maize kernels was investigated. X-ray images of maize kernels infected with Fusarium verticillioides and control kernels were acquired using high-resolution X-ray micro-computed tomography. After image acquisition, first-order statistical and grey-level co-occurrence matrix (GLCM) features (including homogeneity and contrast) were extracted from the side, front and top views of each kernel and used as inputs for principal component analysis (PCA). The first-order statistical image features gave a better separation of the control from infected kernels on day 8 post-inoculation. Classification models were developed using partial least squares discriminant analysis (PLS-DA), and accuracies of 67 and 73% were achieved using first-order statistical features and GLCM extracted features, respectively. This work provides information on the possible application of image texture as a method for analysing X-ray images of fungal-infected maize kernels.

  16. Detoxification of Jatropha curcas kernel cake by a novel Streptomyces fimicarius strain.

    Wang, Xing-Hong; Ou, Lingcheng; Fu, Liang-Liang; Zheng, Shui; Lou, Ji-Dong; Gomes-Laranjo, José; Li, Jiao; Zhang, Changhe

    2013-09-15

A huge amount of kernel cake, which contains a variety of toxins including phorbol esters (tumor promoters), is projected to be generated yearly in the near future by the Jatropha biodiesel industry. We showed that the kernel cake strongly inhibited plant seed germination and root growth and was highly toxic to carp fingerlings, even though phorbol esters were undetectable by HPLC. It must therefore be detoxified before disposal into the environment. A mathematical model was established to estimate the general toxicity of the kernel cake by determining the survival time of carp fingerlings. A new strain (Streptomyces fimicarius YUCM 310038) capable of degrading the total toxicity by more than 97% in a 9-day solid-state fermentation was screened out from 578 strains, including 198 known strains and 380 strains isolated from air and soil. The kernel cake fermented by YUCM 310038 was nontoxic to plants and carp fingerlings and significantly promoted tobacco plant growth, indicating its potential to transform the toxic kernel cake into bio-safe animal feed or organic fertilizer, removing the environmental concern and reducing the cost of the Jatropha biodiesel industry. The microbial strain profile essential for kernel cake detoxification is discussed. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. An iterative kernel based method for fourth order nonlinear equation with nonlinear boundary condition

    Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid

    2018-06-01

This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel which satisfies nonlinear boundary conditions, the standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is, therefore, to construct an iterative method by combining the reproducing kernel Hilbert space method with a shooting-like technique to solve the mentioned problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature. In this paper, we present error estimation for the reproducing kernel method applied to nonlinear boundary value problems, possibly for the first time. Some numerical results are given to demonstrate the applicability of the method.
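
The shooting-like strategy combined with the kernel method above can be sketched on a much simpler stand-in problem. The second-order equation, the forward-Euler integrator, and the bisection bracket below are all illustrative choices, not the paper's fourth-order beam algorithm: the boundary value problem u'' = 6x, u(0) = 0, u(1) = 1 is turned into an initial value problem with unknown slope s, and s is adjusted until the far boundary condition is met.

```python
# Shooting-method sketch (illustrative stand-in, not the paper's algorithm):
# solve u'' = 6x with u(0) = 0, u(1) = 1 by treating the initial slope s as
# the unknown and solving F(s) = u_s(1) - 1 = 0 by bisection.
# The exact solution is u = x**3, whose initial slope is u'(0) = 0.

def integrate(s, n=1000):
    """Forward-Euler integration of u'' = 6x from x=0 with u(0)=0, u'(0)=s."""
    h = 1.0 / n
    u, v, x = 0.0, s, 0.0
    for _ in range(n):
        u, v, x = u + h * v, v + h * 6.0 * x, x + h
    return u

# bisection on the shooting residual: integrate(s) is increasing in s
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if integrate(mid) < 1.0:
        lo = mid
    else:
        hi = mid
s = 0.5 * (lo + hi)   # close to the exact initial slope 0 (up to Euler error)
```

The same outer loop works unchanged if the crude Euler integrator is replaced by a reproducing-kernel solve of the initial value problem, which is the spirit of the combined method.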

  18. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames used in dynamic imaging. The kernel method for image reconstruction has been developed to improve reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, demonstrates that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
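
The kernel representation that embeds prior information in the forward model can be sketched as x = Kα, with K built from composite-image features. The helper below is an illustrative construction; the function name, neighbor count, and bandwidth are assumptions, not values from the paper:

```python
import numpy as np

def build_kernel_matrix(features, sigma=1.0, k_neighbors=3):
    """Row-normalized Gaussian kernel matrix over per-voxel feature vectors
    (a generic sketch of the kernel construction, with kNN sparsification)."""
    n = features.shape[0]
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    for i in range(n):
        # keep only the k nearest neighbors in each row (self included)
        K[i, np.argsort(d2[i])[k_neighbors:]] = 0.0
    K /= K.sum(axis=1, keepdims=True)   # row-normalize
    return K

rng = np.random.default_rng(0)
features = rng.normal(size=(20, 3))   # composite-image features, one row per voxel
alpha = rng.random(20)                # kernel coefficients to be reconstructed
K = build_kernel_matrix(features)
x = K @ alpha                         # image represented in the kernel basis
```

In the full method, the reconstruction estimates α from the projection data through the composed forward model P·K, so the image x = Kα inherits smoothness from the composite-data kernel.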

  19. Modeling listeners' emotional response to music.

    Eerola, Tuomas

    2012-10-01

An overview of the computational prediction of emotional responses to music is presented. Communication of emotions by music has received a great deal of attention in recent years, and a large number of empirical studies have described the role of individual features (tempo, mode, articulation, timbre) in predicting the emotions suggested or invoked by the music. However, unlike the present work, relatively few studies have attempted to model continua of expressed emotions using a variety of musical features from audio-based representations in a correlational design. The construction of the computational model is divided into four separate phases, each with a different focus for evaluation: the theoretical selection of relevant features, empirical assessment of feature validity, actual feature selection, and overall evaluation of the model. Existing research on music and emotions and on the extraction of musical features is reviewed in terms of these criteria. Examples drawn from recent studies of emotions within the context of film soundtracks are used to demonstrate each phase in the construction of the model. These models are able to explain the dominant part of listeners' self-reports of the emotions expressed by music, and they show potential to generalize over different genres within Western music. Possible applications of the computational models of emotions are discussed. Copyright © 2012 Cognitive Science Society, Inc.

  20. Collision risk in white-tailed eagles. Modelling kernel-based collision risk using satellite telemetry data in Smoela wind-power plant

    May, Roel; Nygaard, Torgeir; Dahl, Espen Lie; Reitan, Ole; Bevanger, Kjetil

    2011-05-15

    Large soaring birds of prey, such as the white-tailed eagle, are recognized to be perhaps the most vulnerable bird group regarding risk of collisions with turbines in wind-power plants. Their mortalities have called for methods capable of modelling collision risks in connection with the planning of new wind-power developments. The so-called 'Band model' estimates collision risk based on the number of birds flying through the rotor swept zone and the probability of being hit by the passing rotor blades. In the calculations for the expected collision mortality a correction factor for avoidance behaviour is included. The overarching objective of this study was to use satellite telemetry data and recorded mortality to back-calculate the correction factor for white-tailed eagles. The Smoela wind-power plant consists of 68 turbines, over an area of approximately 18 km2. Since autumn 2006 the number of collisions has been recorded on a weekly basis. The analyses were based on satellite telemetry data from 28 white-tailed eagles equipped with backpack transmitters since 2005. The correction factor (i.e. 'avoidance rate') including uncertainty levels used within the Band collision risk model for white-tailed eagles was 99% (94-100%) for spring and 100% for the other seasons. The year-round estimate, irrespective of season, was 98% (95-99%). Although the year-round estimate was similar, the correction factor for spring was higher than the correction factor of 95% derived earlier from vantage point data. The satellite telemetry data may provide an alternative way to provide insight into relative risk among seasons, and help identify periods or areas with increased risk either in a pre- or post construction situation. (Author)
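
The structure of the Band-model calculation described above, an expected collision count corrected by an avoidance rate, can be sketched in a few lines. The function names and numbers below are illustrative only, not values from the study:

```python
def expected_collisions(n_transits, p_hit, avoidance_rate):
    """Band-model style estimate: transits through the rotor-swept zone,
    times the geometric probability of being struck by a passing blade,
    corrected for behavioural avoidance."""
    return n_transits * p_hit * (1.0 - avoidance_rate)

def back_calc_avoidance(observed_collisions, n_transits, p_hit):
    """Back-calculate the avoidance rate from recorded mortality, as done
    here with telemetry-derived transit counts (illustrative numbers)."""
    return 1.0 - observed_collisions / (n_transits * p_hit)

# e.g. 1000 modelled transits, 10% geometric hit probability, 98% avoidance
collisions = expected_collisions(1000, 0.10, 0.98)
avoidance = back_calc_avoidance(collisions, 1000, 0.10)
```

The study inverts the second relation: observed mortality and telemetry-derived passage rates fix the left side, so the avoidance correction factor is the unknown.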

  1. OS X and iOS Kernel Programming

    Halvorsen, Ole Henry

    2011-01-01

OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, iPads, and Macs.

  2. The Classification of Diabetes Mellitus Using Kernel k-means

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm: it uses kernel learning, which can handle nonlinearly separable data, and this is where it differs from common k-means. The performance of kernel k-means in detecting diabetes mellitus was also compared with the SOM algorithm. The experimental results show that kernel k-means has good performance, much better than SOM.
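
A minimal sketch of kernel k-means on a precomputed Gram matrix, using the standard feature-space distance expansion ||φ(x) − m_c||² = K_ii − (2/|C|)Σ_j K_ij + (1/|C|²)Σ_{j,l} K_jl; this is a generic textbook formulation, not the authors' implementation:

```python
import numpy as np

def kernel_kmeans(K, k, n_iter=20, seed=0):
    """Kernel k-means on a precomputed Gram matrix K (generic sketch)."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=n)
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            if not mask.any():
                continue                      # empty cluster: leave at inf
            m = mask.sum()
            # squared feature-space distance to the cluster mean, in kernel values
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, mask].sum(axis=1) / m
                          + K[np.ix_(mask, mask)].sum() / m ** 2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# demo: two concentric rings (nonlinearly separable) with an RBF Gram matrix
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 60)
r = np.r_[np.full(30, 1.0), np.full(30, 4.0)]
X = np.c_[r * np.cos(t), r * np.sin(t)] + rng.normal(scale=0.1, size=(60, 2))
K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2.0)
labels = kernel_kmeans(K, 2)
```

Because the distance expansion uses only kernel values, no explicit feature mapping is ever computed, which is exactly what lets the method handle data that plain k-means cannot separate linearly.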

  3. Object classification and detection with context kernel descriptors

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

Context information is important in object representation. By embedding context cues of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use the spatial consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component…

  4. Analysis of fast neutrons elastic moderator through exact solutions involving synthetic-kernels

    Moura Neto, C.; Chung, F.L.; Amorim, E.S.

    1979-07-01

    The computation difficulties in the transport equation solution applied to fast reactors can be reduced by the development of approximate models, assuming that the continuous moderation holds. Two approximations were studied. The first one was based on an expansion in Taylor's series (Fermi, Wigner, Greuling and Goertzel models), and the second involving the utilization of synthetic Kernels (Walti, Turinsky, Becker and Malaviya models). The flux obtained by the exact method is compared with the fluxes from the different models based on synthetic Kernels. It can be verified that the present study is realistic for energies smaller than the threshold for inelastic scattering, as well as in the resonance region. (Author) [pt

  5. Recursive integral equations with positive kernel for lattice calculations

    Illuminati, F.; Isopi, M.

    1990-11-01

A Kirkwood-Salzburg integral equation, with a positive-definite kernel, for the states of lattice models of statistical mechanics and quantum field theory is derived. The equation is defined in the thermodynamic limit, and its iterative solution is convergent. Moreover, positivity leads to an exact a priori bound on the iteration. The equation's relevance as a reliable algorithm for lattice calculations is therefore suggested, and it is illustrated with a simple application. It should provide a viable alternative to Monte Carlo methods for models of statistical mechanics and lattice gauge theories. 10 refs
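
The convergent iterative solution of such an integral equation can be sketched generically: discretize the kernel, scale it so the iteration contracts, and iterate the fixed-point map. Everything below is an illustrative stand-in, not the Kirkwood-Salzburg equation itself:

```python
import numpy as np

# Generic sketch: solve the discretized linear integral equation f = g + K f
# by direct iteration, which converges when the spectral radius of K is < 1.
rng = np.random.default_rng(0)
n = 30
K = np.abs(rng.normal(size=(n, n)))              # a positive kernel (illustrative)
K *= 0.5 / np.abs(np.linalg.eigvals(K)).max()    # scale: spectral radius 0.5
g = rng.random(n)

f = np.zeros(n)
for _ in range(200):
    f = g + K @ f      # fixed-point iteration (Neumann series)
# f now satisfies the equation to high accuracy
```

Positivity of the kernel plays the same role as in the paper's argument: every iterate stays in the positive cone, so the partial sums are monotone and the a priori bound on the iteration follows.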

  6. Optimization of the acceptance of prebiotic beverage made from cashew nut kernels and passion fruit juice.

    Rebouças, Marina Cabral; Rodrigues, Maria do Carmo Passos; Afonso, Marcos Rodrigues Amorim

    2014-07-01

The aim of this research was to develop a prebiotic beverage from a hydrosoluble extract of broken cashew nut kernels and passion fruit juice, using response surface methodology to optimize acceptance of its sensory attributes. A 2² central composite rotatable design was used, producing 9 formulations, which were then evaluated using different concentrations of hydrosoluble cashew nut kernel extract, passion fruit juice, and oligofructose, with 3% sugar. The use of response surface methodology to interpret the sensory data made it possible to obtain a formulation with satisfactory acceptance which met the criteria of bifidogenic action and use of hydrosoluble cashew nut kernels, by using 14% oligofructose and 33% passion fruit juice. As a result of this study, it was possible to obtain a new functional prebiotic product, which combined the nutritional and functional properties of cashew nut kernels and oligofructose with the sensory properties of passion fruit juice in a beverage with satisfactory sensory acceptance. This new product emerges as an alternative for the industrial processing of broken cashew nut kernels, which have very low market value, enabling this sector to increase its profits. © 2014 Institute of Food Technologists®
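
The 2² central composite rotatable design mentioned above has a standard point layout in coded units, which can be generated as follows. The code is a generic sketch; the two coded factors stand for the oligofructose and passion fruit juice concentrations:

```python
import numpy as np

# 2^2 central composite rotatable design (CCRD) in coded units:
# 4 factorial points, 4 axial (star) points at distance alpha = sqrt(2),
# and a center point -- the 9 formulations mentioned in the abstract.
alpha = np.sqrt(2.0)
factorial = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
axial = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
center = [(0, 0)]
design = np.array(factorial + axial + center)
```

Rotatability means every non-center point lies at the same distance √2 from the center, so the prediction variance of the fitted second-order surface depends only on distance from the center, not direction.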

  7. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
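
The positive-semidefiniteness requirement on kernel matrices described above can be checked numerically. The sketch below is illustrative; the Gram matrices are generic examples, not genomic data:

```python
import numpy as np

def is_psd(gram, tol=1e-10):
    """Check that a symmetric Gram matrix is positive semidefinite,
    i.e. all eigenvalues >= 0 up to floating-point tolerance."""
    return bool(np.linalg.eigvalsh(gram).min() >= -tol)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))                 # 10 subjects, 4 features each

linear_gram = X @ X.T                        # linear kernel: always PSD
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)   # pairwise squared distances
rbf_gram = np.exp(-0.5 * d2)                 # Gaussian (RBF) kernel: always PSD
not_a_kernel = -d2                           # negated distance: NOT a valid kernel
```

The last matrix illustrates the review's point: a similarity-like quantity is only usable in kernel-based statistical methods (mixed models, score statistics, SVMs) if it actually yields a PSD matrix over all pairs of subjects.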

  8. Model Predictive Control based on Finite Impulse Response Models

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR-based robust MPC as well as to estimate the maximum performance improvements achievable by robust MPC.
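
The FIR prediction underlying such a controller is simply a convolution of past inputs with the impulse-response coefficients, plus a constant output-disturbance estimate. A minimal sketch with illustrative coefficients (not from the paper):

```python
import numpy as np

def fir_predict(h, u_past, d_hat=0.0):
    """One-step output prediction from a finite impulse response model:
    y = sum_i h[i] * u[k-i] + d_hat, with u_past ordered oldest first."""
    return float(np.dot(h, u_past[::-1])) + d_hat

h = np.array([0.5, 0.3, 0.1])   # illustrative impulse-response coefficients
u = np.array([1.0, 1.0, 1.0])   # the last three inputs, oldest first
y = fir_predict(h, u)           # steady unit input: prediction = sum of h
```

In the controller, d_hat is updated each sample from the measured output error, which is the "constant output disturbance filter" mentioned in the abstract; uncertainty in the coefficients h is what drives the robustness comparison.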

  9. Kernel abortion in maize. II. Distribution of 14C among kernel carboydrates

    Hanft, J.M.; Jones, R.J.

    1986-01-01

This study was designed to compare the uptake and distribution of 14C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [14C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [14C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of 14C in endosperm fructose, glucose, and sucrose.

  10. Fluidization calculation on nuclear fuel kernel coating

    Sukarsono; Wardaya; Indra-Suryawan

    1996-01-01

The fluidization of nuclear fuel kernel coating was calculated. The bottom of the reactor was in the form of a cone, with a cylinder on top of the cone; the diameter of the fluidization cylinder was 2 cm, and that of the upper part of the cylinder was 3 cm. Fluidization took place in the cone and the first cylinder. The maximum and minimum gas velocities were calculated for various kernel diameters, and the porosity and bed height were calculated for various gas stream velocities. The calculation was done with a BASIC program.

  11. Reduced multiple empirical kernel learning machine.

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL has high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), where the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, it is the first to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and meanwhile needs less storage space, especially in testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of the EKM-based MKL; (3) …
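
The isomorphism claim, that dot products in the original feature space equal those between the corresponding vectors in the spanned orthonormal subspace, can be verified numerically. The sketch uses a generic QR-based orthonormal basis as a stand-in for the paper's Gauss-elimination procedure:

```python
import numpy as np

# Illustrative check of the dot-product-preserving projection:
# project empirically mapped samples onto an orthonormal basis of their
# own span and compare Gram matrices before and after.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(8, 5))    # empirically mapped samples (one per row)
Q, _ = np.linalg.qr(Phi.T)       # orthonormal basis of the row span (stand-in
                                 # for the paper's Gauss-elimination step)
Z = Phi @ Q                      # coordinates in the reduced orthonormal subspace
# Z @ Z.T equals Phi @ Phi.T: dot products are preserved exactly
```

Because Q is orthogonal, Z Zᵀ = Φ Q Qᵀ Φᵀ = Φ Φᵀ, which is precisely the isomorphism the abstract describes; any kernel machine trained on Z therefore behaves as if trained on Φ, at lower cost when the span has low dimension.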

  12. Response of subassembly model with internals

    Kennedy, J.M.; Belytschko, T.

    1977-01-01

Analytical tools have been developed, and validated by controlled sets of experiments, to understand reasonably well the response of an accident and/or single subassembly in an LMFBR. They have been subjected to a variety of loadings and boundary environments. Some large subassembly cluster experiments have been performed; however, little analytical work has accompanied them because of the lack of suitable analytical tools. Reported here are analytical approaches to: (1) development of more sophisticated models for the subassembly internals, that is, the fuel pins and coolant; (2) development of models for representing three-dimensional effects in subassemblies adjacent to the accident subassembly. These analytical developments will provide feasible capabilities for doing economical three-dimensional analysis not previously available.

  13. Modeling of Cardiovascular Response to Weightlessness

    Sharp, M. Keith

    1999-01-01

…pressure and, to a limited extent, in extravascular and pericardial hydrostatic pressure were investigated. A complete hydraulic model of the cardiovascular system was built and flown aboard the NASA KC-135, and a computer model was developed and tested in simulated microgravity. Results obtained with these models have confirmed that a simple lack of hydrostatic pressure within an artificial ventricle causes a decrease in stroke volume. When combined with the acute increase in ventricular pressure associated with the elimination of hydrostatic pressure within the vasculature and the resultant cephalad fluid shift with the models in the upright position, however, stroke volume increased in the models. Imposition of a decreased pericardial pressure in the computer model and in a simplified hydraulic model increased stroke volume. Physiologic regional fluid shifting was also demonstrated by the models. The unifying parameter characterizing cardiac response was diastolic ventricular transmural pressure (DVΔP). The elimination of intraventricular hydrostatic pressure in 0-G decreased DVΔP and stroke volume, while the elimination of intravascular hydrostatic pressure increased DVΔP and stroke volume in the upright posture, but reduced DVΔP and stroke volume in the launch posture. The release of gravity on the chest wall and its associated influence on intrathoracic pressure, simulated by a drop in extraventricular pressure, increased DVΔP and stroke volume.

  14. Response of subassembly model with internals

    Kennedy, J.M.; Belytschko, T.

    1977-01-01

For the purpose of predicting the structural response in such accident environments, a program, STRAW, has been developed. This is a finite element program which can treat the structure-fluid system consisting of the coolant and the subassembly walls. Both material nonlinearities due to elastic-plastic response and geometric nonlinearities due to large displacements can be treated. The energy source can be represented either by a pressure-time history or an equation of state. Because of the lack of any simplifying symmetry in the geometry of the subassembly, the program uses a quasi-three-dimensional model. The cross section of the accident hexcan and the adjacent hexcan is modelled by a two-dimensional finite element mesh which represents the hexcan walls by flexural elements and the internals by two-dimensional continuum elements. This mesh is coupled to a series of one-dimensional elements which represent the axial flow of the coolant and the longitudinal stiffness of the fuel pins and hexcan. The latter is of importance in the adjacent hexcan, for its lateral displacement is resisted entirely by this flexural behavior and its inertia. The adequacy of such quasi-three-dimensional models has been examined by comparing the STRAW results against almost complete three-dimensional analyses performed with the REXCAT program. In that program, the accident hexcan is represented in a true three-dimensional sense by plate-shell elements, whereas the internals are represented as axisymmetric. These comparisons indicate that the quasi-three-dimensional approach employed in STRAW is valid for a large range of pressure-time histories; the fidelity of this model suffers primarily when pressure reaches a peak over a very short time, such as 5-10 microseconds.

  15. Ovine model for studying pulmonary immune responses

    Joel, D.D.; Chanana, A.D.

    1984-01-01

Anatomical features of the sheep lung make it an excellent model for studying pulmonary immunity. Four specific lung segments were identified which drain exclusively to three separate lymph nodes. One of these segments, the dorsal basal segment of the right lung, is drained by the caudal mediastinal lymph node (CMLN). Cannulation of the efferent lymph duct of the CMLN along with highly localized intrabronchial instillation of antigen provides a functional unit with which to study factors involved in development of pulmonary immune responses. Following intrabronchial immunization there was an increased output of lymphoblasts and specific antibody-forming cells in efferent CMLN lymph. Continuous divergence of efferent lymph eliminated the serum antibody response but did not totally eliminate the appearance of specific antibody in fluid obtained by bronchoalveolar lavage. In these studies localized immunization of the right cranial lobe served as a control. Efferent lymphoblasts produced in response to intrabronchial antigen were labeled with 125I-iododeoxyuridine and their migrational patterns and tissue distribution compared to lymphoblasts obtained from the thoracic duct. The results indicated that pulmonary immunoblasts tend to relocate in lung tissue and reappear with a higher specific activity in pulmonary lymph than in thoracic duct lymph. The reverse was observed with labeled intestinal lymphoblasts. 35 references, 2 figures, 3 tables

  16. Ovine model for studying pulmonary immune responses

    Joel, D.D.; Chanana, A.D.

    1984-11-25

Anatomical features of the sheep lung make it an excellent model for studying pulmonary immunity. Four specific lung segments were identified which drain exclusively to three separate lymph nodes. One of these segments, the dorsal basal segment of the right lung, is drained by the caudal mediastinal lymph node (CMLN). Cannulation of the efferent lymph duct of the CMLN along with highly localized intrabronchial instillation of antigen provides a functional unit with which to study factors involved in development of pulmonary immune responses. Following intrabronchial immunization there was an increased output of lymphoblasts and specific antibody-forming cells in efferent CMLN lymph. Continuous divergence of efferent lymph eliminated the serum antibody response but did not totally eliminate the appearance of specific antibody in fluid obtained by bronchoalveolar lavage. In these studies localized immunization of the right cranial lobe served as a control. Efferent lymphoblasts produced in response to intrabronchial antigen were labeled with 125I-iododeoxyuridine and their migrational patterns and tissue distribution compared to lymphoblasts obtained from the thoracic duct. The results indicated that pulmonary immunoblasts tend to relocate in lung tissue and reappear with a higher specific activity in pulmonary lymph than in thoracic duct lymph. The reverse was observed with labeled intestinal lymphoblasts. 35 references, 2 figures, 3 tables.

  17. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

…In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, and locally linear embedding, and propose a new method, kernelized locally linear embedding…

  18. Variable kernel density estimation in high-dimensional feature spaces

    Van der Walt, Christiaan M

    2017-02-01

Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high-dimensional feature spaces.
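
Variable (per-point bandwidth) kernel density estimation can be sketched in a few lines; the Gaussian kernel and the bandwidth values below are illustrative assumptions, not the bandwidth-estimation scheme of the work itself:

```python
import numpy as np

def variable_kde(x_query, data, bandwidths):
    """Variable-bandwidth Gaussian KDE in 1-D: each data point contributes
    a normalized Gaussian kernel with its own width, and the density is the
    average of the per-point kernels."""
    diffs = (x_query[:, None] - data[None, :]) / bandwidths[None, :]
    kernels = np.exp(-0.5 * diffs ** 2) / (np.sqrt(2.0 * np.pi) * bandwidths[None, :])
    return kernels.mean(axis=1)

# demo: two points with deliberately different local bandwidths
data = np.array([0.0, 5.0])
bw = np.array([0.5, 1.5])
xs = np.linspace(-10.0, 15.0, 2001)
dens = variable_kde(xs, data, bw)
```

The hard part addressed by this line of work is choosing the per-point bandwidths, which becomes much more delicate in high-dimensional feature spaces, where fixed-bandwidth estimators degrade badly.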

  19. On methods to increase the security of the Linux kernel

    Matvejchikov, I.V.

    2014-01-01

    Methods to increase the security of the Linux kernel for the implementation of imposed protection tools have been examined. The methods of incorporation into various subsystems of the kernel on the x86 architecture have been described [ru

  20. Linear and kernel methods for multi- and hypervariate change detection

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

…Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into a higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component…
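
Kernel substitution can be made concrete with the degree-2 polynomial kernel, where the explicit feature map is small enough to write down: the Gram matrix of explicit mappings equals the one computed directly from the kernel function. This is a generic identity, not specific to IR-MAD:

```python
import numpy as np

def phi(x):
    """Explicit degree-2 polynomial feature map for a 2-D point, chosen so
    that phi(x) . phi(y) = (1 + x . y)**2."""
    x1, x2 = x
    return np.array([1.0, np.sqrt(2.0) * x1, np.sqrt(2.0) * x2,
                     x1 * x1, x2 * x2, np.sqrt(2.0) * x1 * x2])

def poly_kernel(x, y):
    """Inhomogeneous polynomial kernel of degree 2."""
    return (1.0 + x @ y) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
gram_explicit = np.array([[phi(a) @ phi(b) for b in X] for a in X])
gram_kernel = np.array([[poly_kernel(a, b) for b in X] for a in X])
# the two Gram matrices agree entry by entry: the mapping never has to
# be computed when only inner products are needed
```

For kernels like the Gaussian, the implicit feature space is infinite-dimensional, so the kernel function is the only practical route, which is exactly why the dual (Q-mode) formulation matters for the analyses above.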

  1. Kernel methods in orthogonalization of multi- and hypervariate data

    Nielsen, Allan Aasbjerg

    2009-01-01

A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite…

  2. Using Response Times to Assess Learning Progress: A Joint Model for Responses and Response Times

    Wang, Shiyu; Zhang, Susu; Douglas, Jeff; Culpepper, Steven

    2018-01-01

Analyzing students' growth remains an important topic in educational research. Most recently, Diagnostic Classification Models (DCMs) have been used to track skill acquisition in a longitudinal fashion, with the purpose of providing an estimate of students' learning trajectories in terms of the change of fine-grained skills over time. Response time…

  3. Grid Integration of Aggregated Demand Response, Part 2: Modeling Demand Response in a Production Cost Model

    Hummon, Marissa [National Renewable Energy Lab. (NREL), Golden, CO (United States); Palchak, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Denholm, Paul [National Renewable Energy Lab. (NREL), Golden, CO (United States); Jorgenson, Jennie [National Renewable Energy Lab. (NREL), Golden, CO (United States); Olsen, Daniel J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kiliccote, Sila [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Matson, Nance [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sohn, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rose, Cody [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dudley, Junqiao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Goli, Sasank [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ma, Ookie [U.S. Dept. of Energy, Washington, DC (United States)

    2013-12-01

This report is one of a series stemming from the U.S. Department of Energy (DOE) Demand Response and Energy Storage Integration Study. This study is a multi-national-laboratory effort to assess the potential value of demand response (DR) and energy storage to electricity systems with different penetration levels of variable renewable resources and to improve our understanding of associated markets and institutions. This report implements DR resources in the commercial production cost model PLEXOS.

  4. Prediction Models for Dynamic Demand Response

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of

  5. Mitigation of artifacts in rtm with migration kernel decomposition

    Zhan, Ge

    2012-01-01

The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and therefore presents us with opportunities for improving the quality of RTM images.

  6. Relationship between attenuation coefficients and dose-spread kernels

    Boyer, A.L.

    1988-01-01

Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods.
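
The convolution relation mentioned above can be illustrated with a toy 1D example. The beam profile and the exponential kernel shape below are made up for illustration; normalizing the kernel to unit sum is the discrete analogue of the energy-conservation check.

```python
import numpy as np

# Assumed toy quantities: a 1D primary-fluence profile and a normalized
# dose-spread kernel. Convolving the two yields the dose profile; the
# kernel summing to 1 expresses conservation of the energy released by
# primary interactions.
x = np.linspace(-10, 10, 201)                    # position, cm
fluence = np.where(np.abs(x) <= 5, 1.0, 0.0)     # 10-cm-wide uniform beam
kernel = np.exp(-np.abs(x))                      # crude dose-spread kernel shape
kernel /= kernel.sum()                           # conserve deposited energy

dose = np.convolve(fluence, kernel, mode="same")
```

The total dose then matches the total primary fluence up to edge truncation, which is exactly the kind of energy-conservation check the record describes.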

  7. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric

    2010-01-01

Babcock and Wilcox (B and W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B and W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B and W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B and W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and have also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small-scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing the capacity of the current fabrication line for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  8. Consistent Estimation of Pricing Kernels from Noisy Price Data

    Vladislav Kargin

    2003-01-01

If pricing kernels are assumed non-negative, then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.

  9. The Value of Response Times in Item Response Modeling

    Molenaar, Dylan

    2015-01-01

    A new and very interesting approach to the analysis of responses and response times is proposed by Goldhammer (this issue). In his approach, differences in the speed-ability compromise within respondents are considered to confound the differences in ability between respondents. These confounding effects of speed on the inferences about ability can…

  10. Support vector machines for nonlinear kernel ARMA system identification.

    Martínez-Ramón, Manel; Rojo-Alvarez, José Luis; Camps-Valls, Gustavo; Muñoz-Marí, Jordi; Navia-Vázquez, Angel; Soria-Olivas, Emilio; Figueiras-Vidal, Aníbal R

    2006-11-01

    Nonlinear system identification based on support vector machines (SVM) has been usually addressed by means of the standard SVM regression (SVR), which can be seen as an implicit nonlinear autoregressive and moving average (ARMA) model in some reproducing kernel Hilbert space (RKHS). The proposal of this letter is twofold. First, the explicit consideration of an ARMA model in an RKHS (SVM-ARMA2K) is proposed. We show that stating the ARMA equations in an RKHS leads to solving the regularized normal equations in that RKHS, in terms of the autocorrelation and cross correlation of the (nonlinearly) transformed input and output discrete time processes. Second, a general class of SVM-based system identification nonlinear models is presented, based on the use of composite Mercer's kernels. This general class can improve model flexibility by emphasizing the input-output cross information (SVM-ARMA4K), which leads to straightforward and natural combinations of implicit and explicit ARMA models (SVR-ARMA2K and SVR-ARMA4K). Capabilities of these different SVM-based system identification schemes are illustrated with two benchmark problems.
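
As a rough illustration of ARMA identification in an RKHS, the sketch below fits lagged input-output regressors with kernel ridge regression, a squared-loss stand-in for the SVM formulation of the letter. The toy system, the lags, and all hyperparameters are assumptions made for the demo.

```python
import numpy as np

def fit_kernel_arma(u, y, p=2, q=2, gamma=0.5, lam=1e-3):
    """Nonlinear ARMA identification sketch.

    Builds regressors from p past outputs and q past inputs and fits
    them with kernel ridge regression in an RBF RKHS (the implicit
    nonlinear mapping enters only through the Gram matrix).
    """
    m = max(p, q)
    Z = np.array([np.concatenate([y[t - p:t], u[t - q:t]]) for t in range(m, len(y))])
    target = y[m:]
    sq = np.sum((Z[:, None, :] - Z[None, :, :])**2, axis=-1)
    K = np.exp(-gamma * sq)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), target)

    def predict(z):
        k = np.exp(-gamma * np.sum((Z - z)**2, axis=1))
        return float(k @ alpha)

    return Z, target, predict

# toy nonlinear ARMA-like system driven by a random input
rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, 300)
y = np.zeros(300)
for t in range(2, 300):
    y[t] = 0.5 * np.tanh(y[t - 1]) - 0.2 * y[t - 2] + 0.3 * u[t - 1] + 0.1 * u[t - 2]

Z, target, predict = fit_kernel_arma(u, y)
resid = np.array([predict(z) for z in Z]) - target
```

Replacing the squared loss with the epsilon-insensitive SVR loss would recover the flavour of the SVM-ARMA schemes the record compares.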

  11. Quantum logic in dagger kernel categories

    Heunen, C.; Jacobs, B.P.F.

    2009-01-01

    This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial

  12. Quantum logic in dagger kernel categories

    Heunen, C.; Jacobs, B.P.F.; Coecke, B.; Panangaden, P.; Selinger, P.

    2011-01-01

    This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial

  13. Symbol recognition with kernel density matching.

    Zhang, Wan; Wenyin, Liu; Zhang, Kun

    2006-12-01

    We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
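
A minimal sketch of the matching idea, assuming hypothetical "symbols" given as 2D point sets: each is turned into a kernel density on a grid and compared with the Kullback-Leibler divergence. The orientation search of the paper is omitted, and the shapes and bandwidth are illustrative.

```python
import numpy as np

def density_on_grid(points, grid, h=0.15):
    """Evaluate a 2D Gaussian KDE on a flattened grid of cells."""
    d2 = np.sum((grid[:, None, :] - points[None, :, :])**2, axis=-1)
    f = np.exp(-0.5 * d2 / h**2).sum(axis=1)
    return f / f.sum()                       # normalise to a discrete density

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two discrete densities (eps avoids log 0)."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

# Hypothetical 'symbols' as 2D point sets: a circle and a square outline.
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
circle = 0.5 * np.column_stack([np.cos(t), np.sin(t)])
s = np.linspace(-0.5, 0.5, 15)
square = np.concatenate(
    [np.column_stack([s, np.full_like(s, c)]) for c in (-0.5, 0.5)]
    + [np.column_stack([np.full_like(s, c), s]) for c in (-0.5, 0.5)])

xs = np.linspace(-1, 1, 25)
grid = np.array([(x, y) for x in xs for y in xs])
p_circle = density_on_grid(circle, grid)
p_square = density_on_grid(square, grid)
```

Identical symbols give zero divergence, different shapes a strictly positive one, which is the quantity the retrieval step would rank on.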

  14. Flexible Scheduling in Multimedia Kernels: An Overview

    Jansen, P.G.; Scholten, Johan; Laan, Rene; Chow, W.S.

    1999-01-01

Current Hard Real-Time (HRT) kernels have their timely behaviour guaranteed at the cost of a rather restrictive use of the available resources. This makes current HRT scheduling techniques inadequate for use in a multimedia environment where we can make a considerable profit by a better and more

  15. Reproducing kernel Hilbert spaces of Gaussian priors

    Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.

    2008-01-01

    We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described

  16. A synthesis of empirical plant dispersal kernels

    Bullock, J. M.; González, L. M.; Tamme, R.; Götzenberger, Lars; White, S. M.; Pärtel, M.; Hooftman, D. A. P.

    2017-01-01

    Roč. 105, č. 1 (2017), s. 6-19 ISSN 0022-0477 Institutional support: RVO:67985939 Keywords : dispersal kernel * dispersal mode * probability density function Subject RIV: EH - Ecology, Behaviour OBOR OECD: Ecology Impact factor: 5.813, year: 2016

  17. Analytic continuation of weighted Bergman kernels

    Engliš, Miroslav

    2010-01-01

    Roč. 94, č. 6 (2010), s. 622-650 ISSN 0021-7824 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * analytic continuation * Toeplitz operator Subject RIV: BA - General Mathematics Impact factor: 1.450, year: 2010 http://www.sciencedirect.com/science/article/pii/S0021782410000942

  18. On convergence of kernel learning estimators

    Norkin, V.I.; Keyzer, M.A.

    2009-01-01

    The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKHS). The objective (risk) functional depends on functions from this RKHS and takes the form of a mathematical expectation (integral) of a nonnegative integrand (loss function) over a probability

  19. Analytic properties of the Virasoro modular kernel

    Nemkov, Nikita [Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Institute for Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); National University of Science and Technology MISIS, The Laboratory of Superconducting metamaterials, Moscow (Russian Federation)

    2017-06-15

    On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block. (orig.)

  20. Kernel based subspace projection of hyperspectral images

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...

  1. Kernel Temporal Differences for Neural Decoding

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
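
A drastically simplified kernel TD(0) sketch on a toy chain conveys the flavour of value estimation with kernel representations. It omits eligibility traces, sparsification, and everything specific to KTD(λ); the chain, kernel width, and learning rate are all illustrative.

```python
import numpy as np

def gauss_k(s, C, sigma=0.25):
    """Gaussian kernel between a query state and an array of centers."""
    return np.exp(-np.sum((np.atleast_2d(s) - C) ** 2, axis=-1) / (2 * sigma ** 2))

class KernelTD:
    """Minimal kernel TD(0) value estimator: each update adds one kernel
    unit weighted by the TD error (a simplified sketch of the KTD family)."""
    def __init__(self, eta=0.2, gamma=0.9):
        self.centers, self.weights = [], []
        self.eta, self.gamma = eta, gamma

    def value(self, s):
        if not self.centers:
            return 0.0
        C = np.array(self.centers)
        return float(gauss_k(s, C) @ np.array(self.weights))

    def update(self, s, r, s_next, terminal=False):
        target = r if terminal else r + self.gamma * self.value(s_next)
        delta = target - self.value(s)            # TD error
        self.centers.append(np.atleast_1d(s).astype(float))
        self.weights.append(self.eta * delta)     # new kernel unit absorbs the error

# Toy chain 0..4: move right with prob 0.8, reward 1 on reaching state 4.
rng = np.random.default_rng(3)
agent = KernelTD()
for _ in range(300):
    s = 0
    while s < 4:
        s_next = s + 1 if rng.random() < 0.8 else max(s - 1, 0)
        r = 1.0 if s_next == 4 else 0.0
        agent.update(np.array([s / 4]), r, np.array([s_next / 4]), terminal=(s_next == 4))
        s = s_next
```

After training, states close to the goal carry higher estimated value than the start state, as TD learning predicts.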

  2. Scattering kernels and cross sections working group

    Russell, G.; MacFarlane, B.; Brun, T.

    1998-01-01

    Topics addressed by this working group are: (1) immediate needs of the cold-moderator community and how to fill them; (2) synthetic scattering kernels; (3) very simple synthetic scattering functions; (4) measurements of interest; and (5) general issues. Brief summaries are given for each of these topics

  3. Enhanced gluten properties in soft kernel durum wheat

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  4. Stable Kernel Representations as Nonlinear Left Coprime Factorizations

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel

  5. 7 CFR 981.60 - Determination of kernel weight.

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  6. 21 CFR 176.350 - Tamarind seed kernel powder.

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  7. End-use quality of soft kernel durum wheat

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  8. Heat kernel analysis for Bessel operators on symmetric cones

    Möllers, Jan

    2014-01-01

The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...

  9. A Fast and Simple Graph Kernel for RDF

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster
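
In the spirit of the path-counting kernel described above, a tiny sketch: each instance is a labelled tree, and the kernel value is the inner product of its path-count vectors. The trees here are hypothetical stand-ins for instance trees extracted from an RDF graph, not the paper's actual construction.

```python
from collections import Counter

def paths(tree, prefix=()):
    """Enumerate all root-anchored label paths in a tree given as
    (label, [children])."""
    label, children = tree
    p = prefix + (label,)
    yield p
    for c in children:
        yield from paths(c, p)

def path_kernel(t1, t2):
    """Kernel value = number of matching label paths, i.e. the inner
    product of the two path-count vectors."""
    c1, c2 = Counter(paths(t1)), Counter(paths(t2))
    return sum(c1[p] * c2[p] for p in c1)

# Hypothetical instance trees, loosely mimicking RDF neighbourhoods.
a = ("Person", [("knows", [("Person", [])]), ("name", [])])
b = ("Person", [("knows", [("Person", [])])])
c = ("Place", [("name", [])])
```

Counting paths this way is linear in tree size, which is the intuition behind the speed advantage the record reports over subtree intersection.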

  10. 7 CFR 981.61 - Redetermination of kernel weight.

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  11. Single pass kernel k-means clustering method

    This paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clustering ... available at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.
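
A compact kernel k-means sketch, assuming an RBF Gram matrix on toy data. The farthest-point initialisation and the two-blob data are illustrative choices, not the single-pass scheme of the paper; the point is that all distances are computed from the Gram matrix alone.

```python
import numpy as np

def kernel_kmeans(K, k=2, n_iter=50):
    """Kernel k-means: cluster using feature-space distances computed
    from the Gram matrix K only, with deterministic farthest-point init."""
    n = K.shape[0]
    anchors = [0]
    while len(anchors) < k:                  # pick maximally dissimilar anchors
        anchors.append(int(np.argmin(K[anchors, :].sum(axis=0))))
    labels = np.argmax(K[:, anchors], axis=1)
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for j in range(k):
            mask = labels == j
            m = max(int(mask.sum()), 1)
            # ||phi(x)-mu_j||^2 = K_xx - 2*mean_i K_xi + mean_ii' K_ii'
            dist[:, j] = (np.diag(K)
                          - 2.0 * K[:, mask].sum(axis=1) / m
                          + K[np.ix_(mask, mask)].sum() / (m * m))
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# toy data: two well-separated blobs, RBF Gram matrix
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)), rng.normal(5.0, 0.5, (100, 2))])
K = np.exp(-0.5 * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
labels = kernel_kmeans(K, 2)
```

Because centroids are never formed explicitly, the same routine works for any valid kernel, which is what makes kernel k-means applicable to non-linearly-separable data.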

  12. Kuramoto model for infinite graphs with kernels

    Canale, Eduardo; Tembine, Hamidou; Tempone, Raul; Zouraris, Georgios E.

    2015-01-01

We focus on circulant graphs which have enough symmetries to make the computations easier. We then focus on the asymptotic regime where an integro-partial differential equation is derived. Numerical analysis and convergence proofs of the Fokker…

  13. Kernel method for clustering based on optimal target vector

    Angelini, Leonardo; Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano

    2006-01-01

We introduce Ising models, suitable for dichotomic clustering, with couplings that are (i) both ferro- and antiferromagnetic and (ii) dependent on the whole data set, not only on pairs of samples. Couplings are determined by exploiting the notion of an optimal target vector, introduced here as a link between kernel supervised and unsupervised learning. The effectiveness of the method is shown in the case of the well-known iris data set and in benchmarks of gene expression levels, where it works better than existing methods for dichotomic clustering.

  14. Performance analysis and kernel size study of the Lynx real-time operating system

    Liu, Yuan-Kwei; Gibson, James S.; Fernquist, Alan R.

    1993-01-01

This paper analyzes the Lynx real-time operating system (LynxOS), which has been selected as the operating system for the Space Station Freedom Data Management System (DMS). The features of LynxOS are compared to other Unix-based operating systems (OS). The tools for measuring the performance of LynxOS, which include a high-speed digital timer/counter board, a device driver program, and an application program, are analyzed. The timings for interrupt response, process creation and deletion, threads, semaphores, shared memory, and signals are measured. The memory size of the DMS Embedded Data Processor (EDP) is limited. Moreover, virtual memory is not suitable for real-time applications because page swap timing may not be deterministic. Therefore, the DMS software, including LynxOS, has to fit in the main memory of an EDP. To reduce the LynxOS kernel size, the following steps are taken: analyzing the factors that influence the kernel size; identifying the modules of LynxOS that may not be needed in an EDP; adjusting the system parameters of LynxOS; reconfiguring the device drivers used in LynxOS; and analyzing the symbol table. The reductions in kernel disk size, kernel memory size, and total kernel size from each step mentioned above are listed and analyzed.

  15. The Visualization and Analysis of POI Features under Network Space Supported by Kernel Density Estimation

    YU Wenhao

    2015-01-01

    Full Text Available The distribution pattern and the distribution density of urban facility POIs are of great significance in the fields of infrastructure planning and urban spatial analysis. Kernel density estimation, which has usually been utilized for expressing these spatial characteristics, is superior to other density estimation methods (such as quadrat analysis and Voronoi-based methods), in that it considers the regional impact based on the first law of geography. However, traditional kernel density estimation is mainly based on Euclidean space, ignoring the fact that the service function and interrelation of urban facilities operate over network path distance rather than conventional Euclidean distance. Hence, this research proposes a computational model of network kernel density estimation, together with an extended model for the case of added constraints. This work also discusses the impacts of the distance attenuation threshold and the height extreme on the representation of kernel density. A large-scale experiment on actual data, analyzing the different POIs' distribution patterns (random type, sparse type, regional-intensive type, linear-intensive type), discusses the spatial distribution characteristics, influencing factors, and service functions of POI infrastructure in the city.
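
A minimal network kernel density sketch on a hypothetical toy street network: kernel values decay with shortest-path (Dijkstra) distance rather than Euclidean distance, using an Epanechnikov kernel cut off at the attenuation threshold. Node names, edge lengths, and the bandwidth are all made up for illustration.

```python
import heapq

def shortest_paths(adj, src):
    """Dijkstra over an adjacency dict {node: [(nbr, length), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def network_kde(adj, events, node, h=2.0):
    """Network kernel density at `node`: Epanechnikov kernels decay with
    shortest-path distance to each event, zero beyond the threshold h."""
    dist = shortest_paths(adj, node)
    total = 0.0
    for e in events:
        d = dist.get(e, float("inf"))
        if d < h:
            total += 0.75 * (1 - (d / h) ** 2) / h
    return total

# hypothetical toy street network (nodes A..E, edge lengths in arbitrary units)
adj = {
    "A": [("B", 1.0), ("C", 2.0)],
    "B": [("A", 1.0), ("D", 1.0)],
    "C": [("A", 2.0), ("D", 2.0)],
    "D": [("B", 1.0), ("C", 2.0), ("E", 3.0)],
    "E": [("D", 3.0)],
}
events = ["B", "B", "D"]      # POI locations on the network
```

Density is highest where events cluster along the network and drops to zero for nodes beyond the attenuation threshold, which planar (Euclidean) KDE would miss.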

  16. A Wavelet Kernel-Based Primal Twin Support Vector Machine for Economic Development Prediction

    Fang Su

    2013-01-01

    Full Text Available Economic development forecasting allows planners to choose the right strategies for the future. This study proposes an economic development prediction method based on the wavelet kernel-based primal twin support vector machine algorithm. As gross domestic product (GDP) is an important indicator of economic development, economic development prediction means GDP prediction in this study. The wavelet kernel-based primal twin support vector machine algorithm solves two smaller quadratic programming problems instead of the single large one solved by the traditional support vector machine algorithm. Economic development data of Anhui province from 1992 to 2009 are used to study the prediction performance of the algorithm. This paper compares the mean prediction error of wavelet kernel-based primal twin support vector machine and traditional support vector machine models trained on samples with 3–5-dimensional input vectors. The testing results show that the economic development prediction accuracy of the wavelet kernel-based primal twin support vector machine model is better than that of the traditional support vector machine.
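
For illustration, here is a common Morlet-style wavelet kernel from the wavelet-kernel literature; the paper's exact kernel and its twin-SVM solver are not reproduced, and the dilation parameter is an illustrative choice.

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """Morlet-style translation-invariant wavelet kernel:
    k(x, y) = prod_i cos(1.75 (x_i - y_i) / a) * exp(-(x_i - y_i)^2 / (2 a^2)).
    The product of the Gaussian and the cosine factor is positive
    semi-definite in each coordinate, so the product over coordinates
    is a valid Mercer kernel."""
    diff = X[:, None, :] - Y[None, :, :]
    return np.prod(np.cos(1.75 * diff / a) * np.exp(-diff**2 / (2 * a**2)), axis=-1)

rng = np.random.default_rng(5)
X = rng.normal(size=(30, 3))
K = wavelet_kernel(X, X)
```

The resulting Gram matrix is symmetric with unit diagonal and non-negative eigenvalues, so it can be plugged into any kernel machine, including twin SVMs.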

  17. QTL Mapping of Kernel Number-Related Traits and Validation of One Major QTL for Ear Length in Maize.

    Huo, Dongao; Ning, Qiang; Shen, Xiaomeng; Liu, Lei; Zhang, Zuxin

    2016-01-01

    The kernel number is a grain yield component and an important maize breeding goal. Ear length, kernel number per row and ear row number are highly correlated with the kernel number per ear, which eventually determines the ear weight and grain yield. In this study, two sets of F2:3 families developed from two bi-parental crosses sharing one inbred line were used to identify quantitative trait loci (QTL) for four kernel number-related traits: ear length, kernel number per row, ear row number and ear weight. A total of 39 QTLs for the four traits were identified in the two populations. The phenotypic variance explained by a single QTL ranged from 0.4% to 29.5%. Additionally, 14 overlapping QTLs formed 5 QTL clusters on chromosomes 1, 4, 5, 7, and 10. Intriguingly, six QTLs for ear length and kernel number per row overlapped in a region on chromosome 1. This region was designated qEL1.10 and was validated as being simultaneously responsible for ear length, kernel number per row and ear weight in a near isogenic line-derived population, suggesting that qEL1.10 was a pleiotropic QTL with large effects. Furthermore, the performance of hybrids generated by crossing 6 elite inbred lines with two near isogenic lines at qEL1.10 showed the breeding value of qEL1.10 for the improvement of the kernel number and grain yield of maize hybrids. This study provides a basis for further fine mapping, molecular marker-aided breeding and functional studies of kernel number-related traits in maize.

  18. Scuba: scalable kernel-based gene prioritization.

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to deal efficiently both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data are highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba .
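
As a toy illustration of kernel-based gene prioritization from multiple sources, the sketch below combines Gram matrices with fixed uniform weights and ranks candidates by similarity to seed genes. Scuba's actual margin-distribution multiple kernel learning is far more sophisticated; the data here are synthetic and all names are hypothetical.

```python
import numpy as np

def prioritize(kernels, seed_idx, weights=None):
    """Rank candidate genes by mean combined-kernel similarity to known
    disease (seed) genes. A drastically simplified stand-in for multiple
    kernel learning: sources are combined with fixed weights (uniform
    unless supplied) instead of learned ones."""
    if weights is None:
        weights = np.full(len(kernels), 1.0 / len(kernels))
    K = sum(w * Kk for w, Kk in zip(weights, kernels))
    scores = K[:, seed_idx].mean(axis=1)      # similarity to the seed set
    return np.argsort(-scores), scores

# synthetic toy data: source 1 links genes 0-2, source 2 links genes 3-5
n = 6
K1 = np.eye(n)
K1[:3, :3] = 0.8
np.fill_diagonal(K1, 1.0)
K2 = np.eye(n)
K2[3:, 3:] = 0.9
np.fill_diagonal(K2, 1.0)
order, scores = prioritize([K1, K2], seed_idx=[0, 1])
```

With seeds 0 and 1, gene 2 (linked to the seeds in source 1) outranks the genes that only cluster in the unrelated source, which is the basic behaviour any kernel combination scheme should reproduce.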

  19. Constitutive modeling of shock response of PTFE

    Brown, Eric N [Los Alamos National Laboratory; Resnyansky, Anatoly D [DSTO, AUSTRALIA; Bourne, Neil K [AWE, UK; Millett, Jeremy C F [AWE, UK

    2009-01-01

PTFE (polytetrafluoroethylene) is a complex material that attracts the attention of shock physics researchers because it has amorphous and crystalline components. In turn, the crystalline component has four known phases, with a high-pressure transition to phase III. At the same time, as has recently been studied using spectrometry, the crystalline region grows with load. Stress and velocity shock-wave profiles acquired recently with embedded gauges demonstrate features that may be related to impedance mismatches between the regions subjected to transitions resulting in density and modulus variations. We consider the above-mentioned amorphous-to-crystalline transition and the high-pressure phase II-to-III transition as possible candidates for the analysis. The present work utilizes a multi-phase rate-sensitive model to describe the shock response of the PTFE material. One-dimensional experimental shock wave profiles are compared with profiles calculated with kinetics describing the transitions. The objective of this study is to understand the role of the various transitions in the shock response of PTFE.

  20. Study on the scattering law and scattering kernel of hydrogen in zirconium hydride

    Jiang Xinbiao; Chen Wei; Chen Da; Yin Banghua; Xie Zhongsheng

    1999-01-01

The nuclear analytical model for calculating the scattering law and scattering kernel for the uranium zirconium hydride reactor is described. In light of the acoustic and optical model of zirconium hydride, its frequency distribution function f(ω) is given and the scattering law of hydrogen in zirconium hydride is obtained with GASKET. The scattering kernel σl(E0→E) of hydrogen bound in zirconium hydride is provided by the SMP code in the standard WIMS cross section library. Along with this library, WIMS is used to calculate the thermal neutron energy spectrum of the fuel cell. The results are satisfactory.

  1. The kernel G1(x,x') and the quantum equivalence principle

    Ceccatto, H.; Foussats, A.; Giacomini, H.; Zandron, O.

    1981-01-01

    In this paper, the formulation of the quantum equivalence principle (QEP) is re-examined, and its compatibility with the conditions that must be fulfilled by the kernel G_1(x,x') is discussed. The basis of solutions that gives the particle model in a curved space-time is also determined in terms of Cauchy data for such a kernel. Finally, the creation of particles in this model is analyzed by studying the time evolution of the creation and annihilation operators. This method is an alternative to one that uses Bogoliubov transformations as a mechanism of creation. (author)

  2. Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

    Liu, Bo

    2015-11-11

    We consider the Bayesian filtering problem for data assimilation following the kernel-based ensemble Gaussian-mixture filtering (EnGMF) approach introduced by Anderson and Anderson (1999). In this approach, the posterior distribution of the system state is propagated with the model using the ensemble Monte Carlo method, providing a forecast ensemble that is then used to construct a prior Gaussian-mixture (GM) based on the kernel density estimator. This results in two update steps: a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, we analyze the influence of the bandwidth parameter of the kernel function on the covariance of the posterior distribution. We then focus on two aspects: i) the efficient implementation of EnGMF with (relatively) small ensembles, where we propose a new deterministic resampling strategy preserving the first two moments of the posterior GM to limit the sampling error; and ii) the analysis of the effect of the bandwidth parameter on contributions of KF and PF updates and on the weights variance. Numerical results using the Lorenz-96 model are presented to assess the behavior of EnGMF with deterministic resampling, study its sensitivity to different parameters and settings, and evaluate its performance against ensemble KFs. The proposed EnGMF approach with deterministic resampling suggests improved estimates in all tested scenarios, and is shown to require less localization and to be less sensitive to the choice of filtering parameters.
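    The prior-construction step, an equal-weight Gaussian mixture centred on the forecast ensemble members with spread set by the kernel bandwidth, can be sketched in one dimension as follows. This is a minimal illustration, not the authors' multivariate implementation; Silverman's rule of thumb stands in for the bandwidth whose effect the paper analyzes:

```python
import math
import random

def gm_density(x, ensemble, h):
    """Equal-weight Gaussian-mixture (kernel) density built from ensemble members."""
    norm = h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - m) / h) ** 2) / norm for m in ensemble) / len(ensemble)

def silverman_bandwidth(ensemble):
    """Silverman's rule of thumb for a 1-D sample (one common bandwidth choice)."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    std = math.sqrt(sum((m - mean) ** 2 for m in ensemble) / (n - 1))
    return 1.06 * std * n ** (-0.2)

random.seed(0)
ens = [random.gauss(0.0, 1.0) for _ in range(200)]  # stand-in forecast ensemble
h = silverman_bandwidth(ens)

# The mixture is a proper density: it should integrate to ~1.
grid = [-5.0 + 0.01 * i for i in range(1001)]
area = sum(gm_density(x, ens, h) for x in grid) * 0.01
```

    In EnGMF this mixture plays the role of the prior, whose components then receive a Kalman-like mean update and a particle-like weight update.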

  3. A Mathematical Model of Cardiovascular Response to Dynamic Exercise

    Magosso, E

    2001-01-01

    A mathematical model of cardiovascular response to dynamic exercise is presented. The model includes the pulsating heart, the systemic and pulmonary circulation, a functional description of muscle...

  4. Landslide Susceptibility Mapping Based on Particle Swarm Optimization of Multiple Kernel Relevance Vector Machines: Case of a Low Hill Area in Sichuan Province, China

    Yongliang Lin

    2016-10-01

    Full Text Available In this paper, we propose a multiple kernel relevance vector machine (RVM) method based on the adaptive cloud particle swarm optimization (PSO) algorithm to map landslide susceptibility in the low hill area of Sichuan Province, China. In the multi-kernel structure, the kernel selection problem can be solved by adjusting the kernel weights, which determine each single kernel's contribution to the final kernel mapping. The weights and parameters of the multi-kernel function were optimized using the PSO algorithm. In addition, the convergence speed of the PSO algorithm was increased using cloud theory. To ensure the stability of the prediction model, the result of a five-fold cross-validation method was used as the fitness of the PSO algorithm. To verify the results, receiver operating characteristic (ROC) curves and landslide dot density (LDD) were used. The results show that the model that used a heterogeneous kernel (a combination of two different kernel functions) had a larger area under the ROC curve (0.7616) and a lower prediction error ratio (0.28%) than did the other types of kernel models employed in this study. In addition, both the sum of the two high-susceptibility-zone LDDs (6.71/100 km²) and the sum of the two low-susceptibility-zone LDDs (0.82/100 km²) demonstrated that the landslide susceptibility map based on the heterogeneous kernel model was closest to the historical landslide distribution. In conclusion, the results obtained in this study can provide very useful information for disaster prevention and land-use planning in the study area.
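    The weight-and-parameter search can be illustrated with a plain global-best PSO. The adaptive cloud variant and the five-fold cross-validation fitness are beyond this sketch, so a hypothetical one-dimensional objective stands in for the cross-validation error as a function of a single kernel weight:

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best PSO over a 1-D interval [lo, hi]."""
    rnd = random.Random(seed)
    xs = [rnd.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                       # personal best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rnd.random(), rnd.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # keep the weight in [lo, hi]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i], fx
    return gbest, gbest_f

# Hypothetical stand-in for the cross-validation error as a function of one
# kernel weight, with its minimum at 0.3.
best_w, best_f = pso_minimize(lambda v: (v - 0.3) ** 2, 0.0, 1.0)
```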

  5. Sorption Kinetics for the Removal of Cadmium and Zinc onto Palm Kernel Shell Based Activated Carbon

    Muhammad Muhammad

    2010-12-01

    Full Text Available The kinetics and mechanism of cadmium and zinc adsorption on palm kernel shell based activated carbon (PKSAC) have been studied. A series of batch laboratory studies were conducted in order to investigate the suitability of palm kernel shell based activated carbon (PKSAC) for the removal of cadmium and zinc ions from their aqueous solutions. All batch experiments were carried out at pH 7.0 and a constant temperature of 30±1°C using an incubator shaker operated at 150 rpm. The kinetic models investigated include the pseudo-first-order, the pseudo-second-order and the intraparticle diffusion models. The pseudo-second-order model correlates the experimental data excellently, suggesting that chemisorption could be the rate-limiting step. Keywords: adsorption, cadmium, kinetics, palm kernel shell, zinc
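    The pseudo-second-order model is commonly fitted in its linearized form, t/q_t = 1/(k2*qe^2) + t/qe, so plotting t/q_t against t gives qe from the slope and k2 from the intercept. A sketch on synthetic data (the qe and k2 values are hypothetical, not values from the study):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Integrated pseudo-second-order form: q_t = k2*qe^2*t / (1 + k2*qe*t),
# so t/q_t plotted against t is a line with slope 1/qe and intercept 1/(k2*qe^2).
qe_true, k2_true = 12.0, 0.05   # hypothetical: mg/g and g/(mg*min)
ts = [1.0, 2.0, 5.0, 10.0, 20.0, 40.0, 60.0]
qs = [k2_true * qe_true ** 2 * t / (1.0 + k2_true * qe_true * t) for t in ts]

slope, intercept = fit_line(ts, [t / q for t, q in zip(ts, qs)])
qe_fit = 1.0 / slope
k2_fit = 1.0 / (intercept * qe_fit ** 2)
```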

  6. Option Valuation with Volatility Components, Fat Tails, and Nonlinear Pricing Kernels

    Babaoglu, Kadir Gokhan; Christoffersen, Peter; Heston, Steven

    We nest multiple volatility components, fat tails and a U-shaped pricing kernel in a single option model and compare their contribution to describing returns and option data. All three features lead to statistically significant model improvements. A second volatility factor is economically most i...

  7. Kernel based orthogonalization for change detection in hyperspectral images

    Nielsen, Allan Aasbjerg

    function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via...... analysis all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and unlike kernel PCA, kernel MNF...

  8. A laser optical method for detecting corn kernel defects

    Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.

    1984-01-01

    An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy; while surface-split kernels were detected with about 80% accuracy. (author)

  9. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
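    A minimal sketch of the multiscale-kernel idea, with the paper's ranking loss replaced by plain kernel ridge regression for brevity: the kernel is a sum of Gaussian kernels at two bandwidths, and the coefficients solve (K + λI)α = y. All values below are illustrative assumptions, not from the paper:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def multiscale_kernel(x, y, scales=(0.5, 2.0)):
    """Sum of Gaussian kernels at two bandwidths (one simple multiscale choice)."""
    return sum(math.exp(-((x - y) ** 2) / (2.0 * s * s)) for s in scales)

# Kernel ridge fit on noiseless samples of sin(x): alpha = (K + lam*I)^(-1) y.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.sin(x) for x in xs]
lam = 1e-6
K = [[multiscale_kernel(xi, xj) + (lam if i == j else 0.0)
      for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
alpha = solve(K, ys)

def predict(x):
    return sum(a * multiscale_kernel(x, xi) for a, xi in zip(alpha, xs))
```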

  10. Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws

    Mohammed D. ABDULMALIK

    2008-06-01

    Full Text Available Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements in the 64-bit edition of Windows Vista. We also point out some weak areas (flaws) that can be attacked by malicious code, leading to compromise of the kernel.

  11. Difference between standard and quasi-conformal BFKL kernels

    Fadin, V.S.; Fiore, R.; Papa, A.

    2012-01-01

    As it was recently shown, the colour singlet BFKL kernel, taken in Möbius representation in the space of impact parameters, can be written in quasi-conformal shape, which is unbelievably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculated the difference between standard and quasi-conformal BFKL kernels in momentum space and discovered that it is rather simple. Therefore we come to the conclusion that the simplicity of the quasi-conformal kernel is caused mainly by using the impact parameter space.

  12. TIDALLY HEATED TERRESTRIAL EXOPLANETS: VISCOELASTIC RESPONSE MODELS

    Henning, Wade G.; O'Connell, Richard J.; Sasselov, Dimitar D.

    2009-01-01

    Tidal friction in exoplanet systems, driven by orbits that allow for durable nonzero eccentricities at short heliocentric periods, can generate internal heating far in excess of the conditions observed in our own solar system. Secular perturbations or a notional 2:1 resonance between a hot Earth and hot Jupiter can be used as a baseline to consider the thermal evolution of convecting bodies subject to strong viscoelastic tidal heating. We compare results first from simple models using a fixed Quality factor and Love number, and then for three different viscoelastic rheologies: the Maxwell body, the Standard Anelastic Solid (SAS), and the Burgers body. The SAS and Burgers models are shown to alter the potential for extreme tidal heating by introducing the possibility of new equilibria and multiple response peaks. We find that tidal heating tends to exceed radionuclide heating at periods below 10-30 days, and exceed insolation only below 1-2 days. Extreme cases produce enough tidal heat to initiate global-scale partial melting, and an analysis of tidal limiting mechanisms such as advective cooling for earthlike planets is discussed. To explore long-term behaviors, we map equilibria points between convective heat loss and tidal heat input as functions of eccentricity. For the periods and magnitudes discussed, we show that tidal heating, if significant, is generally detrimental to the width of habitable zones.

  13. Learning a peptide-protein binding affinity predictor with kernel ridge regression

    2013-01-01

    Background The cellular function of a vast majority of proteins is performed through physical interactions with other biomolecules, which, most of the time, are other proteins. Peptides represent templates of choice for mimicking a secondary structure in order to modulate protein-protein interactions. They are thus an interesting class of therapeutics since they also display strong activity, high selectivity, low toxicity and few drug-drug interactions. Furthermore, predicting peptides that would bind to specific MHC alleles would be of tremendous benefit to improve vaccine-based therapy and possibly generate antibodies with greater affinity. Modern computational methods have the potential to accelerate and lower the cost of drug and vaccine discovery by selecting potential compounds for testing in silico prior to biological validation. Results We propose a specialized string kernel for small bio-molecules, peptides and pseudo-sequences of binding interfaces. The kernel incorporates physico-chemical properties of amino acids and elegantly generalizes eight kernels, among them the Oligo, the Weighted Degree, the Blended Spectrum, and the Radial Basis Function kernels. We provide a low complexity dynamic programming algorithm for the exact computation of the kernel and a linear time algorithm for its approximation. Combined with kernel ridge regression and SupCK, a novel binding pocket kernel, the proposed kernel yields biologically relevant and good prediction accuracy on the PepX database. For the first time, a machine learning predictor is capable of predicting the binding affinity of any peptide to any protein with reasonable accuracy. The method was also applied to both single-target and pan-specific Major Histocompatibility Complex class II benchmark datasets and three Quantitative Structure Affinity Model benchmark datasets. Conclusion On all benchmarks, our method significantly (p-value ≤ 0.057) outperforms the current state-of-the-art methods at predicting
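    One member of the kernel family the paper generalizes, the plain spectrum kernel, is easy to state: the inner product of k-mer count vectors. A sketch on hypothetical peptide strings, without the physico-chemical weighting of the proposed kernel:

```python
from collections import Counter

def spectrum_kernel(s, t, k=3):
    """k-spectrum kernel: inner product of the k-mer count vectors of s and t."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(c * ct[m] for m, c in cs.items())

# Hypothetical peptides: p2 differs from p1 in one residue, p3 shares no k-mers.
p1, p2, p3 = "ACDEFGHIK", "ACDEFGHIL", "WWWWWWWWW"
sim_close = spectrum_kernel(p1, p2)   # shares 6 of p1's 7 3-mers
sim_far = spectrum_kernel(p1, p3)     # shares none
```

    Such a Gram matrix can be plugged directly into kernel ridge regression, as in the paper's pipeline.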

  14. Association mapping for kernel phytosterol content in almond

    Carolina eFont i Forcada

    2015-07-01

    Full Text Available Almond kernels are a rich source of phytosterols, which are important compounds for human nutrition. The genetic control of phytosterol content has not yet been documented in almond. Association mapping, also known as linkage disequilibrium mapping, was applied to an almond germplasm collection in order to provide new insight into the genetic control of total and individual sterol contents in kernels. Population structure analysis grouped the accessions into two principal groups, the Mediterranean and the non-Mediterranean. There was a strong subpopulation structure with linkage disequilibrium decaying with increasing genetic distance, resulting in lower levels of linkage disequilibrium between more distant markers. A significant impact of population structure on linkage disequilibrium in the almond cultivar groups was observed. The mean r² value for all intra-chromosomal loci pairs was 0.040, whereas the r² for the inter-chromosomal loci pairs was 0.036. For analysis of association between the markers and phenotypic traits, five models were tested. The mixed linear model (MLM) approach, using co-ancestry values from population structure and kinship estimates (K model) as covariates, identified a maximum of 13 significant associations. Most of the associations found appeared to map within the interval where many candidate genes involved in the sterol biosynthesis pathway are predicted in the peach genome. These findings provide a valuable foundation for quality gene identification and molecular marker assisted breeding in almond.

  15. Generalized multiple kernel learning with data-dependent priors.

    Mao, Qi; Tsang, Ivor W; Gao, Shenghua; Wang, Li

    2015-06-01

    Multiple kernel learning (MKL) and classifier ensemble are two mainstream methods for solving learning problems in which some sets of features/views are more informative than others, or the features/views within a given set are inconsistent. In this paper, we first present a novel probabilistic interpretation of MKL such that maximum entropy discrimination with a noninformative prior over multiple views is equivalent to the formulation of MKL. Instead of using the noninformative prior, we introduce a novel data-dependent prior based on an ensemble of kernel predictors, which enhances the prediction performance of MKL by leveraging the merits of the classifier ensemble. With the proposed probabilistic framework of MKL, we propose a hierarchical Bayesian model to learn the proposed data-dependent prior and classification model simultaneously. The resultant problem is convex and other information (e.g., instances with either missing views or missing labels) can be seamlessly incorporated into the data-dependent priors. Furthermore, a variety of existing MKL models can be recovered under the proposed MKL framework and can be readily extended to incorporate these priors. Extensive experiments demonstrate the benefits of our proposed framework in supervised and semisupervised settings, as well as in tasks with partial correspondence among multiple views.

  16. Phylodynamic Inference with Kernel ABC and Its Application to HIV Epidemiology.

    Poon, Art F Y

    2015-09-01

    The shapes of phylogenetic trees relating virus populations are determined by the adaptation of viruses within each host, and by the transmission of viruses among hosts. Phylodynamic inference attempts to reverse this flow of information, estimating parameters of these processes from the shape of a virus phylogeny reconstructed from a sample of genetic sequences from the epidemic. A key challenge to phylodynamic inference is quantifying the similarity between two trees in an efficient and comprehensive way. In this study, I demonstrate that a new distance measure, based on a subset tree kernel function from computational linguistics, confers a significant improvement over previous measures of tree shape for classifying trees generated under different epidemiological scenarios. Next, I incorporate this kernel-based distance measure into an approximate Bayesian computation (ABC) framework for phylodynamic inference. ABC bypasses the need for an analytical solution of model likelihood, as it only requires the ability to simulate data from the model. I validate this "kernel-ABC" method for phylodynamic inference by estimating parameters from data simulated under a simple epidemiological model. Results indicate that kernel-ABC attained greater accuracy for parameters associated with virus transmission than leading software on the same data sets. Finally, I apply the kernel-ABC framework to study a recent outbreak of a recombinant HIV subtype in China. Kernel-ABC provides a versatile framework for phylodynamic inference because it can fit a broader range of models than methods that rely on the computation of exact likelihoods. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
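    The ABC core is simple to sketch with plain rejection sampling. The toy model below, an exponential waiting-time mean standing in for the tree-shape summary and kernel distance, is hypothetical, not the phylodynamic simulator used in the paper:

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws=20000, seed=2):
    """Plain rejection ABC: keep prior draws whose simulated summary lands near the data."""
    rnd = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rnd)
        if distance(simulate(theta, rnd), observed) < eps:
            accepted.append(theta)
    return accepted

# Toy stand-in for the tree simulator: the summary statistic is the mean of 50
# exponential(theta) waiting times; the "observed" summary corresponds to theta = 2.
def simulate(theta, rnd):
    return sum(rnd.expovariate(theta) for _ in range(50)) / 50.0

post = abc_rejection(
    observed=0.5,
    simulate=simulate,
    prior_sample=lambda rnd: rnd.uniform(0.1, 10.0),
    distance=lambda a, b: abs(a - b),
    eps=0.05,
)
theta_hat = sum(post) / len(post)   # posterior mean estimate, near 2
```

    Kernel-ABC replaces the absolute-difference distance here with a tree-kernel distance between the simulated and observed phylogenies.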

  17. Kernel-based whole-genome prediction of complex traits: a review.

    Morota, Gota; Gianola, Daniel

    2014-01-01

    Prediction of genetic values has been a focus of applied quantitative genetics since the beginning of the 20th century, with renewed interest following the advent of the era of whole genome-enabled prediction. Opportunities offered by the emergence of high-dimensional genomic data fueled by post-Sanger sequencing technologies, especially molecular markers, have driven researchers to extend Ronald Fisher and Sewall Wright's models to confront new challenges. In particular, kernel methods are gaining consideration as a regression method of choice for genome-enabled prediction. Complex traits are presumably influenced by many genomic regions working in concert with others (clearly so when considering pathways), thus generating interactions. Motivated by this view, a growing number of statistical approaches based on kernels attempt to capture non-additive effects, either parametrically or non-parametrically. This review centers on whole-genome regression using kernel methods applied to a wide range of quantitative traits of agricultural importance in animals and plants. We discuss various kernel-based approaches tailored to capturing total genetic variation, with the aim of arriving at an enhanced predictive performance in the light of available genome annotation information. Connections between prediction machines born in animal breeding, statistics, and machine learning are revisited, and their empirical prediction performance is discussed. Overall, while some encouraging results have been obtained with non-parametric kernels, recovering non-additive genetic variation in a validation dataset remains a challenge in quantitative genetics.
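    One simple nonparametric kernel predictor of the kind surveyed, Nadaraya-Watson regression over marker genotypes, can be sketched as follows. The marker matrix and trait are hypothetical, and practical genomic prediction typically uses RKHS or GBLUP-style mixed models rather than this bare form:

```python
import math

def gaussian_kernel(u, v, h):
    """Gaussian kernel on genotype vectors (0/1/2 minor-allele counts)."""
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-d2 / h)

def kernel_predict(x_new, X, y, h):
    """Nadaraya-Watson kernel regression: a kernel-weighted average of phenotypes."""
    w = [gaussian_kernel(x_new, x, h) for x in X]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Hypothetical markers and a trait with an additive part plus an interaction,
# the kind of non-additive signal kernel methods aim to capture.
X = [[0, 1, 2, 0], [2, 1, 0, 1], [1, 1, 1, 1], [0, 0, 2, 2], [2, 2, 0, 0]]
y = [x[0] + x[1] + 0.5 * x[2] * x[3] for x in X]
pred = kernel_predict([0, 1, 2, 0], X, y, h=1.0)  # genotype seen in training
```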

  19. Moisture Sorption Isotherms and Properties of Sorbed Water of Neem ( Azadirichta indica A. Juss) Kernels

    Ngono Mbarga, M. C.; Bup Nde, D.; Mohagir, A.; Kapseu, C.; Elambo Nkeng, G.

    2017-01-01

    The neem tree, growing abundantly in India as well as in some regions of Asia and Africa, gives fruits whose kernels contain about 40-50% oil. This oil has high therapeutic and cosmetic value and has recently been projected to be an important raw material for the production of biodiesel. The seed is harvested at high moisture contents, which leads to high post-harvest losses. In the paper, the sorption isotherms are determined by the static gravimetric method at 40, 50, and 60°C to establish a database useful in defining drying and storage conditions of neem kernels. Five different equations are validated for modeling the sorption isotherms of neem kernels. The properties of sorbed water, such as the monolayer moisture content, surface area of the adsorbent, number of adsorbed monolayers, and the percentage of bound water, are also determined. The critical moisture content necessary for the safe storage of dried neem kernels is shown to range from 5 to 10% dry basis, which can be obtained at a relative humidity below 65%. The isosteric heats of sorption at 5% moisture content are 7.40 and 22.5 kJ/kg for the adsorption and desorption processes, respectively. This work is the first, to the best of our knowledge, to give the important parameters necessary for drying and storage of neem kernels, a potential raw material for the production of oil to be used in pharmaceutics, cosmetics, and biodiesel manufacturing.
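    Monolayer moisture content is often obtained from a linearized isotherm. The sketch below uses the BET form on synthetic data in its valid range (water activity below about 0.5); the BET equation is one common candidate, not necessarily among the five validated in the paper, and the parameter values are hypothetical:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# BET isotherm: M = M0*C*aw / ((1 - aw) * (1 + (C - 1)*aw)).
# Linearized: aw / ((1 - aw)*M) = 1/(M0*C) + ((C - 1)/(M0*C)) * aw.
M0_true, C_true = 6.0, 15.0   # hypothetical monolayer content (% dry basis) and energy constant
aws = [0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40]
Ms = [M0_true * C_true * a / ((1 - a) * (1 + (C_true - 1) * a)) for a in aws]

slope, intercept = fit_line(aws, [a / ((1 - a) * m) for a, m in zip(aws, Ms)])
C_fit = slope / intercept + 1.0       # from slope/intercept = C - 1
M0_fit = 1.0 / (intercept * C_fit)    # from intercept = 1/(M0*C)
```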

  20. Effect of Acrocomia aculeata Kernel Oil on Adiposity in Type 2 Diabetic Rats.

    Nunes, Ângela A; Buccini, Danieli F; Jaques, Jeandre A S; Portugal, Luciane C; Guimarães, Rita C A; Favaro, Simone P; Caldas, Ruy A; Carvalho, Cristiano M E

    2018-03-01

    The macauba palm (Acrocomia aculeata) is native to tropical America and is found mostly in the Cerrado and Pantanal biomes. The fruits provide an oily pulp, rich in long-chain fatty acids, and a kernel comprising more than 50% lipids rich in medium-chain fatty acids (MCFA). Based on biochemical and nutritional evidence, MCFA are readily catabolized and can reduce body fat accumulation. In this study, an animal model was employed to evaluate the effect of Acrocomia aculeata kernel oil (AKO) on the blood glucose level and the fatty acid deposits in epididymal adipose tissue. The A. aculeata kernel oil obtained by cold pressing presented suitable quality as an edible oil. Its fatty acid profile indicates a high concentration of MCFA, mainly lauric, capric and caprylic acids. Type 2 diabetic rats fed with the kernel oil showed a reduction of blood glucose level in comparison with the diabetic control group; Acrocomia aculeata kernel oil thus showed a hypoglycemic effect. A small fraction of the total dietary medium-chain fatty acids accumulated in the epididymal adipose tissue of rats fed with AKO at both low and high doses, and caprylic acid did not deposit at all.

  1. A class of kernel based real-time elastography algorithms.

    Kibria, Md Golam; Hasan, Md Kamrul

    2015-08-01

    In this paper, a novel real-time kernel-based and gradient-based Phase Root Seeking (PRS) algorithm for ultrasound elastography is proposed. The signal-to-noise ratio of the strain image resulting from this method is improved by minimizing the cross-correlation discrepancy between the pre- and post-compression radio frequency signals with an adaptive temporal stretching method and employing built-in smoothing through an exponentially weighted neighborhood kernel in the displacement calculation. Unlike conventional PRS algorithms, displacement due to tissue compression is estimated from the root of the weighted average of the zero-lag cross-correlation phases of the pair of corresponding analytic pre- and post-compression windows in the neighborhood kernel. In addition to the proposed one, the other time- and frequency-domain elastography algorithms (Ara et al., 2013; Hussain et al., 2012; Hasan et al., 2012) proposed by our group are also implemented in real time in Java, where the computations are executed serially or in parallel on multiple processors with efficient memory management. Simulation results using a finite element modeling simulation phantom show that the proposed method significantly improves the strain image quality in terms of elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe) and mean structural similarity (MSSIM) for strains as high as 4% as compared to other reported techniques in the literature. Strain images obtained for the experimental phantom as well as in vivo breast data of malignant or benign masses also show the efficacy of our proposed method over the other reported techniques in the literature. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Traveltime sensitivity kernels for wave equation tomography using the unwrapped phase

    Djebbi, Ramzi

    2014-02-18

    Wave equation tomography attempts to improve on traveltime tomography, by better adhering to the requirements of our finite-frequency data. Conventional wave equation tomography, based on the first-order Born approximation followed by cross-correlation traveltime lag measurement, or on the Rytov approximation for the phase, yields the popular hollow banana sensitivity kernel indicating that the measured traveltime at a point is insensitive to perturbations along the ray theoretical path at certain finite frequencies. Using the instantaneous traveltime, which is able to unwrap the phase of the signal, instead of the cross-correlation lag, we derive new finite-frequency traveltime sensitivity kernels. The kernel better reflects the model-data dependency we typically encounter in full waveform inversion. This result confirms that the hollow banana shape is borne of the cross-correlation lag measurement, which exposes the Born approximation's weakness in representing transmitted waves. The instantaneous traveltime can thus mitigate the additional component of nonlinearity introduced by the hollow banana sensitivity kernels in finite-frequency traveltime tomography. The instantaneous traveltime simply represents the unwrapped phase of the Rytov approximation, and thus is a good alternative to Born and Rytov to compute the misfit function for wave equation tomography. We show the limitations of the cross-correlation associated with the Born approximation for traveltime lag measurement when the source signatures of the measured and modelled data are different. The instantaneous traveltime is proven to be less sensitive to the distortions in the data signature. The unwrapped phase full banana shape of the sensitivity kernels shows a smoother update compared to the banana–doughnut kernels. The measurement of the traveltime delay caused by a small spherical anomaly, embedded into a 3-D homogeneous model, supports the full banana sensitivity assertion for the unwrapped phase.
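    The core measurement idea, that for a pure time shift the unwrapped cross-spectral phase divided by angular frequency is flat and equals the delay, can be checked in a 1-D toy. This illustrates only the instantaneous-traveltime principle, not the wave-equation sensitivity kernels themselves:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(N^2), fine for a toy)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def unwrap(phases):
    """Remove 2*pi jumps so the phase varies continuously with frequency."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        while d > math.pi:
            d -= 2.0 * math.pi
        while d < -math.pi:
            d += 2.0 * math.pi
        out.append(out[-1] + d)
    return out

# A Gaussian pulse and a copy delayed by `shift` samples (circular, so the DFT relation is exact).
N, shift = 64, 5
pulse = [math.exp(-0.5 * ((n - 20) / 3.0) ** 2) for n in range(N)]
delayed = [pulse[(n - shift) % N] for n in range(N)]

S, D = dft(pulse), dft(delayed)
# Cross-spectral phase: for a pure delay, phase(S_k / D_k) = 2*pi*k*shift/N.
dphi = unwrap([cmath.phase(S[k] / D[k]) for k in range(1, N // 4)])
# Instantaneous traveltime: unwrapped phase over angular frequency, constant across frequency.
taus = [dphi[i] / (2.0 * math.pi * (i + 1) / N) for i in range(len(dphi))]
```

    A wrapped phase would saturate at ±π and hide the delay at higher frequencies; unwrapping recovers the same delay at every frequency bin.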

  3. Traveltime sensitivity kernels for wave equation tomography using the unwrapped phase

    Djebbi, Ramzi; Alkhalifah, Tariq Ali

    2014-01-01

    Wave equation tomography attempts to improve on traveltime tomography, by better adhering to the requirements of our finite-frequency data. Conventional wave equation tomography, based on the first-order Born approximation followed by cross-correlation traveltime lag measurement, or on the Rytov approximation for the phase, yields the popular hollow banana sensitivity kernel indicating that the measured traveltime at a point is insensitive to perturbations along the ray theoretical path at certain finite frequencies. Using the instantaneous traveltime, which is able to unwrap the phase of the signal, instead of the cross-correlation lag, we derive new finite-frequency traveltime sensitivity kernels. The kernel better reflects the model-data dependency we typically encounter in full waveform inversion. This result confirms that the hollow banana shape is borne of the cross-correlation lag measurement, which exposes the Born approximation's weakness in representing transmitted waves. The instantaneous traveltime can thus mitigate the additional component of nonlinearity introduced by the hollow banana sensitivity kernels in finite-frequency traveltime tomography. The instantaneous traveltime simply represents the unwrapped phase of the Rytov approximation, and thus is a good alternative to Born and Rytov to compute the misfit function for wave equation tomography. We show the limitations of the cross-correlation associated with the Born approximation for traveltime lag measurement when the source signatures of the measured and modelled data are different. The instantaneous traveltime is proven to be less sensitive to the distortions in the data signature. The unwrapped phase full banana shape of the sensitivity kernels shows a smoother update compared to the banana–doughnut kernels. The measurement of the traveltime delay caused by a small spherical anomaly, embedded into a 3-D homogeneous model, supports the full banana sensitivity assertion for the unwrapped phase.

  4. Two-Phase Iteration for Value Function Approximation and Hyperparameter Optimization in Gaussian-Kernel-Based Adaptive Critic Design

    Chen, Xin; Xie, Penghuan; Xiong, Yonghua; He, Yong; Wu, Min

    2015-01-01

    Adaptive Dynamic Programming (ADP) with critic-actor architecture is an effective way to perform online learning control. To avoid the subjectivity in the design of a neural network that serves as a critic network, kernel-based adaptive critic design (ACD) was developed recently. There are two essential issues for a static kernel-based model: how to determine proper hyperparameters in advance and how to select the right samples to describe the value function. Both rely on the assessment of sa...

  5. TORCH Computational Reference Kernels - A Testbed for Computer Science Research

    Kaiser, Alex; Williams, Samuel Webb; Madduri, Kamesh; Ibrahim, Khaled; Bailey, David H.; Demmel, James W.; Strohmaier, Erich

    2010-12-02

    For decades, computer scientists have sought guidance on how to evolve architectures, languages, and programming models in order to improve application performance, efficiency, and productivity. Unfortunately, without overarching advice about future directions in these areas, individual guidance is inferred from the existing software/hardware ecosystem, and each discipline often conducts its research independently, assuming all other technologies remain fixed. In today's rapidly evolving world of on-chip parallelism, isolated and iterative improvements to performance may miss superior solutions in the same way gradient descent optimization techniques may get stuck in local minima. To combat this, we present TORCH: A Testbed for Optimization ResearCH. These computational reference kernels define the core problems of interest in scientific computing without mandating a specific language, algorithm, programming model, or implementation. To complement the kernel (problem) definitions, we provide a set of algorithmically-expressed verification tests that can be used to verify that a hardware/software co-designed solution produces an acceptable answer. Finally, to provide some illumination as to how researchers have implemented solutions to these problems in the past, we provide a set of reference implementations in C and MATLAB.

  6. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation

    Rui Sun

    2016-08-01

    Full Text Available Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, with a Gaussian weight function as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.

  7. Quantized kernel least mean square algorithm.

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, based on a simple online vector quantization method. An analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, as well as lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
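
    The QKLMS update described above can be sketched as follows (a minimal sketch with made-up step size, kernel width and quantization radius; the paper's parameter choices and convergence analysis are not reproduced):

```python
import numpy as np

# Sketch of the quantized kernel LMS idea: a new input either becomes a new
# centre, or, if it lies within a quantization radius of an existing centre,
# its LMS update merges into that centre's coefficient.
class QKLMS:
    def __init__(self, step=0.5, sigma=1.0, quant_size=0.3):
        self.step = step            # LMS step size (learning rate)
        self.sigma = sigma          # Gaussian kernel width
        self.eps = quant_size       # quantization radius
        self.centers, self.coeffs = [], []

    def _kernel(self, u, v):
        return np.exp(-np.sum((u - v) ** 2) / (2.0 * self.sigma ** 2))

    def predict(self, u):
        return sum(a * self._kernel(c, u) for c, a in zip(self.centers, self.coeffs))

    def update(self, u, d):
        e = d - self.predict(u)     # a priori error
        if self.centers:
            dists = [np.linalg.norm(u - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] <= self.eps:          # quantize: reuse closest centre
                self.coeffs[j] += self.step * e
                return e
        self.centers.append(u)                # otherwise grow the network
        self.coeffs.append(self.step * e)
        return e

# Toy usage: learn f(u) = sin(2u) online from noisy samples.
rng = np.random.default_rng(0)
f = QKLMS(step=0.5, sigma=0.5, quant_size=0.2)
for _ in range(2000):
    u = rng.uniform(-3, 3, size=1)
    f.update(u, np.sin(2 * u[0]) + 0.05 * rng.standard_normal())
print("network size:", len(f.centers))
print("f(1.0) ≈", f.predict(np.array([1.0])), " target:", np.sin(2.0))
```

    The quantization radius caps the network size: centres stay roughly `quant_size` apart, so growth stalls once the input region is covered, unlike plain KLMS whose centre count grows with every sample.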

  8. Kernel-based tests for joint independence

    Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard

    2018-01-01

    We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two-variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test...
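
    For intuition, the two-variable HSIC on which dHSIC builds can be estimated and tested with permutations as below (an illustrative biased V-statistic estimator with a Gaussian kernel; the authors' dHSIC generalizes this to $d$ variables and also offers bootstrap and Gamma-approximation tests):

```python
import numpy as np

# Sketch of a two-variable HSIC permutation test. The estimator and the
# synthetic data are ours; this is not the authors' exact implementation.
def gaussian_gram(x, sigma=1.0):
    sq = np.sum(x * x, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    n = len(x)
    K, L = gaussian_gram(x, sigma), gaussian_gram(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return np.trace(K @ H @ L @ H) / n ** 2  # biased V-statistic estimate

def permutation_test(x, y, n_perm=200, seed=0):
    rng = np.random.default_rng(seed)
    stat = hsic(x, y)
    null = [hsic(x, y[rng.permutation(len(y))]) for _ in range(n_perm)]
    return stat, float(np.mean([s >= stat for s in null]))  # p-value

rng = np.random.default_rng(1)
x = rng.standard_normal((100, 1))
y_dep = x + 0.3 * rng.standard_normal((100, 1))    # dependent on x
y_ind = rng.standard_normal((100, 1))              # independent of x

_, p_dep = permutation_test(x, y_dep)
_, p_ind = permutation_test(x, y_ind)
print(f"p (dependent pair)   = {p_dep:.3f}")   # small: reject independence
print(f"p (independent pair) = {p_ind:.3f}")   # large: fail to reject
```

    Permuting one sample breaks any dependence while preserving both marginals, which is exactly why the permutation test attains the correct level under joint independence.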

  9. Wilson Dslash Kernel From Lattice QCD Optimization

    Joo, Balint [Jefferson Lab, Newport News, VA; Smelyanskiy, Mikhail [Parallel Computing Lab, Intel Corporation, California, USA; Kalamkar, Dhiraj D. [Parallel Computing Lab, Intel Corporation, India; Vaidyanathan, Karthikeyan [Parallel Computing Lab, Intel Corporation, India

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in theoretical nuclear and high energy physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal for illustrating several optimization techniques. In this chapter we detail our work on optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the technique gives excellent performance on the regular Xeon architecture as well.

  10. Development of nondestructive screening methods for single kernel characterization of wheat

    Nielsen, J.P.; Pedersen, D.K.; Munck, L.

    2003-01-01

    The development of nondestructive screening methods for single seed protein, vitreousness, density, and hardness index has been studied for single kernels of European wheat. A single kernel procedure was applied involving image analysis, near-infrared transmittance (NIT) spectroscopy, laboratory... predictability. However, by applying an averaging approach, in which single seed replicate measurements are mathematically simulated, a very good NIT prediction model was achieved. This suggests that the single seed NIT spectra contain hardness information, but that a single seed hardness method with higher...

  11. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    Zhu, Xiaofeng

    2015-05-28

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) represented by multiple visual features, the MMKR first maps them into a high-dimensional space, e.g., a reproducing kernel Hilbert space (RKHS), where test images are then linearly reconstructed by some representative training images, rather than all of them. Furthermore, a classification rule is proposed to classify test images. Experimental results on real datasets show the effectiveness of the proposed MMKR when compared to state-of-the-art algorithms.

  12. A Kernel for Protein Secondary Structure Prediction

    Guermeur , Yann; Lifchitz , Alain; Vert , Régis

    2004-01-01

    http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=10338&mode=toc; International audience; Multi-class support vector machines have already proved efficient in protein secondary structure prediction as ensemble methods, to combine the outputs of sets of classifiers based on different principles. In this chapter, their implementation as basic prediction methods, processing the primary structure or the profile of multiple alignments, is investigated. A kernel devoted to the task is in...

  13. Scalar contribution to the BFKL kernel

    Gerasimov, R. E.; Fadin, V. S.

    2010-01-01

    The contribution of scalar particles to the kernel of the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation is calculated. A large cancellation between the virtual and real parts of this contribution, analogous to the cancellation in the quark contribution in QCD, is observed. The reason for this cancellation is identified; it has a common nature for particles of any spin. Understanding this reason makes it possible to obtain the total contribution without the complicated calculations that are otherwise necessary for finding the separate pieces.

  14. Weighted Bergman Kernels for Logarithmic Weights

    Engliš, Miroslav

    2010-01-01

    Roč. 6, č. 3 (2010), s. 781-813 ISSN 1558-8599 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * Toeplitz operator * logarithmic weight * pseudodifferential operator Subject RIV: BA - General Mathematics Impact factor: 0.462, year: 2010 http://www.intlpress.com/site/pub/pages/journals/items/pamq/content/vols/0006/0003/a008/

  15. Heat kernels and zeta functions on fractals

    Dunne, Gerald V

    2012-01-01

    On fractals, spectral functions such as heat kernels and zeta functions exhibit novel features, very different from their behaviour on regular smooth manifolds, and these can have important physical consequences for both classical and quantum physics in systems having fractal properties. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’. (paper)

  16. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

  17. Sensitivity kernels for viscoelastic loading based on adjoint methods

    Al-Attar, David; Tromp, Jeroen

    2014-01-01

    Observations of glacial isostatic adjustment (GIA) allow for inferences to be made about mantle viscosity, ice sheet history and other related parameters. Typically, this inverse problem can be formulated as minimizing the misfit between the given observations and a corresponding set of synthetic data. When the number of parameters is large, solution of such optimization problems can be computationally challenging. A practical, albeit non-ideal, solution is to use gradient-based optimization. Although the gradient of the misfit required in such methods could be calculated approximately using finite differences, the necessary computation time grows linearly with the number of model parameters, and so this is often infeasible. A far better approach is to apply the `adjoint method', which allows the exact gradient to be calculated from a single solution of the forward problem, along with one solution of the associated adjoint problem. As a first step towards applying the adjoint method to the GIA inverse problem, we consider its application to a simpler viscoelastic loading problem in which gravitationally self-consistent ocean loading is neglected. The earth model considered is non-rotating, self-gravitating, compressible, hydrostatically pre-stressed, laterally heterogeneous and possesses a Maxwell solid rheology. We determine adjoint equations and Fréchet kernels for this problem based on a Lagrange multiplier method. Given an objective functional $J$ defined in terms of the surface deformation fields, we show that its first-order perturbation can be written $\delta J = \int_{M_S} K_{\eta}\,\delta\ln\eta\,\mathrm{d}V + \int_{t_0}^{t_1}\int_{\partial M} K_{\dot{\sigma}}\,\delta\dot{\sigma}\,\mathrm{d}S\,\mathrm{d}t$, where $\delta\ln\eta = \delta\eta/\eta$ denotes relative viscosity variations in solid regions $M_S$, $\mathrm{d}V$ is the volume element, $\delta\dot{\sigma}$ is the perturbation to the time derivative of the surface load, which is defined on the earth model's surface $\partial M$ and for times $[t_0, t_1]$, and $\mathrm{d}S$ is the surface element on $\partial M$. The `viscosity

  18. Exploiting graph kernels for high performance biomedical relation extraction.

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph (APG) kernel and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM

  19. Vis-NIR spectrometric determination of Brix and sucrose in sugar production samples using kernel partial least squares with interval selection based on the successive projections algorithm.

    de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino

    2018-05-01

    This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.

  20. Modeling Rabbit Responses to Single and Multiple Aerosol ...

    Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple dose dataset to predict the probability of death through specifying dose-response functions and the time between exposure and the time-to-death (TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
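
    The baseline model family (an exponential dose-response combined with a Weibull time-to-death distribution) can be sketched as follows; all parameter values here are invented for illustration and are not the paper's fitted estimates:

```python
import math

# Illustrative sketch: exponential dose-response for the probability of death,
# combined with a Weibull time-to-death (TTD) distribution among responders.
# The rate constant k and Weibull shape/scale below are made-up values.
def p_death(dose, k=1e-6):
    """Exponential dose-response: probability of eventual death for a dose."""
    return 1.0 - math.exp(-k * dose)

def ttd_cdf(t_days, shape=1.8, scale=4.0):
    """Weibull CDF for time-to-death among animals that respond."""
    return 1.0 - math.exp(-((t_days / scale) ** shape))

def p_death_by(t_days, dose, k=1e-6, shape=1.8, scale=4.0):
    """Probability of death by time t: response probability times TTD CDF."""
    return p_death(dose, k) * ttd_cdf(t_days, shape, scale)

dose = 2e6  # inhaled spores (hypothetical)
for t in (1, 3, 7, 14):
    print(f"P(death by day {t:2d}) = {p_death_by(t, dose):.3f}")
```

    Factoring the model this way separates *whether* an animal responds (dose-response) from *when* it dies given that it responds (TTD), which is the structure the hazard-function models above exploit.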

  1. Should I stay or should I go? A habitat-dependent dispersal kernel improves prediction of movement.

    Fabrice Vinatier

    Full Text Available The analysis of animal movement within different landscapes may increase our understanding of how landscape features affect the perceptual range of animals. Perceptual range is linked to the movement probability of an animal via a dispersal kernel, the latter being generally considered spatially invariant although it could be spatially affected. We hypothesize that spatial plasticity of an animal's dispersal kernel could greatly modify its distribution in time and space. After radio tracking the movements of walking insects (Cosmopolites sordidus) in banana plantations, we considered the movements of individuals as states of a Markov chain whose transition probabilities depended on the habitat characteristics of current and target locations. Combining a likelihood procedure and pattern-oriented modelling, we tested the hypothesis that the dispersal kernel depended on habitat features. Our results were consistent with the concept that animal dispersal kernels depend on habitat features. Recognizing the plasticity of animal movement probabilities will provide insight into landscape-level ecological processes.

  2. Should I stay or should I go? A habitat-dependent dispersal kernel improves prediction of movement.

    Vinatier, Fabrice; Lescourret, Françoise; Duyck, Pierre-François; Martin, Olivier; Senoussi, Rachid; Tixier, Philippe

    2011-01-01

    The analysis of animal movement within different landscapes may increase our understanding of how landscape features affect the perceptual range of animals. Perceptual range is linked to the movement probability of an animal via a dispersal kernel, the latter being generally considered spatially invariant although it could be spatially affected. We hypothesize that spatial plasticity of an animal's dispersal kernel could greatly modify its distribution in time and space. After radio tracking the movements of walking insects (Cosmopolites sordidus) in banana plantations, we considered the movements of individuals as states of a Markov chain whose transition probabilities depended on the habitat characteristics of current and target locations. Combining a likelihood procedure and pattern-oriented modelling, we tested the hypothesis that the dispersal kernel depended on habitat features. Our results were consistent with the concept that animal dispersal kernels depend on habitat features. Recognizing the plasticity of animal movement probabilities will provide insight into landscape-level ecological processes.
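
    The habitat-dependent transition idea can be caricatured in a few lines (a deliberately simplified 1-D sketch; the grid, habitat labels and attractiveness weights are invented, whereas the actual study fitted transition probabilities to radio-tracking data):

```python
import numpy as np

# Toy sketch: movement as a Markov chain whose transition probabilities
# depend on the habitat of the target cell. Habitats and weights are made up.
rng = np.random.default_rng(42)

habitat = np.array([0, 0, 1, 1, 0, 1, 0, 0])   # 1-D strip of cells: 1 = banana plant
attract = {0: 1.0, 1: 4.0}                      # target-habitat attractiveness weights

def step(pos):
    """Move to an adjacent cell (or stay) with habitat-weighted probabilities."""
    candidates = [c for c in (pos - 1, pos, pos + 1) if 0 <= c < len(habitat)]
    w = np.array([attract[habitat[c]] for c in candidates], dtype=float)
    return rng.choice(candidates, p=w / w.sum())

# Simulate many walkers and record where they end up.
occupancy = np.zeros(len(habitat))
for _ in range(500):
    pos = 0
    for _ in range(50):
        pos = step(pos)
    occupancy[pos] += 1

frac_in_banana = occupancy[habitat == 1].sum() / occupancy.sum()
print(f"fraction ending in banana cells: {frac_in_banana:.2f}")
```

    Even though banana cells make up only 3 of the 8 cells, the habitat-weighted kernel concentrates the long-run distribution in them, which is the kind of spatial redistribution the abstract argues a spatially invariant kernel would miss.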

  3. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-increasing kernel matrix must be treated as additional training samples are introduced. In this paper, an incremental version of NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement a kernel version of that method. The effectiveness of INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
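
    The batch form of the nonlinear projection trick can be sketched as a factorization of the Gram matrix (our illustration of the underlying idea; the paper's contribution is the incremental update, which is not reproduced here):

```python
import numpy as np

# Minimal sketch of the (batch) nonlinear projection trick: factor the kernel
# Gram matrix K = Y @ Y.T so each row of Y gives explicit coordinates of a
# sample in the kernel-induced space; any linear algorithm can then run on Y.
def gaussian_gram(X, sigma=1.0):
    sq = np.sum(X * X, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def npt_coordinates(K, tol=1e-10):
    """Return Y with Y @ Y.T equal to K (up to rank truncation)."""
    vals, vecs = np.linalg.eigh(K)          # K is symmetric PSD
    keep = vals > tol
    return vecs[:, keep] * np.sqrt(vals[keep])

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 2))
K = gaussian_gram(X)
Y = npt_coordinates(K)

# Inner products of the explicit coordinates reproduce the kernel values.
print("max |Y Y^T - K| =", np.abs(Y @ Y.T - K).max())
```

    Once samples have explicit coordinates `Y`, ordinary (non-kernel) PCA, SVD or discriminant analysis applied to `Y` yields the corresponding kernel method, which is the point of the trick.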

  4. Abrasive wear behaviour of Al-Cu-Mg/palm kernel shell ash particulate composite

    Gambo Anthony VICTOR

    2017-12-01

    Full Text Available This paper presents a systematic approach to develop a wear model of Al-Cu-Mg/palm kernel shell ash particulate composites (PKSAp) produced by the double stir-casting method. A four-factor, five-level, central composite, rotatable design matrix was used to optimize the number of experiments. The factors considered were sliding velocity, sliding distance, normal load and mass fraction of PKSA reinforcement in the matrix. Response surface methodology (RSM) was employed to develop the mathematical model. The developed regression model was validated with the statistical software MINITAB and statistical tools such as analysis of variance (ANOVA). It was found that the developed regression model could be effectively used to predict the wear rate at a 95% confidence level. The regression model indicated that the wear rate of the cast Al-Cu-Mg/PKSAp composite decreased with an increase in the mass fraction of PKSA and increased with an increase in the sliding velocity, sliding distance and normal load acting on the composite specimen.

  5. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyperspectral camera using broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods, including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability of the data. Therefore we propose to use kernel versions of these methods; the kernel maximum autocorrelation factor transform outperforms the linear methods as well as kernel principal components in producing interesting projections of the data.

  6. Robust stabilization of nonlinear systems via stable kernel representations with L2-gain bounded uncertainty

    van der Schaft, Arjan

    1995-01-01

    The approach to robust stabilization of linear systems using normalized left coprime factorizations with H∞ bounded uncertainty is generalized to nonlinear systems. A nonlinear perturbation model is derived, based on the concept of a stable kernel representation of nonlinear systems. The robust

  7. A multi-resolution approach to heat kernels on discrete surfaces

    Vaxman, Amir; Ben-Chen, Mirela; Gotsman, Craig

    2010-01-01

    process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel

  8. Automation of peanut drying with a sensor network including an in-shell kernel moisture sensor

    Peanut drying is an essential task in the processing and handling of peanuts. Peanuts leave the fields with kernel moisture contents > 20% wet basis and need to be dried to < 10.5% w.b. for grading and storage purposes. Current peanut drying processes utilize decision support software based on model...

  9. Alumina Concentration Detection Based on the Kernel Extreme Learning Machine.

    Zhang, Sen; Zhang, Tao; Yin, Yixin; Xiao, Wendong

    2017-09-01

    The concentration of alumina in the electrolyte is of great significance during the production of aluminum. An improper alumina concentration may lead to unbalanced material distribution and low production efficiency, and can affect the stability of the aluminum reduction cell and the current efficiency. Existing methods cannot meet the needs of online measurement because industrial aluminum electrolysis is characterized by high temperature, strong magnetic fields, coupled parameters, and high nonlinearity. Currently, there are no sensors or equipment that can detect the alumina concentration online. Most companies obtain the alumina concentration from electrolyte samples analyzed with an X-ray fluorescence spectrometer. To solve this problem, the paper proposes a soft-sensing model based on a kernel extreme learning machine algorithm, which incorporates a kernel function into the extreme learning machine. K-fold cross validation is used to estimate the generalization error. The proposed soft-sensing algorithm can detect the alumina concentration from electrical signals such as the voltages and currents of the anode rods. The predicted results show that the proposed approach gives more accurate estimations of alumina concentration with faster learning speed compared with other methods such as the basic ELM, BP, and SVM.
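
    A generic kernel extreme learning machine regressor of the kind described can be sketched as below (the closed-form ridge solution with a Gaussian kernel; the class name, parameters and toy data are our assumptions, and the actual soft sensor would be trained on anode-rod voltage and current signals):

```python
import numpy as np

# Sketch of a kernel extreme learning machine (KELM) regressor: output weights
# come in closed form from the Gram matrix with a ridge term 1/C.
class KELM:
    def __init__(self, C=100.0, sigma=1.0):
        self.C, self.sigma = C, sigma

    def _gram(self, A, B):
        d2 = (np.sum(A * A, 1)[:, None] + np.sum(B * B, 1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, y):
        self.X = X
        K = self._gram(X, X)
        # beta = (K + I/C)^{-1} y  -- closed-form regularized solution
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, y)
        return self

    def predict(self, Xnew):
        return self._gram(Xnew, self.X) @ self.beta

# Toy usage: recover a smooth 1-D function from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

model = KELM(C=1000.0, sigma=0.8).fit(X, y)
Xtest = np.linspace(-2.5, 2.5, 50)[:, None]
err = np.max(np.abs(model.predict(Xtest) - np.sin(Xtest[:, 0])))
print(f"max abs error on test grid: {err:.3f}")
```

    Because training reduces to a single linear solve, KELM is fast to retrain, which matches the "faster learning speed" claim; `C` and `sigma` would be chosen by the K-fold cross validation mentioned in the abstract.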

  10. Kinetics of palm kernel oil and ethanol transesterification

    Ahiekpor, Julius C. [Centre for Energy, Environment and Sustainable Development (CEESD), P.O. Box FN 793, Kumasi (Ghana); Kuwornoo, David K. [Faculty of Chemical and Materials Engineering, Kwame Nkrumah University of Science and Technology (KNUST), Private Mail Bag, Kumasi (Ghana)

    2010-07-01

    Biodiesel, an alternative diesel fuel made from renewable sources such as vegetable oils and animal fats, has been identified by the government to play a key role in the socio-economic development of Ghana. The utilization of biodiesel is expected to be about 10% of the country's total liquid fuel mix by the year 2020. Despite this great potential and the numerous sources from which biodiesel could be developed in Ghana, there are no available data on the kinetics and mechanisms of transesterification of local vegetable oils. The need for local production of biodiesel necessitates that the mechanism and kinetics of the process be well understood, since the properties of the biodiesel depend on the type of oil used in the transesterification process. The objective of this work is to evaluate the appropriate kinetic mechanism and to determine the reaction rate constants for palm kernel oil transesterification with ethanol when KOH is used as a catalyst. In the present work, 16 biodiesel samples were prepared at specified times based on reported optimal conditions and the samples analysed by gas chromatography. The experimental mass fractions were calibrated and fitted to mathematical models of the different mechanisms proposed in previous works. The rate data fitted well to second-order kinetics without a shunt mechanism. It was also observed that, although the transesterification of crude palm kernel oil is a reversible reaction, the reaction rate constants indicated that the forward reactions were the most prominent.

  11. Image re-sampling detection through a novel interpolation kernel.

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in re-size and rotation transformations is an essential building block in a typical digital image alteration. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behaviour of the most frequent interpolation kernels used in digital image re-sampling applications. Secondly, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process involved includes the minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. Obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rates may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common daily use, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds shown by Moody's new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results of the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it fits the curve of recovery rates of loans and bonds. Thus, using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.

  14. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
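
The recursive core of such identification schemes can be illustrated in much-reduced form: an exponentially weighted recursive least squares update tracking a single drifting AR coefficient. This is only a scalar sketch of the forgetting-factor mechanism; the paper's kernelized TARMA model and sliding-window machinery are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Time-varying AR(1) system: y[t] = a(t) * y[t-1] + e[t], with a(t) drifting
# slowly -- a toy stand-in for a time-varying structure.
T = 2000
a_true = 0.5 + 0.3 * np.sin(2 * np.pi * np.arange(T) / T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = a_true[t] * y[t - 1] + 0.1 * rng.standard_normal()

# Exponentially weighted recursive least squares: the forgetting factor
# lam < 1 discounts old data so the estimate can track parameter drift.
lam = 0.99
theta, P = 0.0, 1000.0          # coefficient estimate and (scalar) covariance
est = np.zeros(T)
for t in range(1, T):
    phi = y[t - 1]                        # regressor
    k = P * phi / (lam + phi * P * phi)   # gain
    theta += k * (y[t] - phi * theta)     # prediction-error correction
    P = (P - k * phi * P) / lam           # covariance update with forgetting
    est[t] = theta

err = np.abs(est[500:] - a_true[500:]).mean()  # tracking error after burn-in
```

The cost of each update is constant, which is the property the sliding-window variant in the paper preserves for the much richer kernelized model.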

  16. Kernel density estimation-based real-time prediction for respiratory motion

    Ruan, Dan

    2010-01-01

    Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability distribution (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the

  17. Response Modelling of Bitumen, Bituminous Mastic and Mortar

    Woldekidan, M.F.

    2011-01-01

    This research focuses on testing and modelling the viscoelastic response of bituminous binders. The main goal is to find an appropriate response model for bituminous binders. The desired model should allow implementation in numerical environments such as ABAQUS. On the basis of such numerical

  18. Bayes factor covariance testing in item response models

    Fox, J.P.; Mulder, J.; Sinharay, Sandip

    2017-01-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning
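
The compound symmetry structure the abstract refers to, a common variance on the diagonal and a single shared covariance off the diagonal, can be written down directly (a generic sketch, not the paper's code):

```python
import numpy as np

def compound_symmetry(k, var, rho):
    """k x k compound-symmetry covariance: a common variance `var` on the
    diagonal and a single shared covariance var * rho off the diagonal."""
    return var * ((1.0 - rho) * np.eye(k) + rho * np.ones((k, k)))

# Example: four items, unit variance, equicorrelation 0.3.
S = compound_symmetry(4, 1.0, 0.3)
```

The matrix is positive definite whenever -1/(k-1) < rho < 1, so hypotheses about the latent structure reduce to hypotheses about the single parameter rho.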

  20. A Box-Cox normal model for response times.

    Klein Entink, R H; van der Linden, W J; Fox, J-P

    2009-11-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset from the Medical College Admission Test in which the lognormal model violated the normality assumption, the possibilities of the broader class of Box-Cox transformations for response time modelling are investigated. After an introduction and an outline of a broader framework for analysing responses and response times simultaneously, the performance of a Box-Cox normal model for describing response times is investigated using simulation studies and a real data example. A transformation-invariant implementation of the deviance information criterion (DIC) is developed that allows for comparing model fit between models with different transformation parameters. As the model offers an enhanced description of the shape of the response time distributions, its application in an educational measurement context is discussed at length.
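
The Box-Cox family and the profile log-likelihood used to pick its transformation parameter can be sketched as follows (a generic illustration on synthetic right-skewed data, not the MCAT dataset; lam = 0 recovers the lognormal model as a special case):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic right-skewed "response times" in seconds -- a gamma sample stands
# in for real test-taker data, which is not reproduced here.
rt = rng.gamma(shape=4.0, scale=8.0, size=2000)

def boxcox(y, lam):
    """Box-Cox transform; the lam -> 0 limit is the log-transform."""
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def profile_loglik(y, lam):
    """Profile log-likelihood of a normal model for the transformed data
    (up to an additive constant)."""
    z = boxcox(y, lam)
    return -0.5 * len(y) * np.log(z.var()) + (lam - 1.0) * np.log(y).sum()

# Grid search for the transformation parameter.
grid = np.linspace(-1.0, 1.0, 201)
lam_hat = grid[np.argmax([profile_loglik(rt, l) for l in grid])]

def skew(z):
    """Sample skewness: near 0 when the transform has normalized the data."""
    z = (z - z.mean()) / z.std()
    return (z**3).mean()
```

For this sample the fitted lam lands strictly between 0 (log) and 1 (no transform), and the transformed data are markedly less skewed than the raw times, which is the kind of shape improvement the abstract reports.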