WorldWideScience

Sample records for heat kernel estimates

  1. A novel cortical thickness estimation method based on volumetric Laplace-Beltrami operator and heat kernel.

    Science.gov (United States)

    Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J; Wang, Yalin

    2015-05-01

    Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, driven by the graph spectrum and heat kernel theory, to capture the gray matter geometry from in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects their inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator, and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on heat kernel diffusion. We can thereby calculate the cortical thickness between points on the pial and white matter surfaces. The new method relies on the intrinsic geometric structure of the brain, and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer's disease (AD) and mild cognitive impairment (MCI) on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm can detect statistically significant differences among AD, MCI, and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structure with clearly defined inner and outer surfaces. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Contingent kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Scott Fortmann-Roe

    Kernel density estimation is a widely used method for estimating a distribution based on a sample of points drawn from that distribution. In practice, some form of error generally contaminates the sample of observed points. Such error can be the result of imprecise measurements or observation bias. Often this error is negligible and may be disregarded in analysis. In cases where the error is non-negligible, estimation methods should be adjusted to reduce the resulting bias. Several modifications of kernel density estimation have been developed to address specific forms of errors. One form of error that has not yet been addressed is the case where observations are nominally placed at the centers of areas from which the points are assumed to have been drawn, and these areas are of varying sizes. In this scenario, bias arises because the size of the error can vary among points: some subset of points may be known to have smaller error than another subset, or the form of the error may change among points. This paper proposes a "contingent kernel density estimation" technique to address this form of error. The new technique adjusts the standard kernel on a point-by-point basis in an adaptive response to the changing structure and magnitude of the error. In this paper, equations for the contingent kernel technique are derived, the technique is validated using numerical simulations, and an example using the geographic locations of social networking users is worked through to demonstrate the utility of the method.
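
    As context for the point-by-point kernel adjustment described above, here is a minimal sketch (not the authors' implementation) of a one-dimensional Gaussian KDE whose bandwidth varies per observation; the per-point bandwidths stand in for whatever error magnitude is known about each point, and all names are illustrative.

        import numpy as np

        def variable_bandwidth_kde(x_grid, samples, bandwidths):
            # Gaussian KDE in which each sample i carries its own bandwidth
            # h_i (e.g. proportional to the size of the area from which the
            # observation is assumed to have been drawn).
            samples = np.asarray(samples, dtype=float)
            bandwidths = np.asarray(bandwidths, dtype=float)
            z = (x_grid[:, None] - samples[None, :]) / bandwidths[None, :]
            kernels = np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * bandwidths[None, :])
            return kernels.mean(axis=1)          # average of per-point kernels

        # Hypothetical data: points at area centers; larger areas, larger error.
        rng = np.random.default_rng(0)
        pts = rng.normal(size=200)
        h = 0.2 + 0.3 * rng.random(200)          # illustrative per-point error scales
        grid = np.linspace(-4, 4, 201)
        density = variable_bandwidth_kde(grid, pts, h)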

  3. Adaptive warped kernel estimators

    OpenAIRE

    Chagny, Gaëlle

    2014-01-01

    In this work, we develop a method of adaptive nonparametric estimation based on "warped" kernels. The aim is to estimate a real-valued function $s$ from a sample of random couples $(X,Y)$. We deal with transformed data $(\Phi(X),Y)$, with $\Phi$ a one-to-one function, to build a collection of kernel estimators. The data-driven bandwidth selection is done with a method inspired by Goldenshluger and Lepski (2011). The method makes it possible to handle various problems such as additive and multiplicative...

  4. Multidimensional kernel estimation

    CERN Document Server

    Milosevic, Vukasin

    2015-01-01

    Kernel estimation is one of the non-parametric methods used for estimating a probability density function. Its first ROOT implementation, as part of the RooFit package, has one major issue: its evaluation time is extremely slow, making it almost unusable. The goal of this project was to create a new class (TKNDTree) that follows the original idea of kernel estimation, greatly improves the evaluation time (using the TKTree class for storing the data and creating different user-controlled modes of evaluation) and adds an interpolation option, for the 2D case, with the help of the new Delaunay2D class.
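
    For readers outside ROOT, the general technique (a multidimensional kernel density estimate whose cost is dominated by evaluation) can be sketched with SciPy; this illustrates the method itself, not the TKNDTree class:

        import numpy as np
        from scipy.stats import gaussian_kde

        # 2-D sample from a mixture of two Gaussians.
        rng = np.random.default_rng(1)
        data = np.vstack([rng.normal(0.0, 1.0, (500, 2)),
                          rng.normal(3.0, 0.5, (500, 2))]).T   # shape (d, n) = (2, 1000)

        kde = gaussian_kde(data)                  # bandwidth set by Scott's rule
        gx, gy = np.mgrid[-3:5:100j, -3:5:100j]
        points = np.vstack([gx.ravel(), gy.ravel()])
        density = kde(points).reshape(100, 100)   # naive evaluation: O(n) per grid point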

  5. Lévy matters VI: Lévy-type processes: moments, construction and heat kernel estimates

    CERN Document Server

    Kühn, Franziska

    2017-01-01

    Presenting some recent results on the construction and the moments of Lévy-type processes, the focus of this volume is on a new existence theorem, which is proved using a parametrix construction. Applications range from heat kernel estimates for a class of Lévy-type processes to existence and uniqueness theorems for Lévy-driven stochastic differential equations with Hölder continuous coefficients. Moreover, necessary and sufficient conditions for the existence of moments of Lévy-type processes are studied and some estimates on moments are derived. Lévy-type processes behave locally like Lévy processes but, in contrast to Lévy processes, they are not homogeneous in space. Typical examples are processes with varying index of stability and solutions of Lévy-driven stochastic differential equations. This is the sixth volume in a subseries of the Lecture Notes in Mathematics called Lévy Matters. Each volume describes a number of important topics in the theory or applications of Lévy processes and pays ...

  6. Heat kernel estimates for pseudodifferential operators, fractional Laplacians and Dirichlet-to-Neumann operators

    DEFF Research Database (Denmark)

    Gimperlein, Heiko; Grubb, Gerd

    2014-01-01

    The purpose of this article is to establish upper and lower estimates for the integral kernel of the semigroup exp(−tP) associated to a classical, strongly elliptic pseudodifferential operator P of positive order on a closed manifold. The Poissonian bounds generalize those obtained for perturbations of fractional powers of the Laplacian. In the selfadjoint case, extensions to t ∈ C+ are studied. In particular, our results apply to the Dirichlet-to-Neumann semigroup.
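
    For orientation, the Poissonian bounds referred to here have the same two-sided shape as the classical heat kernel estimate for the fractional Laplacian $(-\Delta)^{\alpha/2}$, $0<\alpha<2$, on $\mathbb{R}^d$ (a standard fact quoted for context, not a statement from this paper):

        $$ p_t(x,y) \asymp \min\Big( t^{-d/\alpha},\ \frac{t}{|x-y|^{d+\alpha}} \Big), $$

    that is, on-diagonal decay of order $t^{-d/\alpha}$ together with polynomial, rather than exponential, off-diagonal tails.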

  7. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view, as it asymptotically reduces ...

  8. Cowling–Price theorem and characterization of heat kernel on ...

    Indian Academy of Sciences (India)

    We extend the uncertainty principle, the Cowling–Price theorem, to non-compact Riemannian symmetric spaces. We establish a characterization of the heat kernel of the Laplace–Beltrami operator on such spaces from integral estimates of the Cowling–Price type.

  9. Graph Bundling by Kernel Density Estimation

    NARCIS (Netherlands)

    Hurter, C.; Ersoy, O.; Telea, A.

    We present a fast and simple method to compute bundled layouts of general graphs. For this, we first transform a given graph drawing into a density map using kernel density estimation. Next, we apply an image sharpening technique which progressively merges local height maxima by moving the convolved ...

  10. Heat kernel method and its applications

    CERN Document Server

    Avramidi, Ivan G

    2015-01-01

    The heart of the book is the development of a short-time asymptotic expansion for the heat kernel. This is explained in detail, and explicit examples of some advanced calculations are given. In addition, some advanced methods and extensions, including path integrals, jump diffusion and others, are presented. The book consists of four parts: Analysis, Geometry, Perturbations and Applications. The first part briefly reviews some background material and gives an introduction to PDEs. The second part is devoted to a short introduction to various aspects of differential geometry that will be needed later. The third part, the heart of the book, presents a systematic development of effective methods for various approximation schemes for parabolic differential equations. The last part is devoted to applications in financial mathematics, in particular stochastic differential equations. Although this book is intended for advanced undergraduate or beginning graduate students, it should also provide a useful reference ...

  11. The Kernel Estimation in Biosystems Engineering

    Directory of Open Access Journals (Sweden)

    Esperanza Ayuga Téllez

    2008-04-01

    In many fields of biosystems engineering it is common to find works in which the statistical information analysed violates the basic hypotheses necessary for conventional forecasting methods. For those situations it is necessary to find alternative methods that allow statistical analysis despite those infringements. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are used instead of an "a priori" assumption about the global shape of the target function (e.g., linear or quadratic). In this paper a few basic decision rules are stated for the application of the non-parametric estimation method. These statistical rules are the first step towards building a user-method interface for the consistent application of kernel estimation by non-expert users. To reach this aim, univariate and multivariate density function estimation methods were analysed, as well as regression estimators. In some cases, models to be applied in different situations, based on simulations, were defined. Different biosystems engineering applications of kernel estimation are also analysed in this review.

  12. Effects of sample size on KERNEL home range estimates

    Science.gov (United States)

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
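
    Since LSCV is the smoothing selector recommended above, here is a minimal one-dimensional sketch of the criterion (home range work uses the two-dimensional analogue); it relies on the closed form available for Gaussian kernels, and the function names are illustrative:

        import numpy as np

        def lscv_score(h, x):
            # Least-squares cross-validation score for a 1-D Gaussian KDE:
            # integral of fhat^2 minus twice the mean leave-one-out density.
            # Minimizing over h approximates minimizing integrated squared error.
            n = len(x)
            d = x[:, None] - x[None, :]
            phi = lambda u, s: np.exp(-0.5 * (u / s) ** 2) / (np.sqrt(2 * np.pi) * s)
            int_f2 = phi(d, np.sqrt(2) * h).sum() / n**2
            loo = (phi(d, h).sum() - n * phi(0.0, h)) / (n * (n - 1))
            return int_f2 - 2 * loo

        rng = np.random.default_rng(2)
        x = rng.normal(size=100)
        hs = np.linspace(0.05, 1.0, 60)
        h_lscv = hs[np.argmin([lscv_score(h, x) for h in hs])]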

  13. Sobolev Inequalities, Heat Kernels under Ricci Flow, and the Poincare Conjecture

    CERN Document Server

    Zhang, Qi S

    2010-01-01

    Focusing on Sobolev inequalities and their applications to analysis on manifolds and Ricci flow, "Sobolev Inequalities, Heat Kernels under Ricci Flow, and the Poincare Conjecture" introduces the field of analysis on Riemannian manifolds and uses the tools of Sobolev imbedding and heat kernel estimates to study Ricci flows, especially with surgeries. The author explains key ideas, difficult proofs, and important applications in a succinct, accessible, and unified manner. The book first discusses Sobolev inequalities in various settings, including the Euclidean case and the Riemannian case, ...

  14. Automated skin lesion segmentation with kernel density estimation

    Science.gov (United States)

    Pardo, A.; Real, E.; Fernandez-Barreras, G.; Madruga, F. J.; López-Higuera, J. M.; Conde, O. M.

    2017-07-01

    Skin lesion segmentation is a complex step for dermoscopy pathological diagnosis. Kernel density estimation is proposed as a segmentation technique based on the statistical distribution of color intensities in the lesion and non-lesion regions.

  15. Some estimates of the Bergman kernel of minimal bounded homogeneous domains

    OpenAIRE

    Ishi, Hideyuki; Yamaji, Satoshi

    2010-01-01

    We describe the Bergman kernel of any bounded homogeneous domain in a minimal realization relating to the Bergman kernels of the Siegel disks. Taking advantage of this expression, we obtain substantial estimates of the Bergman kernel of the homogeneous domain.

  16. The heat kernel as the pagerank of a graph

    Science.gov (United States)

    Chung, Fan

    2007-01-01

    The concept of pagerank first arose as a way of determining the ranking of Web pages by Web search engines. Based on relations in interconnected networks, pagerank has become a major tool for addressing fundamental problems arising in general graphs, especially for large information networks with hundreds of thousands of nodes. A notable notion of pagerank, introduced by Brin and Page and denoted by PageRank, is based on random walks as a geometric sum. In this paper, we consider a notion of pagerank that is based on the (discrete) heat kernel and can be expressed as an exponential sum of random walks. The heat kernel satisfies the heat equation and can be used to analyze many useful properties of random walks in a graph. A local Cheeger inequality is established, which implies that, by focusing on cuts determined by linear orderings of vertices using the heat kernel pageranks, the resulting partition is within a quadratic factor of the optimum. This is true even if we restrict the volume of the small part separated by the cut to be close to some specified target value. This leads to a graph partitioning algorithm whose running time is proportional to the size of the targeted volume (instead of the size of the whole graph).
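
    The exponential sum mentioned above translates directly into code. The sketch below (names and the series truncation length are illustrative) computes the heat kernel pagerank of a seed distribution f as e^(-t) * sum_k (t^k / k!) f W^k, with W the random walk transition matrix:

        import numpy as np

        def heat_kernel_pagerank(A, seed, t, n_terms=50):
            # rho_t = exp(-t) * sum_k (t^k / k!) * seed @ W^k, truncated at
            # n_terms, where W = D^{-1} A is the random walk matrix and
            # seed is a probability distribution over the vertices.
            deg = A.sum(axis=1)
            W = A / deg[:, None]                 # row-stochastic transition matrix
            rho = np.zeros_like(seed, dtype=float)
            term = seed.astype(float).copy()     # seed @ W^0
            coeff = np.exp(-t)                   # e^{-t} t^0 / 0!
            for k in range(n_terms):
                rho += coeff * term
                term = term @ W                  # advance the walk one step
                coeff *= t / (k + 1)             # e^{-t} t^{k+1} / (k+1)!
            return rho

        # Tiny example: a 4-cycle, heat started at vertex 0.
        A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
        rho = heat_kernel_pagerank(A, np.array([1.0, 0.0, 0.0, 0.0]), t=2.0)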

  17. General method of boundary correction in kernel regression estimation

    African Journals Online (AJOL)

    Kernel estimators of both density and regression functions are not consistent near the finite end points of their supports. In other words, boundary effects seriously affect the performance of these estimators. In this paper, we combine the transformation and the reflection methods in order to introduce a new general method of ...

  18. Kernel bandwidth estimation for non-parametric density estimation: a comparative study

    CSIR Research Space (South Africa)

    Van der Walt, CM

    2013-12-01

    We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...

  19. Moderate deviations principles for the kernel estimator of ...

    African Journals Online (AJOL)

    The aim of this paper is to provide pointwise and uniform moderate deviations principles for the kernel estimator of a nonrandom regression function. Moreover, we give an application of these moderate deviations principles to the construction of confidence regions for the regression function.

  20. Corruption clubs: empirical evidence from kernel density estimates

    NARCIS (Netherlands)

    Herzfeld, T.; Weiss, Ch.

    2007-01-01

    A common finding of many analytical models is the existence of multiple equilibria of corruption. Countries characterized by the same economic, social and cultural background do not necessarily experience the same levels of corruption. In this article, we use Kernel Density Estimation techniques to ...

  1. Observing integrals of heat kernels from a distance

    DEFF Research Database (Denmark)

    Heat kernels have integrals such as the Brownian motion mean exit time, potential capacity, and torsional rigidity. We show how to obtain bounds on these values, essentially by observing their behaviour in terms of the distance function from a point and then comparing with corresponding values in tailor-made warped product spaces. The results will be illustrated by applications to the so-called 'type' problem: how to decide if a given manifold or surface is transient (hyperbolic) or recurrent (parabolic). Specific examples of minimal surfaces and constant pressure dry foams will be shown...

  2. Non-intrusive Load Disaggregation Based on Kernel Density Estimation

    Science.gov (United States)

    Sen, Wang; Dongsheng, Yang; Chuchen, Guo; Shengxian, Du

    2017-05-01

    Aiming at the high cost and difficult implementation of high-frequency non-intrusive load disaggregation methods, this paper proposes a new method based on kernel density estimation (KDE) for low-frequency non-intrusive load monitoring (NILM). The method first establishes reference power models of the electrical loads under different working conditions and possible appliance combinations; probability distributions are then calculated as appliance features by kernel density estimation. After that, the target power data are segmented at step changes, the distributions of the segments are compared with the reference models, and the most similar reference model is chosen as the disaggregation result. The proposed approach was tested with data from the GREEND public data set and showed better energy disaggregation accuracy than many traditional NILM approaches, achieving more than 93% accuracy in simulation.

  3. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    The KDE is non-parametric, since no parametric distribution is imposed on the estimate; instead, the estimated distribution is defined by the sum of the kernel functions centred on the data points. KDEs thus require the selection of two design parameters... Understanding and modelling high-dimensional data has become a crucial activity, especially in the field of machine learning, since non-parametric density estimators are data-driven and do not require or impose a pre...

  4. A multi-resolution approach to heat kernels on discrete surfaces

    KAUST Repository

    Vaxman, Amir

    2010-07-26

    Studying the behavior of the heat diffusion process on a manifold is emerging as an important tool for analyzing the geometry of the manifold. Unfortunately, the high complexity of the computation of the heat kernel - the key to the diffusion process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two-dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel approximation method for the heat kernel at short times results in an efficient and robust algorithm for computing the heat kernels of detailed models. We show experimentally that our method can achieve good approximations in a fraction of the time required by traditional algorithms. Finally, we demonstrate how these heat kernels can be used to improve a diffusion-based feature extraction algorithm. © 2010 ACM.
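
    For a sense of the computation whose cost the paper attacks: the exact heat kernel of a discrete surface can be written through the eigendecomposition of its Laplacian as K_t = Phi exp(-t Lambda) Phi^T. The sketch below (a generic dense-matrix illustration, not the paper's multi-resolution method) makes plain the O(n^3) cost that becomes prohibitive for detailed meshes:

        import numpy as np

        def heat_kernel(L, t):
            # Dense heat kernel K_t = Phi exp(-t Lambda) Phi^T for a symmetric
            # (graph or cotangent) Laplacian L. Exact but O(n^3).
            lam, phi = np.linalg.eigh(L)
            return (phi * np.exp(-t * lam)) @ phi.T

        # Path graph on 5 vertices as a stand-in for a mesh Laplacian.
        n = 5
        L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        L[0, 0] = L[-1, -1] = 1.0
        K = heat_kernel(L, t=0.5)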

  5. Heat kernel expansion in the background field formalism

    CERN Document Server

    Barvinsky, Andrei

    2015-01-01

    Heat kernel expansion and background field formalism represent the combination of two calculational methods within the functional approach to quantum field theory. This approach implies the construction of generating functionals for matrix elements and expectation values of physical observables. These are functionals of arbitrary external sources or of the mean field of a generic configuration, the background field. Exact calculation of quantum effects on a generic background is impossible. However, a special integral (proper time) representation for the Green's function of the wave operator, the propagator of the theory, and its expansion in the ultraviolet and infrared limits of short and late proper time, respectively, allow one to construct approximations which are valid on generic background fields. Current progress of quantum field theory, its renormalization properties, model building in the unification of fundamental physical interactions and QFT applications in high energy physics, gravitation and...

  6. Velocity and acceleration of height growth using kernel estimation.

    Science.gov (United States)

    Gasser, T; Köhler, W; Müller, H G; Kneip, A; Largo, R; Molinari, L; Prader, A

    1984-01-01

    A method is introduced for estimating acceleration, velocity and distance of longitudinal growth curves and it is illustrated by analysing human height growth. This approach, called kernel estimation, belongs to the class of smoothing methods and does not assume an a priori fixed functional model, and not even that one and the same model is applicable for all children. The examples presented show that acceleration curves might allow a better quantification of the mid-growth spurt (MS) and a more differentiated analysis of the pubertal spurt (PS). Accelerations are prone to follow random variations present in the data, and parameters defined in terms of acceleration are, therefore, validated by a comparison with parameters defined in terms of velocity. Our non-parametric-curve-fitting approach is also compared with parametric fitting via a model suggested by Preece and Baines (1978).

  7. An Adaptive Background Subtraction Method Based on Kernel Density Estimation

    Directory of Open Access Journals (Sweden)

    Mignon Park

    2012-09-01

    In this paper, a pixel-based background modeling method, which uses nonparametric kernel density estimation, is proposed. To reduce the burden of image storage, we modify the original KDE method by using the first frame to initialize it and then updating it at every frame, controlling the learning rate according to the situation. We apply an adaptive threshold method based on image changes to effectively subtract dynamic backgrounds. The devised scheme allows the proposed method to automatically adapt to various environments and effectively extract the foreground. The method presented here exhibits good performance and is suitable for dynamic background environments. The algorithm is tested on various video sequences and compared with other state-of-the-art background subtraction methods to verify its performance.

  8. Estimating the Quadratic Variation Spectrum of Noisy Asset Prices Using Generalized Flat-top Realized Kernels

    DEFF Research Database (Denmark)

    Varneskov, Rasmus T.

    2014-01-01

    This paper analyzes a generalized class of flat-top realized kernels for estimation of the quadratic variation spectrum, i.e. the decomposition of quadratic variation into integrated variance and jump variation, when the underlying, efficient price process is contaminated by additive noise ... a higher-order advantage in terms of bias reduction. Extending the analysis to accommodate jumps in the underlying price process, the flat-top realized kernels are used to propose two classes of (medium) blocked realized kernels, which produce consistent, non-negative estimates of integrated variance. The blocked ...

  9. Free energy on a cycle graph and trigonometric deformation of heat kernel traces on odd spheres

    Science.gov (United States)

    Kan, Nahomi; Shiraishi, Kiyoshi

    2018-01-01

    We consider a possible ‘deformation’ of the trace of the heat kernel on odd dimensional spheres, motivated by the calculation of the free energy of a scalar field on a discretized circle. By using an expansion in terms of the modified Bessel functions, we obtain the values of the free energies after a suitable regularization.

  10. A heat kernel version of Cowling–Price theorem for the Laguerre ...

    Indian Academy of Sciences (India)

    A Heat Kernel Version of Cowling-Price Theorem for the Laguerre Hypergroup. Jizheng Huang. Proceedings – Mathematical Sciences, Volume 120, Issue 1, February 2010, pp. 73-81.

  11. Gamma Kernel Estimators for Density and Hazard Rate of Right-Censored Data

    Directory of Open Access Journals (Sweden)

    T. Bouezmarni

    2011-01-01

    The nonparametric estimation of the density and hazard rate functions for right-censored data using kernel smoothing techniques is considered. The "classical" fixed symmetric kernel type estimator of these functions performs well in the interior region, but it suffers from the problem of bias in the boundary region. Here, we propose new estimators based on gamma kernels for the density and the hazard rate functions. The estimators are free of boundary bias and achieve the optimal rate of convergence in terms of integrated mean squared error. The mean integrated squared error, the asymptotic normality, and the law of the iterated logarithm are studied. A comparison of the gamma estimators with the local linear estimator for the density function, and with the hazard rate estimator proposed by Müller and Wang (1994), both of which are free from boundary bias, is investigated by simulations.
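
    A minimal sketch of a Chen-type gamma kernel density estimator for nonnegative data follows (an illustration of the idea rather than the exact estimator studied in the paper; the smoothing parameter b and the names are illustrative):

        import numpy as np
        from scipy.stats import gamma

        def gamma_kde(x_grid, samples, b):
            # Gamma kernel estimate for data on [0, inf): each grid point x
            # uses a Gamma(shape = x/b + 1, scale = b) kernel, so no kernel
            # mass spills across the boundary at zero.
            out = np.empty(len(x_grid))
            for i, x in enumerate(x_grid):
                out[i] = gamma.pdf(samples, a=x / b + 1.0, scale=b).mean()
            return out

        rng = np.random.default_rng(3)
        data = rng.exponential(scale=1.0, size=300)   # nonnegative data
        grid = np.linspace(0.0, 5.0, 101)
        fhat = gamma_kde(grid, data, b=0.2)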

  12. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Rongda Chen

    Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rates may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, they have a serious defect: they cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds that Moody's new data show. In order to overcome this flaw, kernel density estimation is introduced, and we compare the simulation results of the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it can fit the curve of recovery rates of loans and bonds. Using kernel density estimation to precisely delineate the bimodal recovery rates of bonds is therefore optimal in credit risk management.

  13. Heat kernel analysis for Bessel operators on symmetric cones

    DEFF Research Database (Denmark)

    Möllers, Jan

    2014-01-01

    We investigate the heat equation corresponding to the Bessel operators on a symmetric cone $Ω=G/K$. These operators form a one-parameter family of elliptic self-adjoint second order differential operators and occur in the Lie algebra action of certain unitary highest weight representations ...

  14. Restoration of single image based on kernel estimation with L1-regularization method

    Science.gov (United States)

    Zhao, Minghua; Cao, Hui; Zhang, Xin; Shi, Zhenghao; Li, Peng

    2017-07-01

    Image restoration is a significant task in the fields of computer vision and image processing. Image restoration research consists of two aspects: kernel estimation and image restoration. A single-image restoration method based on L1-regularized blur kernel estimation is proposed in this paper. First, a bilateral filter is used to remove image noise effectively. Second, an improved shock filter is used to enhance the edge information of the image. Subsequently, the L1-regularization method is used to estimate the blur kernel of the blurred image alternately, with the Split-Bregman algorithm used to optimize the solution process. Finally, Hyper-Laplacian and sparse priors are applied to the image obtained from the non-blind deconvolution process. Experimental results show that, compared to other methods, better restoration results as well as improved computational efficiency can be achieved with the proposed method.

  15. Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling

    Directory of Open Access Journals (Sweden)

    Hyojin Lee

    2015-01-01

    Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares estimation of missing precipitation data through Kth nearest neighborhood (KNN) regression to the five different kernel estimations, and their performance in simulating streamflow using the Soil and Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher-quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
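
    A kernel-regression interpolation of the kind compared above can be sketched as a Nadaraya-Watson weighted average over surrounding stations, here with the Epanechnikov kernel (a generic illustration assuming Euclidean station coordinates; the names and bandwidth are illustrative):

        import numpy as np

        def epanechnikov(u):
            # One of the five kernels compared in the study.
            return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

        def kernel_interpolate(target_xy, station_xy, station_precip, h):
            # Estimate missing precipitation at target_xy as a kernel-weighted
            # average over surrounding stations (Nadaraya-Watson style).
            d = np.linalg.norm(station_xy - target_xy, axis=1)
            w = epanechnikov(d / h)
            if w.sum() == 0.0:
                return np.nan                    # no station within the bandwidth
            return np.sum(w * station_precip) / w.sum()

        stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
        rain = np.array([5.0, 7.0, 6.0, 12.0])
        estimate = kernel_interpolate(np.array([0.5, 0.5]), stations, rain, h=2.0)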

  16. The heat kernel for two Aharonov-Bohm solenoids in a uniform magnetic field

    Science.gov (United States)

    Šťovíček, Pavel

    2017-01-01

    A non-relativistic quantum model is considered, with a point particle carrying a charge e and moving in the plane pierced by two infinitesimally thin Aharonov-Bohm solenoids and subjected to a perpendicular uniform magnetic field of magnitude B. Relying on a technique, originally due to Schulman, Laidlaw and DeWitt, which is applicable to Schrödinger operators on multiply connected configuration manifolds, a formula is derived for the corresponding heat kernel. As an application of the heat kernel formula, approximate asymptotic expressions are derived for the lowest eigenvalue lying above the first Landau level and for the corresponding eigenfunction, assuming that |eB|R^2/(ħc) is large, where R is the distance between the two solenoids.

  17. New fractional derivatives with nonlocal and non-singular kernel: Theory and application to heat transfer model

    Directory of Open Access Journals (Sweden)

    Atangana Abdon

    2016-01-01

    In this manuscript we propose a new fractional derivative with a non-local and non-singular kernel. We present some useful properties of the new derivative and apply it to solve the fractional heat transfer model.

  18. Assessing the regression to the mean for non-normal populations via kernel estimators.

    Science.gov (United States)

    John, Majnu; Jawad, Abbas F

    2010-07-01

    Part of the change over time of a response in longitudinal studies may be attributed to regression to the mean. The component of change due to regression to the mean is more pronounced in subjects with extreme initial values. Das and Mulder proposed a nonparametric approach to estimate the regression to the mean. In this paper, Das and Mulder's method is made data-adaptive for empirical distributions via kernel estimation approaches, while retaining the original assumptions made by them. We use the best approaches for kernel density and hazard function estimation in our methods. This makes our approach extremely user-friendly for a practitioner via the state-of-the-art procedures and packages available in statistical software such as SAS and R for kernel density and hazard function estimation. We also estimate the standard error of our estimates of regression to the mean via nonparametric bootstrap methods. Finally, our methods are illustrated by analyzing the percent predicted FEV1 measurements available from the Cystic Fibrosis Foundation's National Patient Registry. The kernel-based approach presented in this article is a user-friendly method to assess the regression to the mean in non-normal populations.

  19. Development and application of traffic accident density estimation models using kernel density estimation

    Directory of Open Access Journals (Sweden)

    Seiji Hashimoto

    2016-06-01

    Traffic accident frequency has been decreasing in Japan in recent years. Nevertheless, many accidents still occur on residential roads. Area-wide traffic calming measures, including Zone 30, which discourages through traffic by setting a speed limit of 30 km/h in residential areas, have been implemented. However, no objective implementation method has been established. Development of a model for traffic accident density estimation explained by GIS data can enable dangerous areas to be determined objectively and easily, indicating where area-wide traffic calming should be implemented preferentially. This study examined the relations between traffic accidents and city characteristics, such as population, road factors, and spatial factors. A model was developed to estimate traffic accident density. Kernel density estimation (KDE) techniques were used to assess the relations efficiently. In addition, 16 models were developed by combining accident locations, accident types, and data types, and were used to examine the applicability of traffic accident density estimation models. Results obtained using Spearman rank correlation show high coefficients between the predicted and actual numbers of accidents. The model can indicate the relative accident risk in cities. Results of this study can be used for objective determination of areas where area-wide traffic calming should be implemented preferentially, even if sufficient traffic accident data are not available.

  20. How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.

    Science.gov (United States)

    Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J

    2014-09-01

    Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
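
    Of the selectors compared above, Silverman's rule of thumb is the simplest to state; a sketch of the standard formula (function name illustrative):

        import numpy as np

        def silverman_bandwidth(x):
            # h = 0.9 * min(sd, IQR / 1.34) * n^(-1/5)
            x = np.asarray(x, dtype=float)
            iqr = np.subtract(*np.percentile(x, [75, 25]))
            return 0.9 * min(x.std(ddof=1), iqr / 1.34) * x.size ** (-0.2)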

  1. Soft Sensor of Vehicle State Estimation Based on the Kernel Principal Component and Improved Neural Network

    Directory of Open Access Journals (Sweden)

    Haorui Liu

    2016-01-01

    In car control systems, it is hard to measure some key vehicle states directly and accurately when the vehicle is running on the road, and the cost of measurement is high as well. To address these problems, a vehicle state estimation method based on kernel principal component analysis and an improved Elman neural network is proposed. Combining a nonlinear vehicle model with three degrees of freedom (3 DOF: longitudinal, lateral, and yaw motion), this paper applies the method to the soft sensing of vehicle states. The simulation results of a double lane change, tested by Matlab/SIMULINK cosimulation, prove the KPCA-IENN algorithm (kernel principal component analysis and improved Elman neural network) to be quick and precise when tracking the vehicle states within the nonlinear region. This method can meet the software performance requirements of vehicle state estimation in precision, tracking speed, noise suppression, and other aspects.

  2. Heat kernel and Weyl anomaly of Schrödinger invariant theory

    Science.gov (United States)

    Pal, Sridip; Grinstein, Benjamín

    2017-12-01

    We propose a method inspired by discrete light cone quantization to determine the heat kernel for a Schrödinger field theory (Galilean boost invariant with z = 2 anisotropic scaling symmetry) living in d+1 dimensions, coupled to a curved Newton-Cartan background, starting from the heat kernel of a relativistic conformal field theory (z = 1) living in d+2 dimensions. We use this method to show that the Schrödinger field theory of a complex scalar field cannot have any Weyl anomalies. To be precise, we show that the Weyl anomaly $A^{G}_{d+1}$ for Schrödinger theory is related to the Weyl anomaly of a free relativistic scalar CFT $A^{R}_{d+2}$ via $A^{G}_{d+1} = 2\pi\delta(m)A^{R}_{d+2}$, where $m$ is the charge of the scalar field under particle number symmetry. We provide further evidence of the vanishing anomaly by evaluating Feynman diagrams to all orders of perturbation theory. We present an explicit calculation of the anomaly using a regulated Schrödinger operator, without using the null cone reduction technique. We generalize our method to show that a similar result holds for theories with a single time derivative and with even z > 2.

  3. Kernel Density Estimation based Self learning Sampling Strategy for Motion Planning of Repetitive Tasks

    Science.gov (United States)

    2016-10-09

    Thomas Fridolin Iversen and Lars-Peter ... optimized. The system is thereby self-learning and improves performance over time. The sampler is tested on a variety of planners and against other ... manages to sample fewer new configurations within the given time. The WS sampler is roughly as good as the uniform sampler, while the OB sampler is ...

  4. A New Entropy Formula and Gradient Estimates for the Linear Heat Equation on Static Manifold

    Directory of Open Access Journals (Sweden)

    Abimbola Abolarinwa

    2014-08-01

    In this paper we prove a new monotonicity formula for the heat equation via a generalized family of entropy functionals. This family of entropy formulas generalizes both Perelman's entropy for an evolving metric and Ni's entropy on a static manifold. We show that this entropy satisfies a pointwise differential inequality for the heat kernel, the consequences of which are various gradient and Harnack estimates for all positive solutions to the heat equation on a compact manifold.
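
    For reference, the prototype of the gradient estimates referred to here is the classical Li-Yau inequality (a standard result quoted for context, not taken from this paper): for a positive solution $u$ of the heat equation $\partial_t u = \Delta u$ on a compact $n$-manifold with nonnegative Ricci curvature,

        $$ \frac{|\nabla u|^2}{u^2} - \frac{\partial_t u}{u} \le \frac{n}{2t}, $$

    which integrates along space-time paths to a Harnack inequality.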

  5. Assessing the regression to the mean for non-normal populations via kernel estimators

    OpenAIRE

    John, Majnu; Jawad, Abbas F.

    2010-01-01

    Background: Part of the change over time of a response in longitudinal studies may be attributed to regression to the mean. The component of change due to regression to the mean is more pronounced in subjects with extreme initial values. Das and Mulder proposed a nonparametric approach to estimate the regression to the mean. Aim: In this paper, Das and Mulder's method is made data-adaptive for empirical distributions via kernel estimation approaches, while retaining the original a...

  6. About renormalization of the Yang - Mills theory and the approach to calculation of the heat kernel

    Science.gov (United States)

    Ivanov, Aleksandr

    2017-10-01

    The quantum Yang-Mills theory in four-dimensional space-time plays an important role in modern theoretical physics. Currently, this model contains many open problems and is therefore of great interest to mathematicians. This work consists of several parts; however, it only offers a new approach and is therefore methodological. The first part recalls the diagram technique and the mathematical background. Then the process of renormalization is explained; it is based on momentum cut-off regularization and is described in [1] and [2]. However, this type of regularization has several problems, and as a result only the first correction is calculated. After the common constructions and observations, the first correction is described in detail. Namely, the heat kernel is considered, since it plays the main role in this formalism. In particular, a method for calculating coefficients of arbitrary order is proposed.

  7. Estimation of a semiparametric regression model using the uniform kernel estimator (case study: DHF patients at Puri Raharja Hospital)

    Directory of Open Access Journals (Sweden)

    ANNA FITRIANI

    2015-11-01

    A semiparametric regression model combines parametric and nonparametric regression: some of the explanatory variables enter parametrically and the others nonparametrically. Independent variables that satisfy the parametric assumptions can be modeled by linear regression analysis, whereas those that do not are estimated nonparametrically. The smoothing of the nonparametric component of the regression curve in this study uses the uniform kernel function. The optimal semiparametric regression curve estimate is determined by the optimal weight, or bandwidth (h); the optimal bandwidth produces a smooth regression curve estimate in accordance with the pattern of the data, and is selected by minimizing the GCV criterion. The purpose of this study was to estimate the semiparametric regression function for dengue cases using uniform kernel estimators. The response is the recovery time of patients with Dengue Hemorrhagic Fever (DHF). There are six independent variables: age (years), body temperature (°C), pulse (beats/min), hematocrit (%), platelets, and duration of fever (days). Age, body temperature, pulse, platelets, and duration of fever form the parametric component, and hematocrit the nonparametric component. The optimal bandwidth (h) obtained from the minimum GCV criterion is 0.005. The MSE produced by multiple linear regression analysis is 0.031, while that of the semiparametric regression is 0.00437119.

  8. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Burke, Timothy P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kiedrowski, Brian C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Martin, William R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-11-19

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.

  9. Kernel density estimation and K-means clustering to profile road accident hotspots.

    Science.gov (United States)

    Anderson, Tessa K

    2009-05-01

    Identifying road accident hotspots plays a key role in determining effective strategies for reducing areas with a high density of accidents. This paper presents (1) a methodology using Geographical Information Systems (GIS) and kernel density estimation to study the spatial patterns of injury-related road accidents in London, UK, and (2) a clustering methodology using environmental data and results from the first section in order to create a classification of road accident hotspots. The use of this methodology is illustrated using the London area in the UK. Road accident data collected by the Metropolitan Police from 1999 to 2003 were used. A kernel density estimation map was created and subsequently disaggregated by cell density to create a basic spatial unit of an accident hotspot. Environmental data were then appended to the hotspot cells and, using K-means clustering, groups of similar hotspots were identified. Five groups and 15 clusters were created based on collision and attribute data. These clusters are discussed and evaluated according to their robustness and potential uses in road safety campaigning.
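
    The two-step pipeline described above (a density surface first, then clustering of high-density cells) can be sketched with SciPy and scikit-learn; the coordinates below are synthetic stand-ins for the police data, and a real study would append environmental attributes to each cell before clustering:

        import numpy as np
        from scipy.stats import gaussian_kde
        from sklearn.cluster import KMeans

        # Hypothetical accident coordinates.
        rng = np.random.default_rng(4)
        accidents = np.vstack([rng.normal([0.0, 0.0], 0.05, (300, 2)),
                               rng.normal([1.0, 1.0], 0.10, (200, 2))])

        # Step 1: kernel density surface on a grid.
        kde = gaussian_kde(accidents.T)
        gx, gy = np.mgrid[-0.5:1.5:100j, -0.5:1.5:100j]
        density = kde(np.vstack([gx.ravel(), gy.ravel()]))

        # Step 2: keep only high-density cells (hotspots), then cluster them.
        hot = density > np.quantile(density, 0.95)
        cells = np.vstack([gx.ravel()[hot], gy.ravel()[hot], density[hot]]).T
        labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(cells)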

  10. Quantum Einstein gravity. Advancements of heat kernel-based renormalization group studies

    Energy Technology Data Exchange (ETDEWEB)

    Groh, Kai

    2012-10-15

    The asymptotic safety scenario allows one to define a consistent theory of quantized gravity within the framework of quantum field theory. The central conjecture of this scenario is the existence of a non-Gaussian fixed point of the theory's renormalization group flow, which allows one to formulate renormalization conditions that render the theory fully predictive. Investigations of this possibility use an exact functional renormalization group equation as a primary non-perturbative tool. This equation implements Wilsonian renormalization group transformations and is demonstrated to represent a reformulation of the functional integral approach to quantum field theory. As its main result, this thesis develops an algebraic algorithm which allows one to systematically construct the renormalization group flow of gauge theories as well as gravity in arbitrary expansion schemes. In particular, it uses off-diagonal heat kernel techniques to efficiently handle the non-minimal differential operators which appear due to gauge symmetries. The central virtue of the algorithm is that no additional simplifications need to be employed, opening the possibility for more systematic investigations of the emergence of non-perturbative phenomena. As a by-product, several novel results on the heat kernel expansion of the Laplace operator acting on general gauge bundles are obtained. The constructed algorithm is used to re-derive the renormalization group flow of gravity in the Einstein-Hilbert truncation, showing the manifest background independence of the results. The well-studied Einstein-Hilbert case is further advanced by taking the effect of a running ghost field renormalization on the gravitational coupling constants into account. A detailed numerical analysis reveals a further stabilization of the found non-Gaussian fixed point. Finally, the proposed algorithm is applied to the case of higher derivative gravity including all curvature squared interactions. This establishes an improvement ...

  11. Kernel density estimators of home range: smoothing and the autocorrelation red herring.

    Science.gov (United States)

    Fieberg, John

    2007-04-01

    Two oft-cited drawbacks of kernel density estimators (KDEs) of home range are their sensitivity to the choice of smoothing parameter(s) and their need for independent data. Several simulation studies have been conducted to compare the performance of objective, data-based methods of choosing optimal smoothing parameters in the context of home range and utilization distribution (UD) estimation. Lost in this discussion of choice of smoothing parameters is the general role of smoothing in data analysis, namely, that smoothing serves to increase precision at the cost of increased bias. A primary goal of this paper is to illustrate this bias-variance trade-off by applying KDEs to sampled locations from simulated movement paths. These simulations will also be used to explore the role of autocorrelation in estimating UDs. Autocorrelation can be reduced (1) by increasing study duration (for a fixed sample size) or (2) by decreasing the sampling rate. While the first option will often be reasonable, for a fixed study duration higher sampling rates should always result in improved estimates of space use. Further, KDEs with typical data-based methods of choosing smoothing parameters should provide competitive estimates of space use for fixed study periods unless autocorrelation substantially alters the optimal level of smoothing.

  12. Flat-Top Realized Kernel Estimation of Quadratic Covariation with Non-Synchronous and Noisy Asset Prices

    DEFF Research Database (Denmark)

    Varneskov, Rasmus T.

    This paper extends the class of generalized flat-top realized kernels, introduced in Varneskov (2011), to the multivariate case, where quadratic covariation of non-synchronously observed asset prices is estimated in the presence of market microstructure noise that is allowed to exhibit serial ... problems. These transformations are all shown to inherit the desirable asymptotic properties of the generalized flat-top realized kernels. A simulation study shows that the class of estimators has a superior finite sample tradeoff between bias and root mean squared error relative to competing estimators ... Lastly, two small empirical applications to high frequency stock market data illustrate the bias reduction relative to competing estimators in estimating correlations, realized betas, and mean-variance frontiers, as well as the use of the new estimators in the dynamics of hedging.

  13. A framework for parameter estimation and model selection in kernel deep stacking networks.

    Science.gov (United States)

    Welchowski, Thomas; Schmid, Matthias

    2016-06-01

    Kernel deep stacking networks (KDSNs) are a novel method for supervised learning in biomedical research. Belonging to the class of deep learning techniques, KDSNs are based on artificial neural network architectures that involve multiple nonlinear transformations of the input data. Unlike traditional artificial neural networks, KDSNs do not rely on backpropagation algorithms but on an efficient fitting procedure that is based on a series of kernel ridge regression models with closed-form solutions. Although being computationally advantageous, KDSN modeling remains a challenging task, as it requires the specification of a large number of tuning parameters. We propose a new data-driven framework for parameter estimation, hyperparameter tuning, and model selection in KDSNs. The proposed methodology is based on a combination of model-based optimization and hill climbing approaches that do not require the pre-specification of any of the KDSN tuning parameters. We demonstrate the performance of KDSNs by analyzing three medical data sets on hospital readmission of diabetes patients, coronary artery disease, and hospital costs. Our numerical studies show that the run-time of the proposed KDSN methodology is significantly shorter than the respective run-time of grid search strategies for hyperparameter tuning. They also show that KDSN modeling is competitive in terms of prediction accuracy with other state-of-the-art techniques for statistical learning. KDSNs are a computationally efficient approximation of backpropagation-based artificial neural network techniques. Application of the proposed methodology results in a fast tuning procedure that generates KDSN fits having a similar prediction accuracy as other techniques in the field of deep learning. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Estimation of relevant variables on high-dimensional biological patterns using iterated weighted kernel functions.

    Directory of Open Access Journals (Sweden)

    Sergio Rojas-Galeano

    2008-03-01

    The analysis of complex proteomic and genomic profiles involves the identification of significant markers within a set of hundreds or even thousands of variables that represent a high-dimensional problem space. The occurrence of noise, redundancy or combinatorial interactions in the profile makes the selection of relevant variables harder. Here we propose a method to select variables based on their estimated relevance to hidden patterns. Our method combines a weighted-kernel discriminant with an iterative stochastic probability estimation algorithm to discover the relevance distribution over the set of variables. We verified the ability of our method to select predefined relevant variables in synthetic proteome-like data and then assessed its performance on biological high-dimensional problems. Experiments were run on serum proteomic datasets of infectious diseases. The resulting variable subsets achieved classification accuracies of 99% on Human African Trypanosomiasis, 91% on Tuberculosis, and 91% on Malaria serum proteomic profiles, with fewer than 20% of the variables selected. Our method scaled up to dimensionalities of much higher orders of magnitude, as shown with gene expression microarray datasets on which we obtained classification accuracies close to 90% with fewer than 1% of the total number of variables. Our method consistently found relevant variables attaining high classification accuracies across synthetic and biological datasets. Notably, it yielded very compact subsets compared to the original number of variables, which should simplify downstream biological experimentation.

  15. Non-parametric kernel density estimation of species sensitivity distributions in developing water quality criteria of metals.

    Science.gov (United States)

    Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie

    2015-09-01

    Due to the use of different parametric models for establishing species sensitivity distributions (SSDs), comparisons of water quality criteria (WQC) for metals of the same group or period in the periodic table are uncertain and results can be biased. To address this inadequacy, a new probabilistic model based on non-parametric kernel density estimation was developed, and optimal bandwidths and testing methods are proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC of metals for protection of aquatic species in China, which were compared and contrasted with WQC for other jurisdictions. HC5 values for protection of different types of species were derived for the three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and use in assessment of risks to ecosystems.
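
    To illustrate the kernel density route to an SSD, the sketch below fits a Gaussian KDE to hypothetical log-transformed toxicity values and reads the HC5 (the concentration hazardous to only 5% of species) off as the 5th percentile of the fitted distribution. The toxicity data are invented, and scipy's default bandwidth (Scott's rule) stands in for the optimal bandwidth selection developed in the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical acute toxicity values (e.g. LC50 in ug/L) for a set of species.
tox = np.array([12., 35., 48., 90., 150., 210., 400., 780., 1500., 3200.])
log_tox = np.log10(tox)

kde = gaussian_kde(log_tox)            # non-parametric SSD in log space
grid = np.linspace(log_tox.min() - 1, log_tox.max() + 1, 2000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]                         # numerical CDF of the fitted density

# HC5: the 5th percentile of the SSD, i.e. protective of 95% of species.
hc5 = 10 ** grid[np.searchsorted(cdf, 0.05)]
print(f"HC5 = {hc5:.1f} ug/L")
```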

  16. Sharp Observability Estimates for Heat Equations

    OpenAIRE

    Ervedoza, Sylvain; Zuazua, Enrique

    2011-01-01

    The goal of this article is to derive new estimates for the cost of observability of heat equations. We have developed a new method allowing one to show that when the corresponding wave equation is observable, the heat equation is also observable. This method allows one to describe the explicit dependence of the observability constant on the geometry of the problem (the domain in which the heat process evolves and the observation subdomain). We show that our estimate is sharp in some cases, p...

  17. Kerfdr: a semi-parametric kernel-based approach to local false discovery rate estimation

    Directory of Open Access Journals (Sweden)

    Robin Stephane

    2009-03-01

    Full Text Available Abstract Background The use of current high-throughput genetic, genomic and post-genomic data leads to the simultaneous evaluation of a large number of statistical hypotheses and, at the same time, to the multiple-testing problem. As an alternative to the overly conservative Family-Wise Error Rate (FWER), the False Discovery Rate (FDR) has emerged over the last ten years as a more appropriate way to handle this problem. However, one drawback of the FDR is that it is tied to a given rejection region for the test statistics, attributing the same value to statistics close to the boundary and to those far from it. As a result, the local FDR has recently been proposed to quantify the specific probability for a given null hypothesis to be true. Results In this context we present a semi-parametric approach based on kernel estimators which is applied to different high-throughput biological data such as patterns in DNA sequences, gene expression and genome-wide association studies. Conclusion The proposed method has the practical advantages, over existing approaches, of considering complex heterogeneities in the alternative hypothesis, of taking into account prior information (from an expert judgment or previous studies) by allowing a semi-supervised mode, and of dealing with truncated distributions such as those obtained in Monte-Carlo simulations. This method has been implemented and is available through the R package kerfdr via the CRAN or at http://stat.genopole.cnrs.fr/software/kerfdr.

  18. Predictive analysis and mapping of indoor radon concentrations in a complex environment using kernel estimation: An application to Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Kropat, Georg, E-mail: georg.kropat@chuv.ch [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Bochud, Francois [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Jaboyedoff, Michel [Faculty of Geosciences and Environment, University of Lausanne, GEOPOLIS — 3793, 1015 Lausanne (Switzerland); Laedermann, Jean-Pascal [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Murith, Christophe; Palacios, Martha [Swiss Federal Office of Public Health, Schwarzenburgstrasse 165, 3003 Berne (Switzerland); Baechler, Sébastien [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Swiss Federal Office of Public Health, Schwarzenburgstrasse 165, 3003 Berne (Switzerland)

    2015-02-01

    Purpose: The aim of this study was to develop models based on kernel regression and probability estimation in order to predict and map indoor radon concentrations (IRC) in Switzerland by taking into account all of the following: architectural factors, spatial relationships between the measurements, as well as geological information. Methods: We looked at about 240 000 IRC measurements carried out in about 150 000 houses. As predictor variables we included building type, foundation type, year of construction, detector type, geographical coordinates, altitude, temperature and lithology in the kernel estimation models. We developed predictive maps as well as a map of the local probability to exceed 300 Bq/m³. Additionally, we developed a map of a confidence index in order to estimate the reliability of the probability map. Results: Our models were able to explain 28% of the variation in the IRC data. All variables added information to the model. The model estimation revealed a bandwidth for each variable, making it possible to characterize the influence of each variable on the IRC estimation. Furthermore, we assessed the mapping characteristics of kernel estimation overall as well as by municipality. Overall, our model reproduces spatial IRC patterns which were already obtained earlier. On the municipal level, we could show that our model accounts well for IRC trends within municipal boundaries. Finally, we found that different building characteristics result in different IRC maps. Maps corresponding to detached houses with concrete foundations indicate systematically smaller IRC than maps corresponding to farms with earth foundations. Conclusions: IRC mapping based on kernel estimation is a powerful tool to predict and analyze IRC on a large scale as well as at a local level. This approach enables the development of tailor-made maps for different architectural elements and measurement conditions while accounting for geological information and spatial relations between IRC measurements.
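
    The flavour of such a kernel regression can be conveyed with a multivariate Nadaraya-Watson estimator in which each predictor gets its own bandwidth, mirroring the per-variable bandwidths the study uses to characterize variable influence. Everything in the sketch (the three generic predictors, the bandwidths, the fake IRC response) is hypothetical.

```python
import numpy as np

def nadaraya_watson(X, y, x0, bandwidths):
    """Multivariate Nadaraya-Watson kernel regression estimate at x0."""
    u = (X - x0) / bandwidths                 # scale each predictor by its bandwidth
    w = np.exp(-0.5 * np.sum(u**2, axis=1))   # product Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)

# Toy usage with three standardized predictors and a fake IRC response (Bq/m3).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 100 + 30 * X[:, 0] + rng.normal(scale=10, size=500)
print(nadaraya_watson(X, y, x0=np.zeros(3), bandwidths=np.array([0.5, 1.0, 2.0])))
```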

  19. Estimation of source infrared spectra profiles of acetylspiramycin active components from troches using kernel independent component analysis

    Science.gov (United States)

    Wang, Guoqing; Ding, Qingzhu; Sun, Yu'an; He, Linghao; Sun, Xiaoli

    2008-08-01

    Kernel independent component analysis (KICA), a kernel-based class of independent component analysis (ICA) algorithms, was preliminarily investigated for blind source separation (BSS) of source spectra profiles from troches. The robustness of different ICA algorithms (KICA, FastICA and Infomax) was first checked by using them in the retrieval of source infrared (IR), ultraviolet (UV) and mass spectra (MS) from synthetic mixtures. It was found that KICA is the most robust method for retrieval of source spectra profiles. The KICA algorithm was subsequently adopted in the analysis of diffuse reflection IR of acetylspiramycin (ASPM) troches. It was observed that KICA is able to isolate the theoretically predicted spectral features corresponding to the ASPM active components, excipients and other minor components as different independent (spectral) components. A troche can be authenticated and semi-quantified using the estimated ICs. KICA is a useful method for estimation of source spectral features of molecules with different geometry and stoichiometry, while features belonging to very similar molecules remain grouped.

  20. The Effect of Moisture Content and Temperature on the Specific Heat Capacity of Nut and Kernel of Two Iranian Pistachio Varieties

    Directory of Open Access Journals (Sweden)

    A.R Salari Kia

    2014-04-01

    Full Text Available Pistachio has a special ranking among Iranian agricultural products. Iran is known as the largest producer and exporter of pistachio in the world. Agricultural products are subjected to different thermal treatments during storage and processing. Designing all these processes requires thermal parameters of the products such as specific heat capacity. Regarding the importance of pistachio processing as an exportable product, in this study the specific heat capacity of the nut and kernel of two varieties of Iranian pistachio (Kalle-Ghochi and Badami) was investigated at four levels of moisture content (initial moisture content (5%), 15%, 25% and 40% w.b.) and three levels of temperature (40, 50 and 60°C). In both varieties, the differences between the data were significant at the 1% probability level; however, the effect of moisture content was greater than that of temperature. The results indicated that the specific heat capacity of both nuts and kernels increases logarithmically with moisture content and linearly with temperature. For the nuts and kernels of the Kalle-Ghochi and Badami varieties, this parameter varied within the ranges of 1.039-2.936 kJ kg-1 K-1, 1.236-3.320 kJ kg-1 K-1, 0.887-2.773 kJ kg-1 K-1 and 0.811-2.914 kJ kg-1 K-1, respectively. Moreover, for any given level of temperature, the specific heat capacity of kernels was higher than that of nuts. Finally, regression models with high R2 values were developed to predict the specific heat capacity of pistachio varieties as a function of moisture content and temperature.
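
    The reported functional form, logarithmic in moisture and linear in temperature, can be fitted by ordinary least squares. The calibration points below are invented for illustration; they are not values from the study.

```python
import numpy as np

# Hypothetical calibration data: moisture (% w.b.), temperature (deg C), cp (kJ/kg/K).
M = np.array([5, 5, 15, 15, 25, 25, 40, 40], dtype=float)
T = np.array([40, 60, 40, 60, 40, 60, 40, 60], dtype=float)
cp = np.array([1.05, 1.20, 1.60, 1.75, 2.10, 2.25, 2.80, 2.95])

# Model cp = a + b*T + c*ln(M): linear in temperature, logarithmic in moisture.
A = np.column_stack([np.ones_like(M), T, np.log(M)])
(a, b, c), *_ = np.linalg.lstsq(A, cp, rcond=None)
print(f"cp = {a:.3f} + {b:.4f}*T + {c:.3f}*ln(M)")
```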

  1. Approximation of the breast height diameter distribution of two-cohort stands by mixture models. III. Kernel density estimators vs. mixture models

    Science.gov (United States)

    Rafal Podlaski; Francis A. Roesch

    2014-01-01

    Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...

  2. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement error of certain types and can also handle non-synchronous trading. It is the first estimator...

  3. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...

  4. Combining Lactic Acid Spray with Near-Infrared Radiation Heating To Inactivate Salmonella enterica Serovar Enteritidis on Almond and Pine Nut Kernels.

    Science.gov (United States)

    Ha, Jae-Won; Kang, Dong-Hyun

    2015-07-01

    The aim of this study was to investigate the efficacy of near-infrared radiation (NIR) heating combined with lactic acid (LA) sprays for inactivating Salmonella enterica serovar Enteritidis on almond and pine nut kernels and to elucidate the mechanisms of the lethal effect of the NIR-LA combined treatment. Also, the effect of the combination treatment on product quality was determined. Separately prepared S. Enteritidis phage type (PT) 30 and non-PT 30 S. Enteritidis cocktails were inoculated onto almond and pine nut kernels, respectively, followed by treatments with NIR or 2% LA spray alone, NIR with distilled water spray (NIR-DW), and NIR with 2% LA spray (NIR-LA). Although surface temperatures of nuts treated with NIR were higher than those subjected to NIR-DW or NIR-LA treatment, more S. Enteritidis survived after NIR treatment alone. The effectiveness of NIR-DW and NIR-LA was similar, but significantly more sublethally injured cells were recovered from NIR-DW-treated samples. We confirmed that the enhanced bactericidal effect of the NIR-LA combination may not be attributable to cell membrane damage per se. NIR heat treatment might allow S. Enteritidis cells to become permeable to applied LA solution. The NIR-LA treatment (5 min) did not significantly (P > 0.05) cause changes in the lipid peroxidation parameters, total phenolic contents, color values, moisture contents, and sensory attributes of nut kernels. Given the results of the present study, NIR-LA treatment may be a potential intervention for controlling food-borne pathogens on nut kernel products. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  5. Nonlinear Denoising and Analysis of Neuroimages With Kernel Principal Component Analysis and Pre-Image Estimation

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard

    2012-01-01

    procedure is performed within a data-driven split-half evaluation framework. ii) We introduce manifold navigation for exploration of a nonlinear data manifold, and illustrate how pre-image estimation can be used to generate brain maps in the continuum between experimentally defined brain states/classes. We...

  6. On the asymptotic expansion of the Bergman kernel

    Science.gov (United States)

    Seto, Shoo

    Let (L, h) → (M, ω) be a polarized Kähler manifold. We define the Bergman kernel for H0(M, L^k), the space of holomorphic sections of high tensor powers of the line bundle L. In this thesis, we study the asymptotic expansion of the Bergman kernel. We consider the on-diagonal, near-diagonal and far off-diagonal regimes, using L2 estimates to show the existence of the asymptotic expansion and to compute the coefficients in the on- and near-diagonal cases, and a heat kernel approach to show the exponential decay of the off-diagonal Bergman kernel for noncompact manifolds, assuming only a lower bound on Ricci curvature and C2 regularity of the metric.

  7. Benchmarks for detecting 'breakthroughs' in clinical trials: empirical assessment of the probability of large treatment effects using kernel density estimation.

    Science.gov (United States)

    Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin

    2014-10-21

    To understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed, we applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting a treatment with large effects is 10% (5-25%), and that the probability of detecting a treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
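
    The construction can be sketched with scipy's weighted Gaussian KDE: fit a density to observed effect sizes (here invented log hazard ratios with arbitrary weights) and integrate the tail beyond a chosen threshold for a "large" effect. The threshold and all data are assumptions of the sketch, and a plain weighted KDE stands in for the weighted adaptive KDE used in the study.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
log_hr = rng.normal(loc=-0.05, scale=0.2, size=400)   # fake observed log hazard ratios
weights = rng.uniform(0.5, 2.0, size=400)             # e.g. proportional to trial size

kde = gaussian_kde(log_hr, weights=weights)           # weighted density of effects

# Probability of a "large" beneficial effect, here defined as HR < 0.7.
p_large = kde.integrate_box_1d(-np.inf, np.log(0.7))
print(f"P(HR < 0.7) = {p_large:.3f}")
```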

  8. Surface renewal method for estimating sensible heat flux | Mengistu ...

    African Journals Online (AJOL)

    For short canopies, latent energy flux may be estimated using a shortened surface energy balance from measurements of sensible and soil heat flux and the net irradiance at the surface. The surface renewal (SR) method for estimating sensible heat, latent energy, and other scalar fluxes has the advantage over other ...

  9. Repeatedly heated palm kernel oil induces hyperlipidemia, atherogenic indices and hepatorenal toxicity in rats: Beneficial role of virgin coconut oil supplementation.

    Science.gov (United States)

    Famurewa, Ademola C; Nwankwo, Onyebuchi E; Folawiyo, Abiola M; Igwe, Emeka C; Epete, Michael A; Ufebe, Odomero G

    2017-01-01

    The literature reports that the health benefits of vegetable oil can be degraded by repeated heating, which leads to lipid oxidation and the formation of free radicals. Virgin coconut oil (VCO) is emerging as a functional food oil and its health benefits are attributed to its potent polyphenolic compounds. We investigated the beneficial effect of VCO supplementation on lipid profile, liver and kidney markers in rats fed repeatedly heated palm kernel oil (HPO). Rats were divided into four groups (n = 5). The control group rats were fed a normal diet; group 2 rats were fed a 10% VCO supplemented diet; group 3 was administered 10 ml HPO/kg b.w. orally; group 4 was fed 10% VCO + 10 ml HPO/kg for 28 days. Subsequently, serum markers of liver damage (ALT, AST, ALP and albumin), kidney damage (urea, creatinine and uric acid), lipid profile and lipid ratios as cardiovascular risk indices were evaluated. HPO induced a significant increase in serum markers of liver and kidney damage as well as concomitant lipid abnormalities and a marked reduction in serum HDL-C. The lipid ratios evaluated for atherogenic and coronary risk indices in rats administered HPO only were remarkably higher than in controls. It was observed that VCO supplementation attenuated the biochemical alterations, including the indices of cardiovascular risks. VCO supplementation demonstrates beneficial health effects against HPO-induced biochemical alterations in rats. VCO may serve to modulate the adverse effects associated with consumption of repeatedly heated palm kernel oil.

  10. Comparative study of species sensitivity distributions based on non-parametric kernel density estimation for some transition metals.

    Science.gov (United States)

    Wang, Ying; Feng, Chenglian; Liu, Yuedan; Zhao, Yujie; Li, Huixian; Zhao, Tianhui; Guo, Wenjing

    2017-02-01

    Transition metals in the fourth period of the periodic table of the elements are widespread in aquatic environments. They often occur at concentrations that can cause adverse effects on aquatic life and human health. Generally, parametric models are mostly used to construct species sensitivity distributions (SSDs), so comparisons of water quality criteria (WQC) for elements in the same period or group of the periodic table might be inaccurate and the results could be biased. To address this inadequacy, the non-parametric kernel density estimation (NPKDE), with its optimal bandwidths and testing methods, was developed for establishing SSDs. The NPKDE showed better fit, more robustness and better predictions than conventional normal and logistic parametric density estimations for constructing SSDs and deriving acute HC5 and WQC for transition metals in the fourth period of the periodic table. The decreasing sequence of HC5 values for the transition metals in the fourth period was Ti > Mn > V > Ni > Zn > Cu > Fe > Co > Cr(VI), which was not proportional to atomic number in the periodic table, and for different metals the relatively sensitive species were also different. The results indicated that, besides physical and chemical properties, other factors affect the toxicity mechanisms of transition metals. The proposed method enriches the methodological foundation for WQC. It also provides a relatively innovative, accurate approach for WQC derivation and risk assessment of same-group and same-period metals in aquatic environments, supporting the protection of aquatic organisms. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Realized kernels in practice

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger

    2009-01-01

    Realized kernels use high-frequency data to estimate the daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock and find a remarkable level of agreement. We identify some features of the high-frequency data which are challenging for realized kernels. These arise when there are local trends in the data, over periods of around 10 minutes, during which the prices and quotes are driven up or down. These can be associated...

  12. Asymptotic Approximations to the Bias and Variance of a Kernel-Type Estimator of the Intensity of the Cyclic Poisson Process with the Linear Trend

    Directory of Open Access Journals (Sweden)

    I Wayan Mangku

    2012-02-01

    Full Text Available From the previous research, a kernel-type estimator of the intensity of the cyclic Poisson process with the linear trend has been constructed using a single realization of the Poisson process observed in a bounded interval. This proposed estimator has been proved to be consistent as the size of the observation interval tends to infinity. In this paper, asymptotic approximations to its bias, variance and MSE (Mean-Squared-Error) are computed. Asymptotically optimal bandwidth is also derived.
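
    For intuition, the basic kernel-type estimator of the cyclic component can be sketched as follows: event times are folded modulo the period and smoothed with a kernel, and counts are normalised by the number of observed cycles. The linear-trend correction analysed in the paper is omitted here, so this is only an illustrative simplification.

```python
import numpy as np

def cyclic_intensity(event_times, T_obs, period, s, h):
    """Kernel estimate of the cyclic intensity at phase s (trend term omitted).

    event_times: observed Poisson event times in [0, T_obs];
    h: kernel bandwidth, applied to the circular phase distance.
    """
    phases = np.mod(event_times, period)
    d = np.abs(phases - s)
    d = np.minimum(d, period - d)                    # circular distance to s
    k = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return k.sum() / (T_obs / period)                # normalise by number of cycles
```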

  13. Revised Estimate of Earth's Surface Heat Flow: 47 ± 2 TW

    Science.gov (United States)

    Davies, J. H.; Davies, D. R.

    2012-04-01

    Earth's surface heat flow provides a fundamental constraint on solid Earth dynamics. However, deriving an estimate of the total surface heat flux is complex, due to the inhomogeneous distribution of heat flow measurements and difficulties in measuring heat flux in young oceanic crust, arising from hydrothermal circulation. A database of 38,347 measurements (provided by G. Laske & G. Masters), representing a 55% increase on the number of measurements used previously, and the methods of Geographical Information Science (GIS) are used to derive a revised estimate of Earth's surface heat flux (Davies & Davies, 2010). To account for hydrothermal circulation in young oceanic crust, we use a model estimate of the heat flux, following the work of Jaupart et al., 2007; for the rest of the globe, in an attempt to overcome the inhomogeneous distribution of measurements, we develop an average for different geological units. Two digital geology data sets are used to define the global geology: (i) continental geology - Hearn et al., 2003; and (ii) the global data-set of CCGM - Commission de la Carte Géologique du Monde, 2000. This leads to > 93,000 polygons defining Earth's geology. The influence of clustering is limited by intersecting the geology polygons with a 1 by 1 degree (at the equator) equal area grid. The average heat flow is evaluated for each geology class. The contribution of each geology class to the global surface heat flow is derived by multiplying this estimated average surface heat flux with the area of that geology class. The surface heat flow contributions of all the geology classes are summed. For Antarctica we use an estimate based on depth to the Curie temperature, and we include a 1 TW contribution from hot-spots in young ocean regions. Geology classes with fewer than 50 readings are excluded. The raw data suggest that this method of correlating heat flux with geology has some power. Our revised estimate for Earth's global surface heat flux is 47 ± 2 TW.

  14. Integrating K-means Clustering with Kernel Density Estimation for the Development of a Conditional Weather Generation Downscaling Model

    Science.gov (United States)

    Chen, Y.; Ho, C.; Chang, L.

    2011-12-01

    In recent decades, climate change caused by global warming has increased the occurrence frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely known tools; they show possible weather conditions under the pre-defined CO2 emission scenarios announced by the IPCC. Because the study area of GCMs is the entire earth, the grid sizes of GCMs are much larger than the basin scale. To overcome this gap, a statistical downscaling technique can transform the regional-scale weather factors into basin-scale precipitation. Statistical downscaling techniques can be divided into three categories: transfer functions, weather generators and weather typing. The first two categories describe the relationships between the weather factors and precipitation based, respectively, on deterministic algorithms, such as linear or nonlinear regression and ANNs, and stochastic approaches, such as Markov chain theory and statistical distributions. Weather typing clusters the weather factors, which are high-dimensional and continuous variables, into a limited number of discrete weather types. In this study, the proposed downscaling model integrates weather typing, using the K-means clustering algorithm, and a weather generator, using kernel density estimation. The study area is the Shihmen basin in northern Taiwan. The research process contains two steps, a calibration step and a synthesis step. Three sub-steps were used in the calibration step. First, weather factors, such as pressures, humidities and wind speeds, obtained from NCEP, and the precipitation observed at rainfall stations were collected for downscaling. Second, K-means clustering grouped the weather factors into four weather types. Third, the Markov chain transition matrixes and the

  15. Estimation of kernels mass ratio to total in-shell peanuts using low-cost RF impedance meter

    Science.gov (United States)

    Kandala, Chari V.; Sundaram, Jaya; Hinson, Brad

    2010-04-01

    In this study, the percentage of total kernel mass within a given mass of in-shell peanuts was determined nondestructively using a low-cost RF impedance meter. Peanut samples were divided into two groups, one the calibration and the other the validation group. Each group contained 50 samples of about 100 g of peanuts. Capacitance (C), phase angle (θ) and impedance (Z) measurements on in-shell peanut samples were made at frequencies of 1 MHz, 5 MHz and 9 MHz. To minimize errors due to the orientation of the peanuts as they settle between the electrodes of the impedance meter, ten measurements were made on each sample set, emptying and refilling the sample after each measurement. After completing the measurements on each set, the peanuts from that set were shelled, and the kernels were separated and weighed. A multiple linear regression (MLR) calibration equation was developed by correlating the percentage of kernel mass in a given peanut sample set with the measured capacitance, impedance and phase angle values. This equation was used to predict the kernel mass ratio of the samples from the validation group. The fitness of the MLR equation was verified using the Standard Error of Prediction (SEP) and the Root Mean Square Error of Prediction (RMSEP). Also, the predictability of the total kernel mass ratio was assessed by comparing the mass ratio predicted using the MLR model with the actual mass ratio determined by the conventional standard method of visual determination.
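
    The calibration step reduces to an ordinary multiple linear regression of kernel mass percentage on the nine electrical readings (C, θ and Z at each of the three frequencies). The sketch below uses randomly generated stand-ins for the measured data; only the structure mirrors the procedure described above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X_cal = rng.normal(size=(50, 9))            # C, theta, Z at 1, 5, 9 MHz (fake data)
y_cal = rng.uniform(60, 75, size=50)        # kernel mass percentage (fake data)

mlr = LinearRegression().fit(X_cal, y_cal)  # MLR calibration equation
y_hat = mlr.predict(X_cal)

# RMSEP would normally be computed on the separate validation group.
rmsep = np.sqrt(np.mean((y_cal - y_hat) ** 2))
print(f"RMSEP = {rmsep:.2f} %")
```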

  16. Estimating the Heat of Formation of Foodstuffs and Biomass

    Energy Technology Data Exchange (ETDEWEB)

    Burnham, Alan K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2010-11-23

    Calorie estimates for expressing the energy content of food are common; however, they are inadequate for the purpose of estimating the chemically defined heat of formation of foodstuffs for two reasons. First, they assume utilization factors by the body.1,2,3 Second, they are usually based on average values for their components. The best way to solve this problem would be to measure the heat of combustion of each material of interest. The heat of formation can then be calculated from the elemental composition and the heats of formation of CO2, H2O, and SO2. However, heats of combustion are not always available. Sometimes only an elemental analysis is available, or in other cases, a breakdown into protein, carbohydrates, and lipids. A simple way is needed to calculate the heat of formation from the various sorts of data commonly available. This report presents improved correlations relating the heats of combustion and formation to the elemental composition, moisture content, and ash content. The correlations are also able to calculate heats of combustion of carbohydrates, proteins, and lipids individually, including how they depend on elemental composition. The starting point for these correlations is the set of relationships commonly used to estimate the heat of combustion of fossil fuels, which have been modified slightly to agree better with the range of chemical structures found in foodstuffs and biomass.
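
    The Hess's-law step described above is easy to make concrete. Using standard enthalpies of formation of the combustion products, the heat of formation of a fuel follows from its measured heat of combustion; the glucose figures below are a textbook check, not data from the report.

```python
# Standard enthalpies of formation of combustion products, kJ/mol.
DHF = {"CO2": -393.5, "H2O_liq": -285.8, "SO2": -296.8}

def heat_of_formation(n_C, n_H, n_S, dH_combustion):
    """dH_combustion in kJ/mol, negative for exothermic combustion.

    Fuel + O2 -> n_C CO2 + (n_H/2) H2O(l) + n_S SO2, and dHf(O2) = 0, so
    dHf(fuel) = sum of product dHf values - dH_combustion.
    """
    products = n_C * DHF["CO2"] + (n_H / 2) * DHF["H2O_liq"] + n_S * DHF["SO2"]
    return products - dH_combustion

# Glucose C6H12O6: dHc = -2803 kJ/mol gives dHf = -1273 kJ/mol, the accepted value.
print(heat_of_formation(6, 12, 0, -2803.0))
```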

  17. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  18. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

  19. Estimation of mass ratio of the total kernels within a sample of in-shell peanuts using RF Impedance Method

    Science.gov (United States)

    It would be useful to know the total kernel mass within a given mass of peanuts (mass ratio) while the peanuts are bought or being processed. In this work, the possibility of finding this mass ratio while the peanuts were in their shells was investigated. Capacitance, phase angle and dissipation fa...

  20. Estimating local heat transfer coefficients from thin wall temperature measurements

    Science.gov (United States)

    Gazizov, I. M.; Davletshin, I. A.; Paereliy, A. A.

    2017-09-01

    An approach to the experimental estimation of the local heat transfer coefficient on a plane wall is described. The approach is based on measurements of the heat-transfer fluid and wall temperatures during a certain period of wall cooling. The wall was a thin plate, a printed circuit board made of composite epoxy material covered with a copper layer. The temperature field can be considered uniform across the plate thickness when heat transfer is moderate and the thermal resistance of the plate in the transversal direction is low. This significantly simplifies the heat balance written for the wall sections, which is used to estimate the heat transfer coefficient. The copper layer on the plate, etched to form a single strip, acted as a resistance thermometer that measured the local temperature of the wall.
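
    The lumped balance that this setup permits can be written in a few lines: with a uniform temperature across the thin wall, the coefficient follows from the cooling rate. The sketch neglects lateral conduction along the plate, an assumption that holds only where such terms are small.

```python
import numpy as np

def local_h(t, T_wall, T_fluid, rho, cp, delta):
    """Local heat transfer coefficient from a thin-wall cooling record.

    Lumped balance over wall thickness delta:
        rho * cp * delta * dT_wall/dt = -h * (T_wall - T_fluid)
    t: times (s); T_wall, T_fluid: temperatures (K); returns h in W/m2/K.
    """
    dTdt = np.gradient(T_wall, t)
    return -rho * cp * delta * dTdt / (T_wall - T_fluid)
```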

  1. Do we really need a large number of particles to simulate bimolecular reactive transport with random walk methods? A kernel density estimation approach

    Science.gov (United States)

    Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-12-01

    Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is on the order of 10⁶-10⁹, the number of reactive molecules even in diluted systems might be on the order of fractions of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in loss of accuracy, but may also lead to an improper reproduction of the mixing process, which is limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that allows obtaining the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of the molecules represented by that particle, rigorously described by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the observed deviation in the reaction vs time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process. Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in diluted systems should be modeled based on alternative mechanistic models and not on a
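
    The pairwise construction can be sketched in one dimension: each particle carries a Gaussian kernel, and the probability density that the molecules of two particles co-locate is the convolution of the two kernels evaluated at the particle separation. Scaling by a rate constant and a time step, both placeholders here, gives a reaction probability.

```python
import numpy as np

def pair_collision_density(x1, x2, h1, h2):
    """Co-location density for two particles with Gaussian kernels (1-D sketch).

    The convolution of two Gaussians with bandwidths h1 and h2 is a Gaussian
    with variance h1**2 + h2**2, evaluated at the separation x1 - x2.
    """
    s2 = h1**2 + h2**2
    d = x1 - x2
    return np.exp(-0.5 * d * d / s2) / np.sqrt(2.0 * np.pi * s2)

# Illustrative reaction probability over a step dt with rate constant kf:
# p_react = kf * dt * pair_collision_density(x1, x2, h1, h2)
```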

  2. Kernel-based variance component estimation and whole-genome prediction of pre-corrected phenotypes and progeny tests for dairy cow health traits

    Science.gov (United States)

    Morota, Gota; Boddhireddy, Prashanth; Vukasinovic, Natascha; Gianola, Daniel; DeNise, Sue

    2014-01-01

    Prediction of complex trait phenotypes in the presence of unknown gene action is an ongoing challenge in animals, plants, and humans. Development of flexible predictive models that perform well irrespective of genetic and environmental architectures is desirable. Methods that can address non-additive variation in a non-explicit manner are gaining attention for this purpose and, in particular, semi-parametric kernel-based methods have been applied to diverse datasets, mostly providing encouraging results. On the other hand, the gains obtained from these methods have been smaller when smoothed values such as estimated breeding value (EBV) have been used as response variables. However, less emphasis has been placed on the choice of phenotypes to be used in kernel-based whole-genome prediction. This study aimed to evaluate differences between semi-parametric and parametric approaches using two types of response variables and molecular markers as inputs. Pre-corrected phenotypes (PCP) and EBV obtained for dairy cow health traits were used for this comparison. We observed that non-additive genetic variances were major contributors to total genetic variances in PCP, whereas additivity was the largest contributor to variability of EBV, as expected. Within the kernels evaluated, non-parametric methods yielded slightly better predictive performance across traits relative to their additive counterparts regardless of the type of response variable used. This reinforces the view that non-parametric kernels aiming to capture non-linear relationships between a panel of SNPs and phenotypes are appealing for complex trait prediction. However, like past studies, the gain in predictive correlation was not large for either PCP or EBV. We conclude that capturing non-additive genetic variation, especially epistatic variation, in a cross-validation framework remains a significant challenge even when it is important, as seems to be the case for health traits in dairy cows. PMID:24715901

  3. Kernel-based variance component estimation and whole-genome prediction of pre-corrected phenotypes and progeny tests for dairy cow health traits

    Directory of Open Access Journals (Sweden)

    Gota eMorota

    2014-03-01

    Full Text Available Prediction of complex trait phenotypes in the presence of unknown gene action is an ongoing challenge in animals, plants, and humans. Development of flexible predictive models that perform well irrespective of genetic and environmental architectures is desirable. Methods that can address non-additive variation in a non-explicit manner are gaining attention for this purpose and, in particular, semi-parametric kernel-based methods have been applied to diverse datasets, mostly providing encouraging results. On the other hand, the gains obtained from these methods have been smaller when smoothed values such as estimated breeding value (EBV) have been used as response variables. However, less emphasis has been placed on the choice of phenotypes to be used in kernel-based whole-genome prediction. This study aimed to evaluate differences between semi-parametric and parametric approaches using two types of response variables and molecular markers as inputs. Pre-corrected phenotypes (PCP) and EBV obtained for dairy cow health traits were used for this comparison. We observed that non-additive genetic variances were major contributors to total genetic variances in PCP, whereas additivity was the largest contributor to variability of EBV, as expected. Within the kernels evaluated, non-parametric methods yielded slightly better predictive performance across traits relative to their additive counterparts regardless of the type of response variable used. This reinforces the view that non-parametric kernels aiming to capture non-linear relationships between a panel of SNPs and phenotypes are appealing for complex trait prediction. However, like past studies, the gain in predictive correlation was not large for either PCP or EBV. We conclude that capturing non-additive genetic variation, especially epistatic variation, in a cross-validation framework remains a significant challenge even when it is important, as seems to be the case for health traits in dairy cows.

  4. Coupling heat and chemical tracer experiments for estimating heat transfer parameters in shallow alluvial aquifers.

    Science.gov (United States)

    Wildemeersch, S; Jamin, P; Orban, P; Hermans, T; Klepikova, M; Nguyen, F; Brouyère, S; Dassargues, A

    2014-11-15

    Geothermal energy systems, closed or open, are increasingly considered for heating and/or cooling buildings. The efficiency of such systems depends on the thermal properties of the subsurface. Therefore, feasibility and impact studies performed prior to their installation should include a field characterization of thermal properties and a heat transfer model using parameter values measured in situ. However, there is a lack of in situ experiments and methodology for performing such a field characterization, especially for open systems. This study presents an in situ experiment designed for estimating heat transfer parameters in shallow alluvial aquifers, with a focus on the specific heat capacity. The experiment consists of simultaneously injecting hot water and a chemical tracer into the aquifer and monitoring the evolution of groundwater temperature and concentration in the recovery well (and possibly in other piezometers located down gradient). Temperature and concentrations are then used for estimating the specific heat capacity. The first method for estimating this parameter is based on modeling in series the chemical tracer and temperature breakthrough curves at the recovery well. The second method is based on an energy balance. The values of specific heat capacity estimated by the two methods (2.30 and 2.54 MJ/m³/K) for the experimental site in the alluvial aquifer of the Meuse River (Belgium) are almost identical and consistent with values found in the literature. Temperature breakthrough curves in other piezometers are not required for estimating the specific heat capacity. However, they highlight that heat transfer in the alluvial aquifer of the Meuse River is complex and contrasted, with different dominant processes depending on depth, leading to significant vertical heat exchange between the upper and lower parts of the aquifer. Furthermore, these temperature breakthrough curves could be included in the calibration of a complex heat transfer model for

  5. Heat Load Estimator for Smoothing Pulsed Heat Loads on Supercritical Helium Loops

    Science.gov (United States)

    Hoa, C.; Lagier, B.; Rousset, B.; Bonnay, P.; Michel, F.

    Superconducting magnets for fusion are subjected to large variations of heat loads due to the cycling operation of tokamaks. The cryogenic system must operate smoothly to extract the pulsed heat loads by circulating supercritical helium into the coils and structures. However, the total heat load and its temporal variation are not known before the plasma scenario starts. A real-time heat load estimator is of interest for the process control of the cryogenic system, in order to anticipate the arrival of pulsed heat loads at the refrigerator and, ultimately, to optimize the operation of the cryogenic system. The large variation of the thermal loads affects the physical parameters of the supercritical helium loop (pressure, temperature, mass flow), so these signals can be used to calculate instantaneously the loads deposited into the loop. The article addresses the methodology and algorithm for estimating the heat load deposition before it reaches the refrigerator. The CEA patented process control has been implemented in a Programmable Logic Controller (PLC) and has been successfully validated on the HELIOS test facility at CEA Grenoble. This heat load estimator is complementary to pulsed load smoothing strategies, providing an estimation of the optimized refrigeration power. It can also effectively improve the process control during transients between different operating modes by adjusting the refrigeration power to the need. In this way, the heat load estimator contributes to the safe operation of the cryogenic system.
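
    Underlying any such estimator is the steady-flow energy balance of the loop, Q = mdot * (h_out - h_in), evaluated from the measured pressures, temperatures and mass flow. The sketch below is only this balance, not the CEA patented algorithm, and it assumes the third-party CoolProp property library is installed.

```python
from CoolProp.CoolProp import PropsSI  # helium equation of state

def loop_heat_load(m_dot, T_in, T_out, p_in, p_out):
    """Instantaneous heat deposited into a supercritical helium loop (W).

    Steady-flow energy balance from measured loop signals:
    mass flow (kg/s), inlet/outlet temperatures (K) and pressures (Pa).
    """
    h_in = PropsSI("H", "T", T_in, "P", p_in, "Helium")     # J/kg
    h_out = PropsSI("H", "T", T_out, "P", p_out, "Helium")  # J/kg
    return m_dot * (h_out - h_in)

# e.g. 0.1 kg/s of helium heated from 4.4 K to 4.8 K at about 5 bar:
print(loop_heat_load(0.1, 4.4, 4.8, 5.0e5, 4.8e5))
```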

  6. A two-dimensional inverse heat conduction problem for estimating heat source

    OpenAIRE

    Shidfar, A.; Zakeri, A.; Neisi, A.

    2005-01-01

    This note considers the problem of estimating the unknown time-varying strength of a time-dependent heat source from measurements of the temperature inside the square domain, when prior knowledge of the source functions is not available. This problem is an inverse heat conduction problem. In this process, the direct problem is solved by using the heat fundamental solution. Then a sequential algorithm is developed to solve a Volterra integral equation, which has been produced by...

  7. Higher-order Gaussian kernel in bootstrap boosting algorithm ...

    African Journals Online (AJOL)

    The bootstrap boosting algorithm is a bias reduction scheme. The adoption of a higher-order Gaussian kernel in a bootstrap boosting algorithm in kernel density estimation was investigated. The algorithm used the higher-order Gaussian kernel instead of the regular fixed kernels. A comparison of the scheme with existing ...

  8. Radiative heat transfer estimation in pipes with various wall emissivities

    Science.gov (United States)

    Robin, Langebach; Christoph, Haberstroh

    2017-02-01

    Radiative heat transfer is usually of substantial importance in cryogenics when systems are designed and thermal budgeting is carried out. However, the contribution of pipes is commonly assumed to be comparably low, since the warm and cold ends as well as their cross sections are fairly small. Nevertheless, for a first assessment of each pipe, rough estimates are always appreciated. To estimate the radiative heat transfer with traditional "paper and pencil" methods, there is only one analytical case available in the literature: the case of plane-parallel plates. This case can only be used to calculate the theoretical lower and upper asymptotic values of the radiative heat transfer, since pipe wall radiation properties are not taken into account. For this paper we investigated the estimation of radiative heat transfer in pipes with various wall emissivities with the help of numerical simulations. From a number of calculation series we derived an empirical extension of the plane-parallel-plate approach. The model equation can be used to carry out enhanced paper-and-pencil estimates of the radiative heat transfer through pipes without requiring numerical simulations.

  9. An inverse hyperbolic heat conduction problem in estimating surface heat flux by the conjugate gradient method

    Energy Technology Data Exchange (ETDEWEB)

    Huang, C.-H.; Wu, H.-H. [Department of Systems and Naval Mechatronic Engineering National Cheng Kung University Tainan, Taiwan 701 (China)

    2006-09-21

    In the present study an inverse hyperbolic heat conduction problem is solved by the conjugate gradient method (CGM) to estimate the unknown boundary heat flux based on boundary temperature measurements. The solutions obtained for this inverse problem are validated using numerical experiments in which three different heat flux distributions are to be determined. Results show that the inverse solutions can always be obtained with arbitrary initial guesses of the boundary heat flux. Moreover, the drawbacks of a previous study of this similar inverse problem, namely that (1) the inverse solution has phase error and (2) the inverse solution is sensitive to measurement error, are avoided in the present algorithm. Finally, it is concluded that accurate boundary heat flux can be estimated in this study.

  10. Estimation of transient heat transfer coefficients in multidimensional problems by using inverse heat transfer methods

    Science.gov (United States)

    Osman, Arafa Mohamed

    1987-05-01

    The inverse heat transfer problem is one of considerable practical interest in the analysis and design of experimental heat transfer investigations. The analytical and experimental investigation of inverse heat transfer coefficients in multi-dimensional convective heat transfer applications is examined. One application considered is the sudden quenching of a hot solid in a cold liquid. Other applications include the thermal analysis of forced convection over impulsively started solid bodies and the investigation of short-duration wind tunnel experiments. The primary aim is to describe methods and algorithms for the solution of the ill-posed inverse heat transfer coefficient problem. The solution method used is an extension of the sequential future-information method of Beck. Numerical experiments are conducted for a systematic investigation of the developed algorithms on selected heat transfer coefficient test cases. The overall objective of the experimental work is to investigate the early transients in the heat transfer coefficients from spheres in one- and two-dimensional quenching experiments. Several experiments were performed by plunging hollow spheres into either ethylene glycol or water. The developed methods are used in the analysis of the quenching experiments for the estimation of the transient heat transfer coefficients. Analysis of the results indicates that the transient inverse technique has the capability of estimating early transients and subsequent quasi-steady state values of the heat transfer coefficients in a single transient experiment.

  11. Anthropogenic heat flux estimation from space: first results

    Science.gov (United States)

    Chrysoulakis, Nektarios; Heldens, Wieke; Gastellu-Etchegorry, Jean-Philippe; Grimmond, Sue; Feigenwinter, Christian; Lindberg, Fredrik; Del Frate, Fabio; Klostermann, Judith; Mitraka, Zina; Esch, Thomas; Albitar, Ahmad; Gabey, Andrew; Parlow, Eberhard; Olofson, Frans

    2016-04-01

    While Earth Observation (EO) has made significant advances in the study of urban areas, there are several unanswered science and policy questions to which it could contribute. To this aim, the recently launched Horizon 2020 project URBANFLUXES (URBan ANthropogenic heat FLUX from Earth observation Satellites) investigates the potential of EO to retrieve the anthropogenic heat flux, a key component of the urban energy budget. The anthropogenic heat flux is the heat flux resulting from vehicular emissions, space heating and cooling of buildings, industrial processing and the metabolic heat release by people. Optical, thermal and SAR data from existing satellite sensors are used to improve the accuracy of the calculated spatial distribution of the radiation balance, with in-situ reflectance measurements of urban materials used for calibration. EO-based methods are developed for estimating turbulent sensible and latent heat fluxes, as well as urban heat storage flux and anthropogenic heat flux spatial patterns at city scale and local scale, by employing an energy budget closure approach. Independent methods and models are engaged to evaluate the derived products, and statistical analyses provide uncertainty measures as well. The ultimate goal of URBANFLUXES is to develop a highly automated method for estimating urban energy budget components for use with Copernicus Sentinel data, enabling its integration into applications and operational services. Thus, URBANFLUXES prepares the ground for further innovative exploitation of European space data in scientific activities (i.e. Earth system modelling and climate change studies in cities) and future and emerging applications (i.e. sustainable urban planning) by exploiting the improved data quality, coverage and revisit times of the Copernicus data. The URBANFLUXES products will therefore have the potential to support both sustainable planning strategies to improve the quality of life in cities, as well as Earth system models to

  12. Estimation of convection heat and mass transfer coefficients for ...

    African Journals Online (AJOL)

    Estimation of convection heat and mass transfer coefficients for constant-rate drying period during tape casting. Y T Puyate. No abstract available. Global Journal of Engineering Research Vol. 6 (1) 2007: pp. 75-77.

  13. Recov'Heat: An estimation tool of urban waste heat recovery potential in sustainable cities

    Science.gov (United States)

    Goumba, Alain; Chiche, Samuel; Guo, Xiaofeng; Colombert, Morgane; Bonneau, Patricia

    2017-02-01

    Waste heat recovery is considered an efficient way to increase carbon-free green energy utilization and to reduce greenhouse gas emissions. Especially in urban areas, several sources such as sewage water, industrial processes, waste incineration plants, etc., are still rarely explored. Their integration into a district heating system providing heating and/or domestic hot water could be beneficial for both energy companies and local governments. EFFICACITY, a French research institute focused on urban energy transition, has developed an estimation tool for different waste heat sources potentially exploitable in a sustainable city. This article presents the development method of such a decision-making tool, which, by providing both energy and economic analyses, helps local communities and energy service companies carry out preliminary studies of heat recovery projects.

  14. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  15. An inverse heat conduction problem of estimating thermal conductivity

    CERN Document Server

    Shidfar, S

    2002-01-01

    In this paper we consider an inverse heat conduction problem. We define the inverse and direct problems and solve the direct problem by the method of lines. We estimate the thermal conductivity k(u), which is assumed to have the polynomial form k(u) = k0 + k1 u + ... + kN u^N, continuously in the direction normal to the surface of a sample plate.

  16. Measurement and mapping of socially perceived impact related to pollutant industries using a kernel density estimator and GIS. The case study of Ventanas industrial site (Chile

    Directory of Open Access Journals (Sweden)

    Antonio Moreno Jiménez

    2017-08-01

    Full Text Available The environmental effects of human activities shape so-called externality fields around them, manifested either as immissions and physical impacts, or as negative experiences affecting the population. The measurement of this social “bad-being” and its spatial expression is the main issue of this contribution, which tackles the case of residents near a large industrial complex in Chile. To this end, a household survey, subsequently geo-referenced, was designed and conducted to determine the perception of various environmental impacts. To quantify the socio-spatial “bad-being”, a kernel density estimator was applied, since it yields GIS-based surfaces and expressive maps representing the various negative experiences, as well as a synthetic index of them. In this way, the approach provides a new means of capturing, in both magnitude and location, these elusive effects, which could be applicable in environmental impact assessment.
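
    The mapping step can be sketched with a weighted two-dimensional KDE: survey locations act as sample points and the perceived-impact scores act as kernel weights, yielding a continuous “bad-being” surface that a GIS can render. The coordinates and scores below are synthetic placeholders.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
xy = rng.uniform(0, 5000, size=(2, 300))      # survey locations (m), synthetic
score = rng.uniform(0, 10, size=300)          # perceived impact per household

# Impact scores weight the kernels, so high-complaint areas dominate the surface.
kde = gaussian_kde(xy, weights=score)

gx, gy = np.meshgrid(np.linspace(0, 5000, 100), np.linspace(0, 5000, 100))
surface = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
```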

  17. Methods to Estimate Acclimatization to Urban Heat Island Effects on Heat- and Cold-Related Mortality.

    Science.gov (United States)

    Milojevic, Ai; Armstrong, Ben G; Gasparrini, Antonio; Bohnenstengel, Sylvia I; Barratt, Benjamin; Wilkinson, Paul

    2016-07-01

    Investigators have examined whether heat mortality risk is increased in neighborhoods subject to the urban heat island (UHI) effect but have not identified degrees of difference in susceptibility to heat and cold between cool and hot areas, which we call acclimatization to the UHI. We developed methods to examine and quantify the degree of acclimatization to heat- and cold-related mortality in relation to UHI anomalies and applied these methods to London, UK. Case-crossover analyses were undertaken on 1993-2006 mortality data from London UHI decile groups defined by anomalies from the London average of modeled air temperature at a 1-km grid resolution. We estimated how UHI anomalies modified excess mortality on cold and hot days for London overall and displaced a fixed-shape temperature-mortality function ("shifted spline" model). We also compared the observed associations with those expected under no or full acclimatization to the UHI. The relative risk of death on hot versus normal days differed very little across UHI decile groups. A 1°C UHI anomaly multiplied the risk of heat death by 1.004 (95% CI: 0.950, 1.061) (interaction rate ratio) compared with the expected value of 1.070 (1.057, 1.082) if there were no acclimatization. The corresponding UHI interaction for cold was 1.020 (0.979, 1.063) versus 1.030 (1.026, 1.034) (actual versus expected under no acclimatization, respectively). Fitted splines for heat shifted little across UHI decile groups, again suggesting acclimatization. For cold, the splines shifted somewhat in the direction of no acclimatization, but did not exclude acclimatization. We have proposed two analytical methods for estimating the degree of acclimatization to the heat- and cold-related mortality burdens associated with UHIs. The results for London suggest relatively complete acclimatization to the UHI effect on summer heat-related mortality, but less clear evidence for cold-related mortality. Milojevic A, Armstrong BG, Gasparrini A

  18. Online Capacity Estimation of Lithium-Ion Batteries Based on Novel Feature Extraction and Adaptive Multi-Kernel Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2015-11-01

    Full Text Available Prognostics is necessary to ensure the reliability and safety of lithium-ion batteries for hybrid electric vehicles or satellites. This can be achieved by capacity estimation, which is a direct fading indicator for assessing the state of health of a battery. However, the capacity of a lithium-ion battery onboard is difficult to monitor. This paper presents a data-driven approach for online capacity estimation. First, six novel features are extracted from charge/discharge cycles and used as indirect health indicators. An adaptive multi-kernel relevance vector machine (MKRVM) based on an accelerated particle swarm optimization algorithm is then used to determine the optimal parameters of the MKRVM and to characterize the relationship between the extracted features and battery capacity. The overall estimation process comprises offline and online stages. A supervised learning step in the offline stage is established for model verification to ensure the generalizability of the MKRVM for online application. Cross-validation is further conducted to validate the performance of the proposed model. Experiment and comparison results show the effectiveness, accuracy, efficiency, and robustness of the proposed approach for online capacity estimation of lithium-ion batteries.
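
    The paper's MKRVM is not in standard libraries; as a hedged stand-in, the sketch below combines two base kernels with fixed weights and fits a kernel ridge regressor, which illustrates the multi-kernel idea. The PSO tuning of weights and kernel parameters is omitted, and all data are synthetic.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

# Hypothetical training data: rows = cycles, columns = 6 extracted health
# indicators; y = measured capacity (Ah). All values are synthetic placeholders.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
y = 2.0 - 0.01 * np.arange(100) + rng.normal(scale=0.02, size=100)

def multi_kernel(A, B, w=(0.7, 0.3)):
    # Weighted sum of base kernels; in the paper the weights and kernel
    # parameters are tuned by accelerated PSO, which is omitted here.
    return w[0] * rbf_kernel(A, B, gamma=0.5) + w[1] * polynomial_kernel(A, B, degree=2)

model = KernelRidge(kernel="precomputed", alpha=1e-2)
model.fit(multi_kernel(X, X), y)

X_new = rng.normal(size=(5, 6))
print(model.predict(multi_kernel(X_new, X)))   # online capacity estimates
```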

  19. The Impacts of Heating Strategy on Soil Moisture Estimation Using Actively Heated Fiber Optics.

    Science.gov (United States)

    Dong, Jianzhi; Agliata, Rosa; Steele-Dunne, Susan; Hoes, Olivier; Bogaard, Thom; Greco, Roberto; van de Giesen, Nick

    2017-09-13

    Several recent studies have highlighted the potential of Actively Heated Fiber Optics (AHFO) for high resolution soil moisture mapping. In AHFO, the soil moisture can be calculated from the cumulative temperature (Tcum), the maximum temperature (Tmax), or the soil thermal conductivity determined from the cooling phase after heating (λ). This study investigates the performance of the Tcum, Tmax and λ methods for different heating strategies, i.e., differences in the duration and input power of the applied heat pulse. The aim is to compare the three approaches and to determine which is best suited to field applications where the power supply is limited. Results show that increasing the input power of the heat pulses makes it easier to differentiate between dry and wet soil conditions, which leads to an improved accuracy. Results suggest that if the power supply is limited, the heating strength is insufficient for the λ method to yield accurate estimates. Generally, the Tcum and Tmax methods have similar accuracy. If the input power is limited, increasing the heat pulse duration can improve the accuracy of the AHFO method for both of these techniques. In particular, extending the heating duration can significantly increase the sensitivity of Tcum to soil moisture. Hence, the Tcum method is recommended when the input power is limited. Finally, results also show that up to 50% of the cable temperature change during the heat pulse can be attributed to soil background temperature, i.e., soil temperature changed by the net solar radiation. A method is proposed to correct this background temperature change. Without correction, soil moisture information can be completely masked by the background temperature error.
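
    A minimal sketch of the Tcum quantity, assuming the cable temperature is sampled once per second: integrate the rise above the pre-pulse baseline over the pulse window. The pulse timing and the synthetic response are assumptions; in practice a site-specific calibration curve maps Tcum to volumetric soil moisture.

```python
import numpy as np

def t_cum(temps, times, t_start, t_end):
    """Cumulative temperature rise (K*s) over the heat pulse window."""
    baseline = temps[times < t_start].mean()        # pre-pulse soil temperature
    mask = (times >= t_start) & (times <= t_end)
    rise = temps[mask] - baseline
    dt = np.diff(times[mask])
    return np.sum(0.5 * (rise[1:] + rise[:-1]) * dt)   # trapezoidal rule

times = np.arange(0.0, 300.0, 1.0)                     # s
temps = 20.0 + np.where(times > 60.0,
                        3.0 * (1.0 - np.exp(-(times - 60.0) / 40.0)), 0.0)
print(t_cum(temps, times, t_start=60.0, t_end=180.0))
# Wetter soil conducts heat away faster, giving a smaller T_cum.
```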

  20. Simulation of fluid, heat transport to estimate desert stream infiltration

    Science.gov (United States)

    Kulongoski, J.T.; Izbicki, J.A.

    2008-01-01

    In semiarid regions, the contribution of infiltration from intermittent streamflow to ground water recharge may be quantified by comparing simulations of fluid and heat transport beneath stream channels to observed ground temperatures. In addition to quantifying natural recharge, streamflow infiltration estimates provide a means to characterize the physical properties of stream channel sediments and to identify suitable locations for artificial recharge sites. Rates of winter streamflow infiltration along stream channels are estimated based on the cooling effect of infiltrated water on streambed sediments, combined with the simulation of two-dimensional fluid and heat transport using the computer program VS2DH. The cooling effect of ground water is determined by measuring ground temperatures at regular intervals beneath stream channels and nearby channel banks in order to calculate temperature-depth profiles. Additional data inputs included the physical, hydraulic, and thermal properties of unsaturated alluvium, and monthly ground temperature measurements over an annual cycle. Observed temperatures and simulation results can provide estimates of the minimum threshold for deep infiltration, the variability of infiltration along stream channels, and also the frequency of infiltration events.

  1. Pipeline heating method based on optimal control and state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail: fvianna@petrobras.com.br; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail: helcio@mecanica.ufrj.br; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail: dulikrav@fiu.edu

    2010-07-01

    In the production of oil and gas wells in deep waters, the flow of hydrocarbons through pipelines is a challenging problem. This environment presents high hydrostatic pressures and low seabed temperatures, which can favor the formation of solid deposits that, in critical operating conditions such as unplanned shutdowns, may result in pipeline blockage and consequently incur large financial losses. There are different methods to protect the system, but nowadays thermal insulation and chemical injection are the standard solutions normally used. An alternative method of flow assurance is to heat the pipeline. This concept, known as an active heating system, aims at keeping the temperature of the produced fluid above a safe reference level in order to avoid the formation of solid deposits. The objective of this paper is to introduce a Bayesian statistical approach for the state estimation problem, in which the state variables are the transient temperatures within a pipeline cross-section, and to use optimal control theory as a design tool for a typical heating system during a simulated shutdown condition. An application example is presented to illustrate how Bayesian filters can be used to reconstruct the temperature field from temperature measurements supposedly available on the external surface of the pipeline. The temperatures predicted with the Bayesian filter are then utilized in a control approach for a heating system used to maintain the temperature within the pipeline above the critical temperature of formation of solid deposits. The physical problem consists of a pipeline cross-section represented by a circular domain with four points over the pipe wall representing heating cables. The fluid is considered stagnant, homogeneous, isotropic, and with constant thermo-physical properties. The mathematical formulation governing the direct problem was solved with the finite volume method and for the solution of the state estimation problem
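
    For illustration, a linear Kalman filter, one member of the Bayesian filter family the paper refers to, can reconstruct unmeasured temperatures from noisy surface readings. The two-state thermal model and all covariances below are invented for the sketch and are not the paper's finite-volume model.

```python
import numpy as np

# Minimal linear Kalman filter sketch: estimate inner/outer pipeline wall
# temperatures while measuring only the noisy outer surface.
A = np.array([[0.99, 0.01],
              [0.02, 0.97]])       # assumed discrete-time thermal dynamics
H = np.array([[0.0, 1.0]])         # we only measure the outer surface
Q = 1e-4 * np.eye(2)               # process noise covariance
R = np.array([[0.25]])             # measurement noise covariance

x = np.array([80.0, 60.0])         # initial temperature estimate [C]
P = np.eye(2)

def kf_step(x, P, z):
    # Predict
    x = A @ x
    P = A @ P @ A.T + Q
    # Update with measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [59.8, 59.1, 58.7, 58.2]:  # synthetic cooling measurements
    x, P = kf_step(x, P, np.array([z]))
    print(x)                         # estimated inner/outer wall temperatures
```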

  2. Clustering via Kernel Decomposition

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

    Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods, as is demonstrated on a number of diverse data sets.
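
    A compact sketch of this pipeline, under assumptions: a Gaussian-kernel affinity matrix is factorized with nonnegative matrix factorization, and the row-normalized factors are read as posterior class-membership probabilities. The kernel width and number of clusters are fixed here rather than cross-validated.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.decomposition import NMF

# Two synthetic, well-separated point clouds.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

K = rbf_kernel(X, X, gamma=1.0 / (2 * 0.5**2))    # affinity, kernel width 0.5
W = NMF(n_components=2, init="nndsvda", max_iter=500).fit_transform(K)
posteriors = W / W.sum(axis=1, keepdims=True)     # rows sum to one
labels = posteriors.argmax(axis=1)
print(labels[:5], labels[-5:])                    # the two clouds separate
```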

  3. SPECTRAL data-based estimation of soil heat flux

    Science.gov (United States)

    Singh, R.K.; Irmak, A.; Walter-Shea, Elizabeth; Verma, S.B.; Suyker, A.E.

    2011-01-01

    Numerous existing spectral-based soil heat flux (G) models have shown wide variation in performance for maize and soybean cropping systems in Nebraska, indicating the need for localized calibration and model development. The objectives of this article are to develop a semi-empirical model to estimate G from a normalized difference vegetation index (NDVI) and net radiation (Rn) for maize (Zea mays L.) and soybean (Glycine max L.) fields in the Great Plains, and to assess the suitability of the developed model for estimating G under similar and different soil and management conditions. Soil heat fluxes measured in both irrigated and rainfed fields in eastern and south-central Nebraska were used for model development and validation. An exponential model that uses NDVI and Rn was found to be the best for estimating G, based on r2 values. The effects of geographic location, crop, and water management practices were used to develop semi-empirical models under four case studies. Each case study has the same exponential model structure but a different set of coefficients and exponents to represent the crop, soil, and management practices. Results showed that the semi-empirical models can be used effectively for G estimation at nearby fields with similar soil properties for independent years, regardless of differences in crop type, crop rotation, and irrigation practices, provided that the crop residue from the previous year is more than 4000 kg/ha. The coefficients calibrated from particular fields can be used at nearby fields in order to capture temporal variation in G. However, there is a need for further investigation of the models to account for the interaction effects of crop rotation and irrigation. Validation at an independent site having different soil and crop management practices showed the limitation of the semi-empirical model in estimating G under different soil and environment conditions.
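
    A hedged sketch of calibrating an exponential G model of the general form the article describes, G = Rn · a · exp(b · NDVI), by least squares; the exact functional form, coefficients, and data below are illustrative, not the published ones.

```python
import numpy as np
from scipy.optimize import curve_fit

def g_model(X, a, b):
    ndvi, rn = X
    return rn * a * np.exp(b * ndvi)

# Synthetic calibration data standing in for field measurements.
rng = np.random.default_rng(3)
ndvi = rng.uniform(0.2, 0.9, 80)
rn = rng.uniform(300, 700, 80)                  # net radiation [W/m2]
g_obs = rn * 0.30 * np.exp(-2.0 * ndvi) + rng.normal(0, 5, 80)

(a, b), _ = curve_fit(g_model, (ndvi, rn), g_obs, p0=(0.3, -2.0))
print(f"a = {a:.3f}, b = {b:.3f}")              # recovered coefficients
g_pred = g_model((ndvi, rn), a, b)              # soil heat flux estimates
```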

  4. Series load induction heating inverter state estimator using Kalman filter

    Directory of Open Access Journals (Sweden)

    Szelitzky T.

    2011-12-01

    Full Text Available LQR and H2 controllers require access to the states of the controlled system. The modeling method based on describing functions with Fourier series results in a model with immeasurable states. For this reason, we propose a Kalman filter based state estimator, which not only filters the input signals but also computes the unobservable states of the system. The algorithm of the filter was implemented in LabVIEW v8.6 and tested on recorded data obtained from a 10-40 kHz series load frequency controlled induction heating inverter.

  5. Kernel Factory: An Ensemble of Kernel Machines

    OpenAIRE

    M. BALLINGS; D. VAN DEN POEL

    2012-01-01

    We propose an ensemble method for kernel machines. The training data is randomly split into a number of mutually exclusive partitions defined by a row and column parameter. Each partition forms an input space and is transformed by a kernel function into a kernel matrix K. Subsequently, each K is used as training data for a base binary classifier (Random Forest). This results in a number of predictions equal to the number of partitions. A weighted average combines the predictions into one fina...
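
    A rough sketch of the recipe under simplifying assumptions: row partitions only, a single RBF kernel in place of a kernel per partition, and uniform weights in place of the learned weighted average.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics.pairwise import rbf_kernel

# Synthetic binary classification data.
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Split the training rows into mutually exclusive partitions.
n_parts = 3
idx = rng.permutation(len(X))
partitions = np.array_split(idx, n_parts)

models, anchors = [], []
for part in partitions:
    K = rbf_kernel(X[part], X[part])          # kernel matrix for this partition
    models.append(RandomForestClassifier(n_estimators=50, random_state=0).fit(K, y[part]))
    anchors.append(X[part])                   # needed to kernelize new data

# One prediction per partition, combined by (uniform) averaging.
X_new = rng.normal(size=(10, 5))
probs = np.mean([m.predict_proba(rbf_kernel(X_new, A))[:, 1]
                 for m, A in zip(models, anchors)], axis=0)
print((probs > 0.5).astype(int))
```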

  6. Estimation of respiratory heat flows in prediction of heat strain among Taiwanese steel workers

    Science.gov (United States)

    Chen, Wang-Yi; Juang, Yow-Jer; Hsieh, Jung-Yu; Tsai, Perng-Jy; Chen, Chen-Peng

    2017-01-01

    The International Organization for Standardization (ISO) 7933 standard provides evaluation of required sweat rate (RSR) and predicted heat strain (PHS). This study examined and validated the approximations in these models estimating respiratory heat flows (RHFs) via convection (Cres) and evaporation (Eres) for application to Taiwanese foundry workers. The influence of changes in the RHF approximations on the validity of heat strain prediction in these models was also evaluated. The metabolic energy consumption and physiological quantities of these workers performing at different workloads under elevated wet-bulb globe temperature (WBGT, 30.3 ± 2.5 °C) were measured on-site and used in the calculation of RHFs and indices of heat strain. As the results show, the RSR model overestimated Cres for Taiwanese workers by approximately 3% and underestimated Eres by 8%. The Cres approximation in the PHS model closely predicted the convective RHF, while the Eres approximation over-predicted by 11%. Linear regressions provided a better fit in the Cres approximation (R2 = 0.96) than in the Eres approximation (R2 ≤ 0.85) in both models. The predicted Cres deviated increasingly from the observed value when the WBGT reached 35 °C. The deviations of the RHFs observed for the workers from those predicted using the RSR or PHS models did not significantly alter the heat loss via the skin, as the RHFs were in general less than 5% of the metabolic heat consumption. Validation of these approximations considering the thermo-physiological responses of local workers is necessary for application in scenarios of significant heat exposure.

  7. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    Energy Technology Data Exchange (ETDEWEB)

    Fan, J; Fan, J; Hu, W; Wang, J [Fudan University Shanghai Cancer Center, Shanghai, Shanghai (China)

    2016-06-15

    Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), for use in radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. Training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For a new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is used to estimate the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organ at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are used as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated on rectum, breast, and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between the two DVHs for each cancer type, with an average relative point-wise difference of about 5%, within the clinically acceptable extent. Conclusion: According to the results of this study, our method can be used to predict clinically acceptable DVHs and has the ability to evaluate the quality and consistency of treatment planning.
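
    A small sketch of the idea, reduced to a single predictive feature (distance to target) with synthetic data: estimate the joint density p(distance, dose) with a 2D KDE, condition on each new-patient voxel distance, average, and integrate to a DVH.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic training data: dose falls off with distance from the target.
rng = np.random.default_rng(5)
dist = rng.uniform(0, 40, 2000)                            # voxel distances [mm]
dose = 60 * np.exp(-dist / 12) + rng.normal(0, 2, 2000)    # doses [Gy]

joint = gaussian_kde(np.vstack([dist, dose]))              # 2D KDE of p(distance, dose)

dose_grid = np.linspace(0, 70, 141)

def predicted_dvh(new_dists):
    # p(dose | new patient) = average over voxels of p(dose | distance)
    pdf = np.zeros_like(dose_grid)
    for d in new_dists:
        cond = joint(np.vstack([np.full_like(dose_grid, d), dose_grid]))
        pdf += cond / cond.sum()
    pdf /= len(new_dists)
    return np.cumsum(pdf[::-1])[::-1]                      # volume fraction >= dose

print(predicted_dvh(rng.uniform(0, 40, 50))[:5])
```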

  8. Multiple Weighted Estimates for Vector-Valued Multilinear Singular Integrals with Non-Smooth Kernels and Its Commutators

    Directory of Open Access Journals (Sweden)

    Dongxiang Chen

    2013-01-01

    commutator and iterated commutator generated by the vector-valued multilinear operator and BMO functions. By the weighted estimates for a class of new variant maximal and sharp maximal functions, the multiple weighted norm inequalities for such operators are obtained.

  9. RTOS kernel in portable electrocardiograph

    Science.gov (United States)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All of the medical device's digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the PIC18F4550 microcontroller, in which the uC/OS-II RTOS can be embedded. The decision to use the kernel is based on its benefits, its license for educational use, and its intrinsic time control and peripheral management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and for evaluating the resources used by each process. After this feasibility analysis, the cyclic code was migrated to a structure based on separate processes, or tasks, able to synchronize on events, resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on an RTOS.

  10. The Mondrian Kernel

    OpenAIRE

    Balog, Matej; Lakshminarayanan, B.; Ghahramani, Zoubin; Roy, DM; Teh, YW

    2016-01-01

    We introduce the Mondrian kernel, a fast random feature approximation to the Laplace kernel. It is suitable for both batch and online learning, and admits a fast kernel-width-selection procedure as the random features can be re-used efficiently for all kernel widths. The features are constructed by sampling trees via a Mondrian process [Roy and Teh, 2009], and we highlight the connection to Mondrian forests [Lakshminarayanan et al., 2014], where trees are also sampled via a Mondria...

  11. Analysis and Planning of Ecological Networks Based on Kernel Density Estimations for the Beijing-Tianjin-Hebei Region in Northern China

    Directory of Open Access Journals (Sweden)

    Pengshan Li

    2016-10-01

    Full Text Available With the continued social and economic development of northern China, landscape fragmentation has placed increasing pressure on the ecological system of the Beijing-Tianjin-Hebei (BTH region. To maintain the integrity of ecological processes under the influence of human activities, we must maintain effective connections between habitats and limit the impact of ecological isolation. In this paper, landscape elements were identified based on a kernel density estimation, including forests, grasslands, orchards and wetlands. The spatial configuration of ecological networks was analysed by the integrated density index, and a natural breaks classification was performed for the landscape type data and the results of the landscape spatial distribution analysis. The results showed that forest and grassland are the primary constituents of the core areas and act as buffer zones for the region’s ecological network. Rivers, as linear patches, and orchards, as stepping stones, form the main body of the ecological corridors, and isolated elements are distributed mainly in the plain area. Orchards have transition effects. Wetlands act as connections between different landscapes in the region. Based on these results, we make suggestions for the protection and planning of ecological networks. This study can also provide guidance for the coordinated development of the BTH region.

  12. Online Capacity Estimation of Lithium-Ion Batteries Based on Novel Feature Extraction and Adaptive Multi-Kernel Relevance Vector Machine

    National Research Council Canada - National Science Library

    Yang Zhang; Bo Guo

    2015-01-01

    .... An adaptive multi-kernel relevance machine (MKRVM) based on accelerated particle swarm optimization algorithm is used to determine the optimal parameters of MKRVM and characterize the relationship between extracted features and battery capacity...

  13. Semisupervised kernel matrix learning by kernel propagation.

    Science.gov (United States)

    Hu, Enliang; Chen, Songcan; Zhang, Daoqiang; Yin, Xuesong

    2010-11-01

    The goal of semisupervised kernel matrix learning (SS-KML) is to learn a kernel matrix on all the given samples on which just a little supervised information, such as class label or pairwise constraint, is provided. Despite extensive research, the performance of SS-KML still leaves some space for improvement in terms of effectiveness and efficiency. For example, a recent pairwise constraints propagation (PCP) algorithm has formulated SS-KML into a semidefinite programming (SDP) problem, but its computation is very expensive, which undoubtedly restricts PCP's scalability in practice. In this paper, a novel algorithm, called kernel propagation (KP), is proposed to improve the comprehensive performance in SS-KML. The main idea of KP is first to learn a small-sized sub-kernel matrix (named seed-kernel matrix) and then propagate it into a larger-sized full-kernel matrix. Specifically, the implementation of KP consists of three stages: 1) separate the supervised sample (sub)set X(l) from the full sample set X; 2) learn a seed-kernel matrix on X(l) through solving a small-scale SDP problem; and 3) propagate the learnt seed-kernel matrix into a full-kernel matrix on X. Furthermore, following the idea in KP, we naturally develop two conveniently realizable out-of-sample extensions for KML: one is batch-style extension, and the other is online-style extension. The experiments demonstrate that KP is encouraging in both effectiveness and efficiency compared with three state-of-the-art algorithms and its related out-of-sample extensions are promising too.

  14. THE ESTIMATION OF EFFICIENCY OF THE LADLES HEATING PROCESS

    OpenAIRE

    Wnęk, Mariusz; Rozpondek, Maciej

    2016-01-01

    The paper presents a system for drying and heating metallurgical ladles. The ladle heating parameters significantly affect the metallurgical processes. Properly heating the ceramic ladle lining allows the steel temperature in the furnace to be reduced, which lowers energy consumption and yields an economic benefit. The adopted drying and heating rate of the ladle depends on the ladle refractory lining (alkaline or aluminosilicate). The temperature field uniformity of ceram...

  15. Improved estimates of ocean heat content from 1960 to 2015.

    Science.gov (United States)

    Cheng, Lijing; Trenberth, Kevin E; Fasullo, John; Boyer, Tim; Abraham, John; Zhu, Jiang

    2017-03-01

    Earth's energy imbalance (EEI) drives the ongoing global warming and can best be assessed across the historical record (that is, since 1960) from ocean heat content (OHC) changes. An accurate assessment of OHC is a challenge, mainly because of insufficient and irregular data coverage. We provide updated OHC estimates with the goal of minimizing associated sampling error. We performed a subsample test, in which subsets of data during the data-rich Argo era are colocated with locations of earlier ocean observations, to quantify this error. Our results provide a new OHC estimate with an unbiased mean sampling error and with variability on decadal and multidecadal time scales (signal) that can be reliably distinguished from sampling error (noise) with signal-to-noise ratios higher than 3. The inferred integrated EEI is greater than that reported in previous assessments and is consistent with a reconstruction of the radiative imbalance at the top of atmosphere starting in 1985. We found that changes in OHC are relatively small before about 1980; since then, OHC has increased fairly steadily and, since 1990, has increasingly involved deeper layers of the ocean. In addition, OHC changes in six major oceans are reliable on decadal time scales. All ocean basins examined have experienced significant warming since 1998, with the greatest warming in the southern oceans, the tropical/subtropical Pacific Ocean, and the tropical/subtropical Atlantic Ocean. This new look at OHC and EEI changes over time provides greater confidence than previously possible, and the data sets produced are a valuable resource for further study.

  16. Estimation of transient heat flux density during the heat supply of a catalytic wall steam methane reformer

    Science.gov (United States)

    Settar, Abdelhakim; Abboudi, Saïd; Madani, Brahim; Nebbali, Rachid

    2017-08-01

    Due to the endothermic nature of the steam methane reforming reaction, the process is often limited by the heat transfer behavior in the reactors. Poor thermal behavior sometimes leads to slow reaction kinetics, which is characterized by the presence of cold spots in the catalytic zones. Within this framework, the present work consists of a numerical investigation, in conjunction with an experimental one, of the one-dimensional heat transfer phenomenon during the heat supply of a catalytic-wall reactor designed for hydrogen production. The studied reactor is inserted in an electric furnace where the heat requirement of the endothermic reaction is supplied by an electric heating system. During the heat supply, an unknown heat flux density, received by the reactive flow, is estimated using inverse methods. On the basis of the catalytic-wall reactor model, an experimental setup is engineered in situ to measure the temperature distribution. Thereafter, the measurements are injected into the numerical heat flux estimation procedure, which is based on the Function Specification Method (FSM). The measured and estimated temperatures are compared, and the heat flux density which crosses the reactor wall is determined.
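
    Beck's function specification idea, which the FSM builds on, can be sketched in a few lines under strong simplifications: a lumped one-node forward model stands in for the reactor wall, and the flux is held constant over r future time steps.

```python
import numpy as np

def forward(T0, q, h_steps, C=500.0, dt=1.0):
    # Toy model: dT/dt = q / C, temperature responds linearly to the flux.
    return T0 + np.arange(1, h_steps + 1) * dt * q / C

def estimate_flux(T_meas, r=3, C=500.0, dt=1.0):
    q_hat, T = [], T_meas[0]
    for k in range(len(T_meas) - r):
        # Least-squares flux over the next r ("future") measurements.
        S = np.arange(1, r + 1) * dt / C          # sensitivity coefficients
        dT = T_meas[k + 1:k + 1 + r] - T
        q = (S @ dT) / (S @ S)
        q_hat.append(q)
        T = forward(T, q, 1)[0]                   # advance one step with q
    return np.array(q_hat)

# Synthetic experiment: a slowly varying true flux plus measurement noise.
t = np.arange(50.0)
true_q = 200.0 * np.sin(t / 10.0) ** 2
T_true = 20.0 + np.cumsum(true_q) / 500.0
T_meas = T_true + np.random.default_rng(6).normal(0, 0.05, t.size)
print(estimate_flux(T_meas)[:5])                  # recovered flux history
```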

  18. Combining Satellite Microwave Radiometer and Radar Observations to Estimate Atmospheric Latent Heating Profiles

    Science.gov (United States)

    Grecu, Mircea; Olson, William S.; Shie, Chung-Lin; L'Ecuyer, Tristan S.; Tao, Wei-Kuo

    2009-01-01

    In this study, satellite passive microwave sensor observations from the TRMM Microwave Imager (TMI) are utilized to make estimates of latent + eddy sensible heating rates (Q1-QR) in regions of precipitation. The TMI heating algorithm (TRAIN) is calibrated, or "trained", using relatively accurate estimates of heating based upon spaceborne Precipitation Radar (PR) observations collocated with the TMI observations over a one-month period. The heating estimation technique is based upon a previously described Bayesian methodology, but with improvements in supporting cloud-resolving model simulations, an adjustment of precipitation echo tops to compensate for model biases, and a separate scaling of convective and stratiform heating components that leads to an approximate balance between estimated vertically-integrated condensation and surface precipitation. Estimates of Q1-QR from TMI compare favorably with the PR training estimates and show only modest sensitivity to the cloud-resolving model simulations of heating used to construct the training data. Moreover, the net condensation in the corresponding annual mean satellite latent heating profile is within a few percent of the annual mean surface precipitation rate over the tropical and subtropical oceans where the algorithm is applied. Comparisons of Q1 produced by combining TMI Q1-QR with independently derived estimates of QR show reasonable agreement with rawinsonde-based analyses of Q1 from two field campaigns, although the satellite estimates exhibit heating profile structure with sharper and more intense heating peaks than the rawinsonde estimates.

  19. Estimation of surface heat flux for ablation and charring of thermal protection material

    Science.gov (United States)

    Qian, Wei-qi; He, Kai-feng; Zhou, Yu

    2016-07-01

    Ablation of the thermal protection material of a reentry hypersonic flight vehicle is a complex physical and chemical process. Estimating the surface heat flux from internal temperature measurements is much more complex than the conventional inverse heat conduction problem. In this paper, a two-layer pyrogeneration-plane ablation model is utilized to model the ablation and charring of the material, the finite control volume method is modified to suit the numerical simulation of the heat conduction equation with variable geometry, and the conjugate gradient method (CGM), along with the associated adjoint problem, is developed to estimate the surface heat flux. The estimation method is first verified with a numerical example; the results show that the method is feasible and robust. The larger the measurement noise, the greater the deviation of the estimated result from the exact value, and the measurement noise of the ablated surface position has a significant and more direct influence on the estimated surface heat flux. Furthermore, the estimation method is used to analyze experimental data from the ablation of the blunt carbon-phenolic material Narmco4028 in an arc-heater. The estimated surface heat flux agrees with the heating power of the arc-heater, and the estimation method is shown to be effective and promising for treating engineering heat conduction problems with ablation.

  20. Regional heat flux over the NOPEX area estimated from the evolution of the mixed-layer

    DEFF Research Database (Denmark)

    Gryning, Sven-Erik; Batchvarova, E.

    1999-01-01

    of forest, agricultural fields, mires and lakes within the boreal zone, was determined for 3 days of the campaign in 1994. It was found to be lower than the heat flux over forest and higher than the heat flux over agricultural fields. The regional heat flux estimated by the mixed-layer evolution method...

  1. Estimation procedure of the efficiency of the heat network segment

    Science.gov (United States)

    Polivoda, F. A.; Sokolovskii, R. I.; Vladimirov, M. A.; Shcherbakov, V. P.; Shatrov, L. A.

    2017-07-01

    An extensive city heat network contains many segments, and each segment transfers heat energy with a different efficiency. This work proposes an original technical approach based on evaluating an energy efficiency function for a heat network segment, interpreted through two hyperbolic functions in the form of a transcendental equation. In essence, the problem studied is how the efficiency of the heat network changes with ambient temperature. Criterial dependences for evaluating the efficiency of a given network segment, and for finding the parameters for optimal control of heat supply to remote users, were derived with the help of functional analysis methods. In general, the efficiency function of the heat network segment is interpreted as a multidimensional surface, which allows it to be illustrated graphically. It was shown that solution of the inverse problem is possible as well: the required heating agent flow rate and temperature may be found from a specified segment efficiency and ambient temperature, and requirements on heat insulation and pipe diameters may be formulated. The calculation results were obtained in a strict analytical form, which allows the derived functional dependences to be investigated for extremums (maximums) under given external parameters. It is concluded that this calculation procedure is expedient in two practically important cases: for an already built network, where only the heating agent flow rate and pipe temperatures can be changed, and for a projected (under construction) network, where changes to the material parameters of the network are possible. The procedure allows pipe diameter and length, insulation type, etc., to be refined. Pipe length may be considered as the independent parameter for calculations; optimization of this parameter is made in

  2. COMPARISON OF KERNEL AND SPLINE SMOOTHING

    OpenAIRE

    Erfiani Erfiani; Aji Hamim Wigena; Aunuddin Aunuddin

    2014-01-01

    Kernel and spline density estimation are both forms of nonparametric density estimation. The behavior of the spline smoother lies midway between that of a kernel smoother with constant bandwidth and one with non-constant bandwidth. For large n and a given λ, the spline weight function can be approximated by a kernel function. An empirical comparison of spline and kernel smoothers was carried out using simulated data, tested over various kernel window widths and over spline functions with various numbers of kn...

  3. Cooling Load Estimation in the Building Based On Heat Sources

    Science.gov (United States)

    Chairani; Sulistyo, S.; Widyawan

    2017-05-01

    Heating, ventilation and air conditioning (HVAC) is the largest source of energy consumption in buildings. In this research, we discuss the cooling load of a room, considering different heat sources and the number of occupants. The cooling load is affected by external and internal heat sources. The external cooling load in this discussion includes outdoor/exterior convection using the DOE-2 algorithm, heat calculation using the Thermal Analysis Research Program (TARP), and Conduction Transfer Functions (CTF). The internal cooling load is calculated from the activity of the occupants in the office, the number of occupants, the heat gain from lighting, and the heat gain from electrical equipment. The weather data used are for Surakarta, and the design day used is for Jakarta. We use ASHRAE standards for the building materials and for the metabolic rates of the occupants during activity. The results show that the number of occupants influences the cooling load: a larger number of occupants produces a correspondingly larger cooling load.
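
    The internal part reduces to simple arithmetic once unit gains are fixed; the occupancy, lighting, and equipment values below are typical placeholders, not the paper's inputs.

```python
# Back-of-envelope internal cooling load, assuming ASHRAE-style unit gains.
occupants = 12
q_person = 120.0        # W per person, office activity (sensible + latent)
lighting = 10.0         # W/m2
equipment = 15.0        # W/m2
area = 80.0             # m2

internal_load = occupants * q_person + (lighting + equipment) * area
print(f"Internal cooling load: {internal_load:.0f} W")  # 1440 + 2000 = 3440 W
```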

  4. Parabolic sublinear operators with rough kernel generated by parabolic Calderón-Zygmund operators and parabolic local Campanato space estimates for their commutators on the parabolic generalized local Morrey spaces

    Directory of Open Access Journals (Sweden)

    Gurbuz Ferit

    2016-01-01

    Full Text Available In this paper, the author introduces parabolic generalized local Morrey spaces and establishes the boundedness of a large class of parabolic rough operators on them. The author also establishes parabolic local Campanato space estimates for their commutators on parabolic generalized local Morrey spaces. As special cases, the corresponding results for parabolic sublinear operators with rough kernels and their commutators can be deduced. Finally, the parabolic Marcinkiewicz operator, which satisfies the conditions of these theorems, is considered as an example.

  5. Anthropogenic Heat Flux Estimation from Space: Results of the second phase of the URBANFLUXES Project

    Science.gov (United States)

    Chrysoulakis, Nektarios; Marconcini, Mattia; Gastellu-Etchegorry, Jean-Philippe; Grimmond, Sue; Feigenwinter, Christian; Lindberg, Fredrik; Del Frate, Fabio; Klostermann, Judith; Mitraka, Zina; Esch, Thomas; Landier, Lucas; Gabey, Andy; Parlow, Eberhard; Olofson, Frans

    2017-04-01

    The H2020-Space project URBANFLUXES (URBan ANthropogenic heat FLUX from Earth observation Satellites) investigates the potential of the Copernicus Sentinels to retrieve the anthropogenic heat flux, a key component of the Urban Energy Budget (UEB). URBANFLUXES advances the current knowledge of the impacts of UEB fluxes on the urban heat island and, consequently, on energy consumption in cities. In URBANFLUXES, the anthropogenic heat flux (QF) is estimated as a residual of the UEB. The remaining UEB components (the net all-wave radiation, the net change in heat storage, and the turbulent sensible and latent heat fluxes) are therefore independently estimated from Earth Observation (EO), whereas the advection term is included in the error of the anthropogenic heat flux estimation from the UEB closure. The Discrete Anisotropic Radiative Transfer (DART) model is employed to improve the estimation of the net all-wave radiation balance, whereas the Element Surface Temperature Method (ESTM), adjusted to satellite observations, is used to improve the estimation of the net change in heat storage. The estimation of the turbulent sensible and latent heat fluxes is based on the Aerodynamic Resistance Method (ARM). Based on these outcomes, QF is estimated by regressing the sum of the turbulent heat fluxes against the available energy. In-situ flux measurements are used to evaluate the URBANFLUXES outcomes, and uncertainties are specified and analyzed. URBANFLUXES is expected to prepare the ground for further innovative exploitation of EO in scientific activities (climate variability studies at local and regional scales) and in future and emerging applications (sustainable urban planning, mitigation technologies) to benefit climate change mitigation/adaptation. This study presents the results of the second phase of the project; detailed information on URBANFLUXES is available at: http://urbanfluxes.eu
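
    The residual computation itself is a one-liner once the other UEB terms are known; the flux values below are illustrative placeholders.

```python
# Urban energy balance residual, as the project describes:
#   Q* + Q_F = Q_H + Q_E + dQ_S   =>   Q_F = Q_H + Q_E + dQ_S - Q*
# Q*: net all-wave radiation, Q_H/Q_E: turbulent sensible/latent heat fluxes,
# dQ_S: net change in heat storage (all W/m2, assumed values).
q_star = 450.0   # net all-wave radiation
q_h = 210.0      # sensible heat flux
q_e = 120.0      # latent heat flux
dq_s = 150.0     # net change in heat storage

q_f = q_h + q_e + dq_s - q_star
print(f"Anthropogenic heat flux Q_F = {q_f:.0f} W/m2")   # 30 W/m2
```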

  6. A Revised Estimate of Earth's Surface Heat Flux: 47TW ± 2TW

    Science.gov (United States)

    Davies, J.; Davies, R.

    2011-12-01

    Earth's surface heat flux provides a fundamental constraint on solid Earth dynamics. However, deriving an estimate of the total surface heat flux is complex, owing to the inhomogeneous distribution of heat flow measurements and to difficulties in measuring heat flux in young oceanic crust that arise from hydrothermal circulation. We derive a revised estimate of Earth's surface heat flux using a database of 38,347 measurements (provided by G. Laske and G. Masters), representing a 55% increase in the number of measurements used previously, and the methods of Geographical Information Science (GIS) (Davies & Davies, 2010). To account for hydrothermal circulation in young oceanic crust, we use a model estimate of the heat flux, following the work of Jaupart et al., 2007; for the rest of the globe, in an attempt to overcome the inhomogeneous distribution of measurements, we develop an average for different geological units. Two digital geology data sets are used to define the global geology: (i) continental geology - Hearn et al., 2003; and (ii) the global data set of CCGM - Commission de la Carte Géologique du Monde, 2000. This leads to >93,000 polygons defining Earth's geology. To limit the influence of clustering, we intersect the geology polygons with a 1 by 1 degree (at the equator) equal-area grid. For each geology class, the average heat flow in the resulting polygons is evaluated. The contribution of each geology class to the global surface heat flow is derived by multiplying the estimated surface heat flux by the area of that geology class. The contributions of all geology classes are then summed. For Antarctica we use an estimate based on depth to the Curie temperature, and we include a 1 TW contribution from hot-spots in young ocean crust. Geology classes with fewer than 50 readings are excluded. The raw data suggest that this method of correlating heat flux with geology has some power. Our revised estimate for Earth's global surface heat flux
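
    The aggregation step can be sketched as an area-weighted sum over geology classes; the class names, mean fluxes, and areas below are made-up placeholders, so the printed total deliberately carries no significance.

```python
# Hypothetical geology classes: (mean heat flux [mW/m^2], global area [1e12 m^2]);
# areas sum to roughly Earth's 510e12 m^2 but are otherwise invented.
classes = {
    "shield":      (50.0, 100.0),
    "orogen":      (70.0, 110.0),
    "young_ocean": (130.0, 100.0),   # model-based, hydrothermal correction
    "old_ocean":   (55.0, 200.0),
}

# Multiply each class's average flux by its area, then sum the contributions.
total_w = sum(q * 1e-3 * a * 1e12 for q, a in classes.values())
print(f"Global surface heat flow: {total_w / 1e12:.1f} TW")   # 36.7 TW here
```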

  7. Kernels for structured data

    CERN Document Server

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  8. Estimation of bulk transfer coefficient for latent heat flux (Ce)

    Digital Repository Service at National Institute of Oceanography (India)

    Sadhuram, Y.

    Coefficients in Diabatic Conditions', Boundary-Layer Meteorol. 8, 465-474. Murakami, T., Nakazawa, T., and He, T.: 1984, 'On the 40-50 Day Oscillations During the 1979 Northern Hemisphere Summer. Part II: Heat and Moisture Budget', J. Meteor...

  9. Inverse problem of estimating transient heat transfer rate on external wall of forced convection pipe

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Wen-Lih; Yang, Yu-Ching; Chang, Win-Jin; Lee, Haw-Long [Clean Energy Center, Department of Mechanical Engineering, Kun Shan University, Yung-Kang City, Tainan 710-03 (China)

    2008-08-15

    In this study, a conjugate gradient method based inverse algorithm is applied to estimate the unknown space and time dependent heat transfer rate on the external wall of a pipe system using temperature measurements. It is assumed that no prior information is available on the functional form of the unknown heat transfer rate; hence, the procedure is classified as function estimation in the inverse calculation. The accuracy of the inverse analysis is examined by using simulated exact and inexact temperature measurements. Results show that an excellent estimation of the space and time dependent heat transfer rate can be obtained for the test case considered in this study. (author)
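
    A hedged sketch of the function-estimation idea: parameterize the unknown heat transfer rate at discrete times and minimize the temperature misfit with a conjugate gradient routine. A lumped toy model stands in for the 2D pipe model, and the adjoint-based gradient and regularization details of the paper are omitted.

```python
import numpy as np
from scipy.optimize import minimize

C, dt, n = 800.0, 1.0, 40          # heat capacity, time step, samples (assumed)

def forward(q):
    # Toy wall model: T[k+1] = T[k] + dt * q[k] / C, starting from 20 C.
    return 20.0 + np.concatenate(([0.0], np.cumsum(q[:-1]) * dt / C))

# Synthetic "measurements" produced by a smooth true heat transfer rate.
rng = np.random.default_rng(7)
q_true = 150.0 * np.exp(-((np.arange(n) - 20) / 8.0) ** 2)
T_meas = forward(q_true) + rng.normal(0, 0.02, n)

# Function estimation: no assumed functional form, one unknown per time step.
res = minimize(lambda q: np.sum((forward(q) - T_meas) ** 2),
               x0=np.zeros(n), method="CG")
print(np.round(res.x[18:23], 1))   # recovered rate near the peak (~150)
```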

  10. Simple future weather files for estimating heating and cooling demand

    DEFF Research Database (Denmark)

    Cox, Rimante Andrasiunaite; Drews, Martin; Rode, Carsten

    2015-01-01

    Estimates of the future energy consumption of buildings are becoming increasingly important as a basis for energy management, energy renovation, investment planning, and for determining the feasibility of technologies and designs. Future weather scenarios, where the outdoor climate is usually represented by future weather files, are needed for estimating the future energy consumption. In many cases, however, the practitioner's ability to conveniently provide an estimate of the future energy consumption is hindered by the lack of easily available future weather files. This is, in part, due to the difficulties associated with generating high temporal resolution (hourly) estimates of future changes in air temperature. To address this issue, we investigate if, in the absence of high-resolution data, a weather file constructed from a coarse (annual) estimate of future air temperature change can provide

  11. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  12. Mixture Density Mercer Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — We present a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian mixture...

  13. Estimation of Surface Temperature and Heat Flux by Inverse Heat Transfer Methods Using Internal Temperatures Measured While Radiantly Heating a Carbon/Carbon Specimen up to 1920 F

    Science.gov (United States)

    Pizzo, Michelle; Daryabeigi, Kamran; Glass, David

    2015-01-01

    The ability to solve the heat conduction equation is needed when designing materials to be used on vehicles exposed to extremely high temperatures; e.g. vehicles used for atmospheric entry or hypersonic flight. When using test and flight data, computational methods such as finite difference schemes may be used to solve for both the direct heat conduction problem, i.e., solving between internal temperature measurements, and the inverse heat conduction problem, i.e., using the direct solution to march forward in space to the surface of the material to estimate both surface temperature and heat flux. The completed research first discusses the methods used in developing a computational code to solve both the direct and inverse heat transfer problems using one dimensional, centered, implicit finite volume schemes and one dimensional, centered, explicit space marching techniques. The developed code assumed the boundary conditions to be specified time varying temperatures and also considered temperature dependent thermal properties. The completed research then discusses the results of analyzing temperature data measured while radiantly heating a carbon/carbon specimen up to 1920 F. The temperature was measured using thermocouple (TC) plugs (small carbon/carbon material specimens) with four embedded TC plugs inserted into the larger carbon/carbon specimen. The purpose of analyzing the test data was to estimate the surface heat flux and temperature values from the internal temperature measurements using direct and inverse heat transfer methods, thus aiding in the thermal and structural design and analysis of high temperature vehicles.
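
    A minimal sketch of the explicit space-marching step, assuming constant properties and a uniform grid: rearrange the discretized heat equation to solve for the node one step closer to the surface from two measured interior temperature histories.

```python
import numpy as np

alpha, dx, dt = 1e-6, 0.002, 0.5           # diffusivity [m2/s], grid [m], step [s]
nt = 200

# Synthetic interior measurements at two depths, x1 = x2 - dx.
rng = np.random.default_rng(8)
t = np.arange(nt) * dt
T2 = 20.0 + 5.0 * np.sin(t / 20.0)          # deeper thermocouple
T1 = 20.0 + 4.0 * np.sin(t / 20.0 - 0.2)    # shallower thermocouple

# Heat equation dT1/dt = alpha * (T0 - 2*T1 + T2) / dx^2, rearranged for the
# unknown surface-side node T0 one grid step beyond T1.
dT1_dt = np.gradient(T1, dt)
T0 = dx**2 / alpha * dT1_dt + 2.0 * T1 - T2
print(T0[:5])                               # estimated surface-side temperatures
```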

  14. Estimation of surface heat flux and surface temperature during inverse heat conduction under varying spray parameters and sample initial temperature.

    Science.gov (United States)

    Aamir, Muhammad; Liao, Qiang; Zhu, Xun; Aqeel-ur-Rehman; Wang, Hong; Zubair, Muhammad

    2014-01-01

    An experimental study was carried out to investigate the effects of inlet pressure, sample thickness, initial sample temperature, and temperature sensor location on the surface heat flux, surface temperature, and surface ultrafast cooling rate using stainless steel samples of diameter 27 mm and thickness (mm) 8.5, 13, 17.5, and 22, respectively. Inlet pressure was varied from 0.2 MPa to 1.8 MPa, while sample initial temperature varied from 600°C to 900°C. Beck's sequential function specification method was utilized to estimate surface heat flux and surface temperature. Inlet pressure has a positive effect on surface heat flux (SHF) within a critical value of pressure. Thickness of the sample affects the maximum achieved SHF negatively. Surface heat flux as high as 0.4024 MW/m(2) was estimated for a thickness of 8.5 mm. Insulation effects of vapor film become apparent in the sample initial temperature range of 900°C causing reduction in surface heat flux and cooling rate of the sample. A sensor location near to quenched surface is found to be a better choice to visualize the effects of spray parameters on surface heat flux and surface temperature. Cooling rate showed a profound increase for an inlet pressure of 0.8 MPa.

  17. Kernelized Bayesian Matrix Factorization.

    Science.gov (United States)

    Gönen, Mehmet; Kaski, Samuel

    2014-10-01

    We extend kernelized matrix factorization with a full-Bayesian treatment and with an ability to work with multiple side information sources expressed as different kernels. Kernels have been introduced to integrate side information about the rows and columns, which is necessary for making out-of-matrix predictions. We discuss specifically binary output matrices but extensions to real-valued matrices are straightforward. We extend the state of the art in two key aspects: (i) A full-conjugate probabilistic formulation of the kernelized matrix factorization enables an efficient variational approximation, whereas full-Bayesian treatments are not computationally feasible in the earlier approaches. (ii) Multiple side information sources are included, treated as different kernels in multiple kernel learning which additionally reveals which side sources are informative. We then show that the framework can also be used for supervised and semi-supervised multilabel classification and multi-output regression, by considering samples and outputs as the domains where matrix factorization operates. Our method outperforms alternatives in predicting drug-protein interactions on two data sets. On multilabel classification, our algorithm obtains the lowest Hamming losses on 10 out of 14 data sets compared to five state-of-the-art multilabel classification algorithms. We finally show that the proposed approach outperforms alternatives in multi-output regression experiments on a yeast cell cycle data set.

  18. An inverse method to estimate stem surface heat flux in wildland fires

    Science.gov (United States)

    Anthony S. Bova; Matthew B. Dickinson

    2009-01-01

    Models of wildland fire-induced stem heating and tissue necrosis require accurate estimates of inward heat flux at the bark surface. Thermocouple probes or heat flux sensors placed at a stem surface do not mimic the thermal response of tree bark to flames. We show that data from thin thermocouple probes inserted just below the bark can be used, by means of a one-...

  19. Estimating Heat and Mass Transfer Processes in Green Roof Systems: Current Modeling Capabilities and Limitations (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Tabares Velasco, P. C.

    2011-04-01

    This presentation discusses estimating heat and mass transfer processes in green roof systems: current modeling capabilities and limitations. Green roofs are 'specialized roofing systems that support vegetation growth on rooftops.'

  20. Geothermal Heat Flux Underneath Ice Sheets Estimated From Magnetic Satellite Data

    DEFF Research Database (Denmark)

    Fox Maule, Cathrine; Purucker, M.E.; Olsen, Nils

    The geothermal heat flux is an important factor in the dynamics of ice sheets and one of the key parameters in the thermal budgets of subglacial lakes. We have used satellite magnetic data to estimate the geothermal heat flux underneath the ice sheets in Antarctica and Greenland

  1. Estimation of shutdown heat generation rates in GHARR-1 due to ...

    African Journals Online (AJOL)

    Fission product decay power and residual fission power generated after shutdown of Ghana Research Reactor-1 (GHARR-1) by a reactivity insertion accident were estimated by solving the decay and residual heat equations. A MATLAB program was developed to simulate the heat generation rates by fission product ...

  2. Sensible heat balance estimates of transient soil ice contents for freezing and thawing conditions

    Science.gov (United States)

    Soil ice content is an important component of winter soil hydrology. The sensible heat balance (SHB) method using measurements from heat pulse probes (HPP) is a possible way to determine transient soil ice content. In a previous study, in situ soil ice content estimates with the SHB method were in...

  3. Methodology for estimation of time-dependent surface heat flux due to cryogen spray cooling.

    Science.gov (United States)

    Tunnell, James W; Torres, Jorge H; Anvari, Bahman

    2002-01-01

    Cryogen spray cooling (CSC) is an effective technique to protect the epidermis during cutaneous laser therapies. Spraying a cryogen onto the skin surface creates a time-varying heat flux, effectively cooling the skin during and following the cryogen spurt. In previous studies mathematical models were developed to predict the human skin temperature profiles during the cryogen spraying time. However, no studies have accounted for the additional cooling due to residual cryogen left on the skin surface following the spurt termination. We formulate and solve an inverse heat conduction (IHC) problem to predict the time-varying surface heat flux both during and following a cryogen spurt. The IHC formulation uses measured temperature profiles from within a medium to estimate the surface heat flux. We implement a one-dimensional sequential function specification method (SFSM) to estimate the surface heat flux from internal temperatures measured within an in vitro model in response to a cryogen spurt. Solution accuracy and experimental errors are examined using simulated temperature data. Heat flux following spurt termination appears substantial; however, it is less than that during the spraying time. The estimated time-varying heat flux can subsequently be used in forward heat conduction models to estimate temperature profiles in skin during and following a cryogen spurt and predict appropriate timing for onset of the laser pulse.

  4. Estimation of boundary heat flux using experimental temperature data in turbulent forced convection flow

    Science.gov (United States)

    Parwani, Ajit K.; Talukdar, Prabal; Subbarao, P. M. V.

    2015-03-01

    Heat flux at the boundary of a duct is estimated using an inverse technique based on the conjugate gradient method (CGM) with an adjoint equation. A two-dimensional inverse problem of hydrodynamically fully developed turbulent forced convection flow is considered. The simulations are performed with temperature data measured in experiments in a wind tunnel. The results show that the present numerical model with CGM is robust and accurate enough to estimate the strength and position of the boundary heat flux.
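
    The essence of the approach can be shown on a toy linear inverse problem; a smoothing matrix stands in for the paper's turbulent-convection solver, and the gradient here is exact rather than obtained from an adjoint equation. All names and values are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      n = 40
      x = np.linspace(0.0, 1.0, n)
      S = np.exp(-((x[:, None] - x[None, :]) / 0.08) ** 2)   # assumed forward operator
      q_true = np.sin(2.0 * np.pi * x) ** 2
      Y = S @ q_true + rng.normal(0.0, 0.01, n)              # noisy "measurements"

      J = lambda q: 0.5 * np.sum((S @ q - Y) ** 2)           # output least squares
      gradJ = lambda q: S.T @ (S @ q - Y)

      # Conjugate-gradient minimization; capping the iteration count plays the
      # role of regularization (discrepancy principle) for this ill-posed fit.
      q_est = minimize(J, np.zeros(n), jac=gradJ, method="CG",
                       options={"maxiter": 30}).x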

  5. Geothermal hydrothermal direct heat use: US market size and market penetration estimates

    Energy Technology Data Exchange (ETDEWEB)

    El Sawy, A.H.; Entingh, D.J.

    1980-09-01

    This study estimates the future regional and national market penetration path of hydrothermal geothermal direct heat applications in the United States. A Technology Substitution Model (MARPEN) is developed and used to estimate the energy market shares captured by low-temperature (50 to 150°C) hydrothermal geothermal energy systems over the period 1985 to 2020. The sensitivity of hydrothermal direct heat market shares to various government hydrothermal commercialization policies is examined. Several substantive recommendations to help accelerate commercialization of geothermal direct heat utilization in the United States are indicated and possible additional analyses are discussed.

  6. Estimation of pressure drop in gasket plate heat exchangers

    Directory of Open Access Journals (Sweden)

    Neagu Anisoara Arleziana

    2016-06-01

    Full Text Available In this paper, we present comparatively different methods of pressure drop calculation for gasket plate heat exchangers (PHEs), using correlations recommended in the literature, applied to industrial data collected from a vegetable oil refinery. The goal of this study was to compare the results obtained with these correlations, in order to choose one or two for the practical purpose of pumping power calculations. We concluded that pressure drop values calculated with the Mulley relationship and the Buonopane & Troupe correlation were close, and Bond's equation gave results fairly close to these, although it slightly underestimates the pressure drop. The Kumar correlation gave results far from all the others, and its application will lead to oversizing. In conclusion, for further calculations we will choose either the Mulley relationship or the Buonopane & Troupe correlation.
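
    A sketch of the calculation these correlations feed into; the friction-factor constants C and m below are placeholders to be taken from whichever correlation (Mulley, Buonopane & Troupe, Bond, Kumar) is adopted, and the port-loss rule of 1.4 velocity heads is the common textbook convention, not something stated in the paper.

      import numpy as np

      def phe_pressure_drop(m_dot, rho, mu, n_channels, b, w, L_eff,
                            d_port, n_passes=1, C=1.0, m=0.25):
          """Channel plus port pressure drop (Pa) for a gasketed PHE,
          with a generic friction correlation f = C / Re**m."""
          G_ch = m_dot / (n_channels * b * w)     # channel mass velocity, kg/m2/s
          D_e = 2.0 * b                           # equivalent diameter of the plate gap
          Re = G_ch * D_e / mu
          f = C / Re**m
          dp_channel = 4.0 * f * (L_eff / D_e) * G_ch**2 / (2.0 * rho) * n_passes
          G_port = m_dot / (np.pi * d_port**2 / 4.0)
          dp_ports = 1.4 * n_passes * G_port**2 / (2.0 * rho)
          return dp_channel + dp_ports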

  7. Global Intercomparison of 12 Land Surface Heat Flux Estimates

    Science.gov (United States)

    Jimenez, C.; Prigent, C.; Mueller, B.; Seneviratne, S. I.; McCabe, M. F.; Wood, E. F.; Rossow, W. B.; Balsamo, G.; Betts, A. K.; Dirmeyer, P. A.

    2011-01-01

    A global intercomparison of 12 monthly mean land surface heat flux products for the period 1993-1995 is presented. The intercomparison includes some of the first emerging global satellite-based products (developed at Paris Observatory, Max Planck Institute for Biogeochemistry, University of California Berkeley, University of Maryland, and Princeton University) and examples of fluxes produced by reanalyses (ERA-Interim, MERRA, NCEP-DOE) and off-line land surface models (GSWP-2, GLDAS CLM/Mosaic/Noah). An intercomparison of the global latent heat flux (Q_le) annual means shows a spread of approximately 20 W/m2 (all-product global average of approximately 45 W/m2). A similar spread is observed for the sensible (Q_h) and net radiative (R_n) fluxes. In general, the products correlate well with each other, helped by the large seasonal variability and common forcing data for some of the products. Expected spatial distributions related to the major climatic regimes and geographical features are reproduced by all products. Nevertheless, large Q_le and Q_h absolute differences are also observed. The fluxes were spatially averaged for 10 vegetation classes. The larger Q_le differences were observed for the rain forest but, when normalized by mean fluxes, the differences were comparable to other classes. In general, the correlations between Q_le and R_n were higher for the satellite-based products compared with the reanalyses and off-line models. The fluxes were also averaged for 10 selected basins. The seasonality was generally well captured by all products, but large differences in the flux partitioning were observed for some products and basins.

  8. Estimating end-use emissions factors for policy analysis: the case of space cooling and heating.

    Science.gov (United States)

    Jacobsen, Grant D

    2014-06-17

    This paper provides the first estimates of end-use-specific emissions factors, which are estimates of the amount of a pollutant that is emitted when a unit of electricity is generated to meet demand from a specific end-use. In particular, this paper provides estimates of emissions factors for space cooling and heating, which are two of the most significant end-uses. The analysis is based on a novel two-stage regression framework that estimates emissions factors specific to cooling or heating by exploiting variation in cooling and heating demand induced by weather variation. Heating is associated with a similar or greater CO2 emissions factor than cooling in all regions. The difference is greatest in the Midwest and Northeast, where the estimated CO2 emissions factor for heating is more than 20% larger than the emissions factor for cooling. The minor differences in emissions factors in other regions, combined with the substantial difference in the demand patterns for cooling and heating, suggest that the use of overall regional emissions factors is reasonable for policy evaluations in certain locations. Accurately quantifying the emissions factors associated with different end-uses across regions will aid in designing improved energy and environmental policies.

  9. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Directory of Open Access Journals (Sweden)

    Hailun Wang

    2017-01-01

    Full Text Available The support vector regression algorithm is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of the mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined into the state vector, so the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, the mixed kernel fusion coefficients, the kernel parameters, and the regression parameters are selected adaptively. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
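
    A minimal sketch of the mixed-kernel idea using scikit-learn's support for callable kernels; the fusion coefficient is fixed here rather than updated by a cubature Kalman filter, and the data and parameter values are illustrative.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

      def mixed_kernel(lmbda=0.7, gamma=0.5, degree=2):
          """Convex combination of an RBF and a polynomial kernel; lmbda is
          the fusion coefficient that the paper tunes adaptively."""
          def K(X, Y):
              return (lmbda * rbf_kernel(X, Y, gamma=gamma)
                      + (1.0 - lmbda) * polynomial_kernel(X, Y, degree=degree))
          return K

      rng = np.random.default_rng(1)
      X = rng.uniform(-3.0, 3.0, size=(200, 1))
      y = np.sinc(X).ravel() + 0.05 * rng.normal(size=200)
      model = SVR(kernel=mixed_kernel(), C=10.0).fit(X, y)
      print("training R^2:", model.score(X, y))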

  10. Estimate of Joule Heating in a Flat Dechirper

    Energy Technology Data Exchange (ETDEWEB)

    Bane, Karl [SLAC National Accelerator Lab., Menlo Park, CA (United States); Stupakov, Gennady [SLAC National Accelerator Lab., Menlo Park, CA (United States); Gjonaj, Erion [Technical Univ. of Darmstadt (Germany)

    2017-02-10

    We have performed Joule power loss calculations for a flat dechirper. We have considered the configurations of the beam on-axis between the two plates—for chirp control—and for the beam especially close to one plate—for use as a fast kicker. Our calculations use a surface impedance approach, one that is valid when corrugation parameters are small compared to aperture (the perturbative parameter regime). In our model we ignore effects of field reflections at the sides of the dechirper plates, and thus expect the results to underestimate the Joule losses. The analytical results were also tested by numerical, time-domain simulations. We find that most of the wake power lost by the beam is radiated out to the sides of the plates. For the case of the beam passing by a single plate, we derive an analytical expression for the broad-band impedance, and—in Appendix B—numerically confirm recently developed, analytical formulas for the short-range wakes. While our theory can be applied to the LCLS-II dechirper with large gaps, for the nominal apertures we are not in the perturbative regime and the reflection contribution to Joule losses is not negligible. With input from computer simulations, we estimate the Joule power loss (assuming bunch charge of 300 pC, repetition rate of 100 kHz) is 21 W/m for the case of two plates, and 24 W/m for the case of a single plate.

  11. Satellite air temperature estimation for monitoring the canopy layer heat island of Milan

    DEFF Research Database (Denmark)

    Pichierri, Manuele; Bonafoni, Stefania; Biondi, Riccardo

    2012-01-01

    In this work, satellite maps of the urban heat island of Milan are produced using satellite-based infrared sensor data. For this aim, we developed suitable algorithms employing satellite brightness temperatures for the direct air temperature estimation 2 m above the surface (canopy layer), showing...... 2007 and 2010 were processed. Analysis of the canopy layer heat island (CLHI) maps during summer months reveals an average heat island effect of 3–4K during nighttime (with some peaks around 5K) and a weak CLHI intensity during daytime. In addition, the satellite maps reveal a well defined island shape...

  12. Satellite data based approach for the estimation of anthropogenic heat flux over urban areas

    Science.gov (United States)

    Nitis, Theodoros; Tsegas, George; Moussiopoulos, Nicolas; Gounaridis, Dimitrios; Bliziotis, Dimitrios

    2017-09-01

    Anthropogenic effects in urban areas influence the thermal conditions of the environment and cause an increase in atmospheric temperature. Cities are sources of heat and pollution, affecting the thermal structure of the atmosphere above them, which results in the urban heat island effect. In order to analyze the urban heat island mechanism, it is important to estimate the anthropogenic heat flux, which has a considerable impact on the urban energy budget. The anthropogenic heat flux is the result of man-made activities (e.g. traffic, industrial processes, heating/cooling) and thermal releases from the human body. Many studies have underlined the importance of the anthropogenic heat flux for the calculation of the urban energy budget and, subsequently, the estimation of mesoscale meteorological fields over urban areas. Therefore, spatially disaggregated anthropogenic heat flux data, at local and city scales, are of major importance for mesoscale meteorological models. The main objectives of the present work are to improve the quality of such data used as input for mesoscale meteorological model simulations and to enhance the application potential of GIS and remote sensing in the fields of climatology and meteorology. For this reason, the Urban Energy Budget concept is proposed as the foundation for an accurate determination of the anthropogenic heat discharge as a residual term in the surface energy balance. The methodology is applied to the cities of Athens and Paris using Landsat ETM+ remote sensing data. The results will help to improve our knowledge of the anthropogenic heat flux, and the potential for further improvement of the methodology is also discussed.
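
    The residual idea reduces to one line once the other balance terms are known; in the usual urban energy balance notation, Q* + QF = QH + QE + dQS, so QF is whatever closes the budget. The numbers below are illustrative, not values from the Athens or Paris cases.

      def anthropogenic_heat_flux(q_star, q_h, q_e, dq_s):
          """Anthropogenic heat flux QF (W/m2) as the residual of the urban
          surface energy balance Q* + QF = QH + QE + dQS; the individual
          terms come from the satellite-driven balance model."""
          return q_h + q_e + dq_s - q_star

      # Illustrative midday values for a dense urban pixel (W/m2)
      print(anthropogenic_heat_flux(q_star=450.0, q_h=250.0, q_e=120.0, dq_s=140.0))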

  13. Estimation of the relationship between remotely sensed anthropogenic heat discharge and building energy use

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Yuyu; Weng, Qihao; Gurney, Kevin R.; Shuai, Yanmin; Hu, Xuefei

    2012-01-01

    This paper examined the relationship between remotely sensed anthropogenic heat discharge and energy use from residential and commercial buildings across multiple scales in the city of Indianapolis, Indiana, USA. Anthropogenic heat discharge was estimated based on a remote sensing-based surface energy balance model, which was parameterized using land cover, land surface temperature, albedo, and meteorological data. Building energy use was estimated using a GIS-based building energy simulation model in conjunction with Department of Energy/Energy Information Administration survey data, Assessor's parcel data, GIS floor area data, and remote sensing-derived building height data.

  14. Estimation of the Heat Flow Variation in the Chad Basin Nigeria ...

    African Journals Online (AJOL)

    MICHAEL

    ABSTRACT: Wireline logs from 14 oil wells from the Nigerian sector of the Chad Basin were analyzed and interpreted to estimate the heat flow trend in the basin. Geothermal gradients were computed from corrected bottom hole temperatures while the bulk effective thermal conductivity for the different stratigraphic units ...

  15. Estimation of the Heat Flow Variation in the Chad Basin Nigeria ...

    African Journals Online (AJOL)

    Wireline logs from 14 oil wells from the Nigerian sector of the Chad Basin were analyzed and interpreted to estimate the heat flow trend in the basin. Geothermal gradients were computed from corrected bottom hole temperatures while the bulk effective thermal conductivity for the different stratigraphic units encountered in ...
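
    The two Chad Basin records above describe the standard conductive heat-flow product: a geothermal gradient from corrected bottom-hole temperatures multiplied by an effective thermal conductivity. A one-line sketch, with illustrative values rather than Chad Basin data:

      def heat_flow_mW_per_m2(grad_T_C_per_km, k_W_per_mK):
          """Fourier's law q = k * dT/dz; a gradient in C/km times a
          conductivity in W/m/K conveniently gives heat flow in mW/m2."""
          return k_W_per_mK * grad_T_C_per_km

      print(heat_flow_mW_per_m2(30.0, 2.0), "mW/m2")   # -> 60.0 mW/m2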

  16. Estimation of sensible heat flux using the Surface Energy Balance System (SEBS) and ATSR measurements

    NARCIS (Netherlands)

    Jia, L.; Su, Z.; Hurk, van den B.; Menenti, M.; Moene, A.F.; Bruin, de H.A.R.; Baselga Yrisarry, J.J.; Ibanez, M.; Cuesta, A.

    2003-01-01

    This paper describes a modified version of the Surface Energy Balance System (SEBS) as regards the use of radiometric data from space and presents the results of a large area validation study on estimated sensible heat flux, extended over several months. The improvements were made possible by the

  17. Estimation of eddy diffusivity coefficient of heat in the upper layers of equatorial Arabian Sea

    Digital Repository Service at National Institute of Oceanography (India)

    Zavialov, P.O.; Murty, V.S.N.

    in the Central Equatorial Arabian Sea (CEAS). A comparison of the model-computed K_h values with those estimated from the heat balance of the upper layer (50 m) of the sea shows good agreement in the region of weak winds (CEAS) or low turbulent mixing regime...

  18. Comparison of heat flux estimations from two turbulent exchange models based on thermal UAV data.

    Science.gov (United States)

    Hoffmann, Helene; Nieto, Hector; Jensen, Rasmus; Friborg, Thomas

    2015-04-01

    Advantages of UAV (Unmanned Aerial Vehicle) data collection, compared to more traditional data collection, are numerous and already well discussed (Berni et al., 2009; Laliberte et al., 2011; Turner et al., 2012). However, studies investigating the quality and applications of UAV data are crucial if these advantages are to be realized for scientific purposes. In this study, thermal data collected over an agricultural site in Denmark with a fixed-wing UAV are investigated for the estimation of heat fluxes. Estimation of heat fluxes requires high-precision data and careful data processing. Latent, sensible and soil heat fluxes are estimated with two models of the two-source energy modelling scheme, driven by remotely sensed observations of land surface temperature: the original TSEB (Norman et al., 1995) and the DTD (Norman et al., 2000), which builds on the TSEB. The DTD model accounts for errors arising when deriving radiometric temperatures and can to some extent compensate for the fact that thermal cameras are rarely accurate. The DTD model requires an additional set of remotely sensed data during the morning hours of the day for which heat fluxes are to be determined. This makes the DTD model ideal to combine with UAV data, because data acquisition is not tied to fixed overpass times, as it is for satellite images (Guzinski et al., 2013). Based on these data, heat fluxes are computed from the two models and compared with fluxes from an eddy covariance station situated within the same agricultural site. This overall procedure potentially enables an assessment of both the collected thermal UAV data and the two turbulent exchange models. Results reveal that both the TSEB and DTD models compute heat fluxes from thermal UAV data that are within a very reasonable range, and that estimates from the DTD model are in the best agreement with the eddy covariance system.

  19. Numerical estimation of heat distribution from the implantable battery system of an undulation pump LVAD.

    Science.gov (United States)

    Okamoto, Eiji; Makino, Tsutomu; Nakamura, Masatoshi; Tanaka, Shuji; Chinzei, Tsuneo; Abe, Yusuke; Isoyama, Takashi; Saito, Itsuro; Mochizuki, Shu-ichi; Imachi, Kou; Inoue, Yusuke; Mitamura, Yoshinori

    2006-01-01

    We have been developing an implantable battery system using three series-connected lithium ion batteries with an energy capacity of 1,800 mAh to drive an undulation pump left ventricular assist device. However, the lithium ion battery undergoes an exothermic reaction during the discharge phase, and the temperature rise of the lithium ion battery is a critical issue for implantation usage. Heat generation in the lithium ion battery depends on the intensity of the discharge current, and we obtained a relationship between the heat flow from the lithium ion battery q(c)(I) and the intensity of the discharge current I as q(c)(I) = 0.63 x I (W) in in vitro experiments. The temperature distribution of the implantable battery system was estimated by means of three-dimensional finite-element method (FEM) heat transfer analysis using the heat flow function q(c)(I), and we also measured the temperature rise of the implantable battery system in in vitro experiments to verify the estimation. The maximum temperatures of the lithium ion battery and the implantable battery case were measured as 52.2°C and 41.1°C, respectively. The estimated temperature distribution of the implantable battery system agreed well with the measurements made using thermography. In conclusion, FEM heat transfer analysis is promising as a tool to estimate the temperature of the implantable lithium ion battery system under any pump current without the need for animal experiments, and it is a convenient tool for optimization of the heat transfer characteristics of the implantable battery system.

  20. Estimating land surface heat flux using radiometric surface temperature without the need for an extra resistance

    Science.gov (United States)

    Su, H.; Yang, Y.; Liu, S.

    2015-12-01

    Remotely sensed land surface temperature (LST) is a key variable in energy balance studies and is widely used for estimating regional heat flux. However, the inequality between LST and the aerodynamic surface temperature (Taero) poses a great challenge for regional heat flux estimation in one-source energy balance models. In this study, a one-source model for land (OSML) is proposed to estimate regional surface heat flux without the need for an empirical extra resistance. The proposed OSML employs both a conceptual VFC/LST trapezoid model and the electrical analogue formula of sensible heat flux (H) to estimate the radiometric-convective resistance (rae) via a quartic equation. To evaluate the performance of OSML, the model was applied to the Soil Moisture-Atmosphere Coupling Experiment (SMACEX), using a remotely sensed data set at regional scale. Validated against tower observations, the root mean square deviations (RMSD) of H and latent heat flux (LE) from OSML were 47 W/m2 and 51 W/m2, respectively, which is comparable to other published studies. OSML and SEBS (Surface Energy Balance System), compared under the same available energy, indicated that LE estimated by OSML is comparable to that derived from the SEBS model. In further inter-comparisons of rae, the aerodynamic resistance derived from SEBS (ra_SEBS), and the aerodynamic resistance (ra) derived from Brutsaert et al. (2005) in corn and soybean fields, we found that rae and ra_SEBS are comparable. Most importantly, our study indicates that the OSML method is applicable without having to acquire wind speed or specify aerodynamic surface characteristics, and that it is applicable to heterogeneous areas.
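
    The electrical-analogue step the model builds on is compact enough to show directly; here rae is simply supplied, whereas OSML obtains it from the VFC/LST trapezoid and the quartic equation. Values are illustrative.

      def sensible_heat_flux(T_rad, T_air, r_ae, rho=1.2, cp=1004.0):
          """H = rho * cp * (T_rad - T_air) / r_ae, with radiometric surface
          temperature T_rad (K), air temperature T_air (K) and the
          radiometric-convective resistance r_ae (s/m)."""
          return rho * cp * (T_rad - T_air) / r_ae

      print(sensible_heat_flux(T_rad=305.0, T_air=298.0, r_ae=60.0))   # ~141 W/m2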

  1. Correcting anthropogenic ocean heat uptake estimates for the Little Ice Age

    Science.gov (United States)

    Gebbie, Geoffrey

    2017-04-01

    Estimates of anthropogenic ocean heat uptake typically assume that the ocean was in equilibrium during the pre-industrial era. Recent reconstructions of the Common Era, however, show a multi-century surface cooling trend before the Industrial Revolution. Using a time-evolving state estimation method, we find that the 1750 C.E. ocean must have been out of equilibrium in order to fit the H.M.S. Challenger, WOCE, and Argo hydrographic data. When the disequilibrated ocean conditions are taken into account, the inferred ocean heat uptake from 1750-2014 C.E. is revised due to the deep ocean memory of Little Ice Age surface forcing. These effects of ocean disequilibrium should also be considered when interpreting climate sensitivity estimates.

  2. Constraining the Global Ocean Heat Content Through Assimilation of CERES-Derived TOA Energy Imbalance Estimates

    Science.gov (United States)

    Storto, Andrea; Yang, Chunxue; Masina, Simona

    2017-10-01

    The Earth's energy imbalance (EEI) is stored in the oceans for the most part. Thus, estimates of its variability can be ingested in ocean retrospective analyses to constrain the global ocean heat budget. Here we propose a scheme to assimilate top of the atmosphere global radiation imbalance estimates from Clouds and the Earth's Radiant Energy System (CERES) in a coarse-resolution variational ocean reanalysis system (2000-2014). The methodology proves able to shape the heat content tendencies according to the EEI estimates, without compromising the reanalysis accuracy. Spurious variability and underestimation (overestimation) present in experiments with in situ (no) data assimilation disappear when EEI data are assimilated. The warming hiatus present without the assimilation of EEI data is mitigated, inducing ocean warming at depths below 1,500 m and slightly larger in the Southern Hemisphere, in accordance with recent studies. Furthermore, the methodology may be applied to Earth System reanalyses and climate simulations to realistically constrain the global energy budget.

  3. A new global anthropogenic heat estimation based on high-resolution nighttime light data.

    Science.gov (United States)

    Yang, Wangming; Luan, Yibo; Liu, Xiaolei; Yu, Xiaoyong; Miao, Lijuan; Cui, Xuefeng

    2017-08-22

    Consumption of fossil fuel resources leads to global warming and climate change. Apart from the negative impact of greenhouse gases on the climate, the increasing emission of anthropogenic heat from energy consumption also brings significant impacts on urban ecosystems and the surface energy balance. The objective of this work is to develop a new method of estimating the global anthropogenic heat budget and validate it on the global scale with a high precision and resolution dataset. A statistical algorithm was applied to estimate the annual mean anthropogenic heat (AH-DMSP) from 1992 to 2010 at 1×1 km2 spatial resolution for the entire planet. AH-DMSP was validated for both provincial and city scales, and results indicate that our dataset performs well at both scales. Compared with other global anthropogenic heat datasets, the AH-DMSP has a higher precision and finer spatial distribution. Although there are some limitations, the AH-DMSP could provide reliable, multi-scale anthropogenic heat information, which could be used for further research on regional or global climate change and urban ecosystems.

  4. A new global anthropogenic heat estimation based on high-resolution nighttime light data

    Science.gov (United States)

    Yang, Wangming; Luan, Yibo; Liu, Xiaolei; Yu, Xiaoyong; Miao, Lijuan; Cui, Xuefeng

    2017-08-01

    Consumption of fossil fuel resources leads to global warming and climate change. Apart from the negative impact of greenhouse gases on the climate, the increasing emission of anthropogenic heat from energy consumption also brings significant impacts on urban ecosystems and the surface energy balance. The objective of this work is to develop a new method of estimating the global anthropogenic heat budget and validate it on the global scale with a high precision and resolution dataset. A statistical algorithm was applied to estimate the annual mean anthropogenic heat (AH-DMSP) from 1992 to 2010 at 1×1 km2 spatial resolution for the entire planet. AH-DMSP was validated for both provincial and city scales, and results indicate that our dataset performs well at both scales. Compared with other global anthropogenic heat datasets, the AH-DMSP has a higher precision and finer spatial distribution. Although there are some limitations, the AH-DMSP could provide reliable, multi-scale anthropogenic heat information, which could be used for further research on regional or global climate change and urban ecosystems.

  5. Online Capacity Estimation of Lithium-Ion Batteries Based on Novel Feature Extraction and Adaptive Multi-Kernel Relevance Vector Machine

    OpenAIRE

    Yang Zhang; Bo Guo

    2015-01-01

    Prognostics is necessary to ensure the reliability and safety of lithium-ion batteries for hybrid electric vehicles or satellites. This process can be achieved by capacity estimation, which is a direct fading indicator for assessing the state of health of a battery. However, the capacity of a lithium-ion battery onboard is difficult to monitor. This paper presents a data-driven approach for online capacity estimation. First, six novel features are extracted from cyclic charge/discharge cycles...

  6. Experimental and empirical technique to estimate energy decreasing at heating in an oval furnace

    Directory of Open Access Journals (Sweden)

    A. A. Minea

    2012-10-01

    Full Text Available In this paper, experimental and empirical methods are proposed to estimate the heat transfer enhancement in industrial heating processes in oval furnaces. An investigation was conducted to study the suitability of inserting radiant panels at different positions and with different radiating surface areas. Two case studies were considered. The maximum energy saving was obtained for case 5: 32.89% off from the standard experiment (with no panels). The minimum energy saving was obtained for case 10: 11.72% off from the standard experiment (with no panels). Finally, based on the results of this study, a correlation was developed to predict the inner configuration of an oval furnace.

  7. Reproducing Kernels and Variable Bandwidth

    Directory of Open Access Journals (Sweden)

    R. Aceska

    2012-01-01

    Full Text Available We show that a modulation space of type (…) is a reproducing kernel Hilbert space (RKHS). In particular, we explore the special cases of the variable bandwidth spaces of Aceska and Feichtinger (2011) with a suitably chosen weight to provide strong enough decay in the frequency direction. The reproducing kernel property is valid even if (…) does not coincide with any of the classical Sobolev spaces because unbounded bandwidth (globally) is allowed. The reproducing kernel will be described explicitly.

  8. Estimating population heat exposure and impacts on working people in conjunction with climate change.

    Science.gov (United States)

    Kjellstrom, Tord; Freyberg, Chris; Lemke, Bruno; Otto, Matthias; Briggs, David

    2017-08-01

    Increased environmental heat levels as a result of climate change present a major challenge to the health, wellbeing and sustainability of human communities in already hot parts of this planet. This challenge has many facets, from direct clinical health effects of daily heat exposure to indirect effects related to poor air quality, poor access to safe drinking water, poor access to nutritious and safe food and inadequate protection from disease vectors and environmental toxic chemicals. The increasing environmental heat is a threat to environmental sustainability. In addition, social conditions can be undermined by the negative effects of increased heat on daily work and life activities and on local cultural practices. The methodology we describe can be used to produce quantitative estimates of the impacts of climate change on work activities in countries and local communities. We show in maps the increasing heat exposures in the shade expressed as the occupational heat stress index Wet Bulb Globe Temperature. Some tropical and sub-tropical areas already experience serious heat stress, and the continuing heating will substantially reduce work capacity and labour productivity in widening parts of the world. Southern parts of Europe and the USA will also be affected. Even the lowest target for climate change (an average global temperature change of 1.5 °C, at representative concentration pathway RCP2.6) will increase the loss of daylight work hour output due to heat in many tropical areas from less than 2% now up to more than 6% at the end of the century. A global temperature change of 2.7 °C (at RCP6.0) will double this annual heat impact on work in such areas. Calculations of this type of heat impact at country level show that in the USA, the loss of work capacity in moderate level work in the shade will increase from 0.17% now to more than 1.3% at the end of the century based on the 2.7 °C temperature change. The impact is naturally mainly occurring in the

  9. Estimating population heat exposure and impacts on working people in conjunction with climate change

    Science.gov (United States)

    Kjellstrom, Tord; Freyberg, Chris; Lemke, Bruno; Otto, Matthias; Briggs, David

    2017-08-01

    Increased environmental heat levels as a result of climate change present a major challenge to the health, wellbeing and sustainability of human communities in already hot parts of this planet. This challenge has many facets, from direct clinical health effects of daily heat exposure to indirect effects related to poor air quality, poor access to safe drinking water, poor access to nutritious and safe food and inadequate protection from disease vectors and environmental toxic chemicals. The increasing environmental heat is a threat to environmental sustainability. In addition, social conditions can be undermined by the negative effects of increased heat on daily work and life activities and on local cultural practices. The methodology we describe can be used to produce quantitative estimates of the impacts of climate change on work activities in countries and local communities. We show in maps the increasing heat exposures in the shade expressed as the occupational heat stress index Wet Bulb Globe Temperature. Some tropical and sub-tropical areas already experience serious heat stress, and the continuing heating will substantially reduce work capacity and labour productivity in widening parts of the world. Southern parts of Europe and the USA will also be affected. Even the lowest target for climate change (an average global temperature change of 1.5 °C, at representative concentration pathway RCP2.6) will increase the loss of daylight work hour output due to heat in many tropical areas from less than 2% now up to more than 6% at the end of the century. A global temperature change of 2.7 °C (at RCP6.0) will double this annual heat impact on work in such areas. Calculations of this type of heat impact at country level show that in the USA, the loss of work capacity in moderate level work in the shade will increase from 0.17% now to more than 1.3% at the end of the century based on the 2.7 °C temperature change. The impact is naturally mainly occurring in the southern

  10. Surface layer scintillometry for estimating the sensible heat flux component of the surface energy balance

    Directory of Open Access Journals (Sweden)

    M. J. Savage

    2010-01-01

    Full Text Available The relatively recently developed scintillometry method, with a focus on the dual-beam surface layer scintillometer (SLS), allows boundary layer atmospheric turbulence and surface sensible heat and momentum fluxes to be estimated in real time. Much of the previous research using the scintillometer method has involved the large aperture scintillometer, with only a few studies using the SLS method. The SLS method has been mainly used by agrometeorologists, hydrologists and micrometeorologists for atmospheric stability and surface energy balance studies to obtain estimates of sensible heat, from which evaporation estimates representing areas of one hectare or larger are possible. Other applications include the use of the SLS method to obtain crucial input parameters for atmospheric dispersion and turbulence models. The SLS method relies upon optical scintillation of a horizontal laser beam between transmitter and receiver, over a separation distance typically between 50 and 250 m, caused by refractive index inhomogeneities in the atmosphere that arise from turbulent fluctuations in air temperature and, to a much lesser extent, fluctuations in water vapour pressure. Measurements of SLS beam transmission allow the turbulence of the atmosphere to be determined, from which sub-hourly, real-time and in situ path-weighted fluxes of sensible heat and momentum may be calculated by application of Monin-Obukhov similarity theory. Unlike the eddy covariance (EC) method, for which corrections for flow distortion and coordinate rotation are applied, no corrections to the SLS measurements are applied apart from a correction for water vapour pressure. Path-weighted SLS estimates over the propagation path are obtained. The SLS method also offers high temporal measurement resolution and usually greater spatial coverage compared to EC, Bowen ratio energy balance, surface renewal and other sensible heat measurement methods. Applying the shortened surface

  11. Dynamics of Soil Heat Flux in Lowland Area: Estimating the Soil Thermal Conductivity

    Science.gov (United States)

    Zimmer, T.; Silveira, M. V.; Roberti, D. R.

    2013-05-01

    In this work, soil thermal conductivity estimates are presented for a flooded irrigated rice field located in Paraíso do Sul for two distinct periods. The thermal conductivity is higher when the heat storage is higher and the soil surface temperature is lower. The soil thermal conductivity also depends on soil texture, porosity and moisture; it therefore varies from soil to soil and, within the same soil, with soil moisture. For approximately 80% of the growing season, lowland flooded irrigated rice ecosystems stay under a 5-10 cm water layer, which affects the partitioning of the energy and water balance components. Furthermore, this planting technique differs substantially from the upland non-irrigated or irrigated crop ecosystems where the majority of observational studies have been conducted. In the present work, the dynamics of the soil heat flux (G) are analyzed and the soil thermal conductivity (Ks) is estimated using experimental data on soil heat flux and soil temperature from a rice paddy farm at a subtropical location in Southern Brazil. In this region, rice grows once a year in river lowlands and wetlands, while the ground is kept bare during the remainder of the year. The soil type is Planossolo Hidromórfico Distrófico, characterized as a mix between sandy and clay soil. The soil heat flux (G) was measured with a Hukseflux sensor (HFP01SC-L) 7 cm below the soil surface. The soil temperatures at 5 cm and 10 cm depth were measured using the STP01 sensor. The experimental soil heat flux was compared with the soil heat flux estimated in two ways: (1) using a known Ks from the literature for this type of soil in saturated conditions (Ks = 1.58); (2) using Ks estimated by inversion of the equation Qg = -Ks * ((T10 - T5) / (Z2 - Z1)), where T10 and T5 are the temperatures at 10 and 5 cm depth and Z2 - Z1 is the distance between the temperature measurement positions. The study period for estimating the Ks
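
    The inversion in option (2) is a single line; the example values below are illustrative, not measurements from the site.

      import numpy as np

      def soil_thermal_conductivity(G, T5, T10, z5=0.05, z10=0.10):
          """Invert Qg = -Ks * (T10 - T5) / (z10 - z5) for Ks (W/m/K), given
          measured heat flux G (W/m2) and soil temperatures (C) at depths
          z5 and z10 (m)."""
          grad = (np.asarray(T10) - np.asarray(T5)) / (z10 - z5)
          return -np.asarray(G) / grad

      # e.g. an upward nighttime flux of 15 W/m2 with soil warmer at depth
      print(soil_thermal_conductivity(G=-15.0, T5=18.0, T10=18.6))   # ~1.25 W/m/K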

  12. Explicit signal to noise ratio in reproducing kernel Hilbert spaces

    DEFF Research Database (Denmark)

    Gomez-Chova, Luis; Nielsen, Allan Aasbjerg; Camps-Valls, Gustavo

    2011-01-01

    an alternative kernel MNF (KMNF) in which the noise is explicitly estimated in the reproducing kernel Hilbert space. This enables KMNF to deal with non-linear relations between the noise and the signal features jointly. Results show that the proposed KMNF provides the most noise-free features when compared...... with PCA, MNF, KPCA, and the previous version of KMNF. Extracted features with the explicit KMNF also improve hyperspectral image classification....

  13. A case study of length frequency distribution analysis of Litopenaeus vannamei (Boone, 1931) using kernel density estimators

    Directory of Open Access Journals (Sweden)

    Gustavo Rivera-Velázquez

    2010-01-01

    Full Text Available This paper introduces the use of kernel density estimators (KDEs) as a modern tool for examining the length-frequency distribution of Litopenaeus vannamei in its estuarine stage. The data were obtained monthly at 22 sampling sites in the Carretas-Pereyra lagoon-estuarine system, from March 2004 to August 2005, covering both seasons of the year: dry and rainy. Shrimp were caught with a 4 m diameter cast net of 10 mm mesh size. The size distribution of each sample was analyzed by means of KDEs, using the Gaussian weighting function and bootstrap bandwidth. The dominant modes over time were fitted to the von Bertalanffy growth function. The results suggest continuous recruitment to the fishing area, but with a bimodal pulse. The mean growth rate of the shrimp was the same in both climatic seasons. The estimated parameters are similar to those reported by other authors in previous research on the species in nearby systems. The study shows how the use of KDEs followed by a growth analysis method (modal progression analysis in this case) is an objective and precise approach to the study of length-frequency distributions and their application in estimating key population dynamics parameters of species of fisheries importance.

  14. Inverse natural convection problem of estimating wall heat flux using a moving sensor

    Energy Technology Data Exchange (ETDEWEB)

    Park, H.M.; Chung, O.Y.

    1999-11-01

    Inverse heat transfer problems have many applications in various branches of science and engineering. Here, the inverse problem of determining the heat flux at the bottom wall of a two-dimensional cavity from temperature measurements in the domain is considered. The Boussinesq equation is used to model the natural convection induced by the wall heat flux. The inverse natural convection problem is posed as a minimization problem for the performance function, which is the sum of squared residuals between calculated and observed temperatures, by means of a conjugate gradient method. Instead of employing several fixed sensors, a single sensor is used, moving at a given frequency over the bottom wall. The present method solves the inverse natural convection problem accurately without a priori information about the unknown function to be estimated.

  15. Metabolisable energy values of whole palm kernel and palm kernel ...

    African Journals Online (AJOL)

    A series of four experiments was conducted in which 30 g DM of whole palm kernel (WPK) and of Palm Kernel Oil Sludge (PKOS) were force-fed to laying hens and adult broiler chickens. The length of the collection periods was the same (24, 30, 48 and 60 hr) for both ingredients. The ingredients and their faecal materials ...

  16. Vertical heat flux in the ocean: Estimates from observations and from a coupled general circulation model

    Science.gov (United States)

    Cummins, Patrick F.; Masson, Diane; Saenko, Oleg A.

    2016-06-01

    The net heat uptake by the ocean in a changing climate involves small imbalances between the advective and diffusive processes that transport heat vertically. Generally, it is necessary to rely on global climate models to study these processes in detail. In the present study, it is shown that a key component of the vertical heat flux, namely that associated with the large-scale mean vertical circulation, can be diagnosed over extra-tropical regions from global observational data sets. This component is estimated based on the vertical velocity obtained from the geostrophic vorticity balance, combined with estimates of absolute geostrophic flow. Results are compared with the output of a non-eddy resolving, coupled atmosphere-ocean general circulation model. Reasonable agreement is found in the latitudinal distribution of the vertical heat flux, as well as in the area-integrated flux below about 250 m depth. The correspondence with the coupled model deteriorates sharply at depths shallower than 250 m due to the omission of equatorial regions from the calculation. The vertical heat flux due to the mean circulation is found to be dominated globally by the downward contribution from the Southern Hemisphere, in particular the Southern Ocean. This is driven by the Ekman vertical velocity which induces an upward transport of seawater that is cold relative to the horizontal average at a given depth. The results indicate that the dominant characteristics of the vertical transport of heat due to the mean circulation can be inferred from simple linear vorticity dynamics over much of the ocean.

  17. Reproducing kernel Hilbert spaces with odd kernels in price prediction.

    Science.gov (United States)

    Krejník, Miloš; Tyutin, Anton

    2012-10-01

    For time series of futures contract prices, the expected price change is modeled conditional on past price changes. The proposed model takes the form of regression in a reproducing kernel Hilbert space with the constraint that the regression function must be odd. It is shown how the resulting constrained optimization problem can be reduced to an unconstrained one through appropriate modification of the kernel. In particular, it is shown how odd, even, and other similar kernels emerge naturally as the reproducing kernels of Hilbert subspaces induced by respective symmetry constraints. To test the validity and practical usefulness of the oddness assumption, experiments are run with large real-world datasets on four futures contracts, and it is demonstrated that using odd kernels results in a higher predictive accuracy and a reduced tendency to overfit.

  18. On the optimal experimental design for heat and moisture parameter estimation

    CERN Document Server

    Berger, Julien; Mendes, Nathan

    2016-01-01

    In the context of estimating the material properties of porous walls based on on-site measurements and an identification method, this paper presents the concept of Optimal Experiment Design (OED). It aims at searching for the best experimental conditions in terms of the quantity and position of sensors and the boundary conditions imposed on the material. These optimal conditions ensure the maximum accuracy of the identification method and thus of the estimated parameters. The search for the OED is done by using the Fisher information matrix and a priori knowledge of the parameters. The methodology is applied to two case studies. The first one deals with purely conductive heat transfer. The concept of optimal experiment design is detailed and verified with 100 inverse problems for different experiment designs. The second case study combines a strong coupling between heat and moisture transfer through a porous building material. The methodology presented is based on a scientific formalism for efficient planning of experim...

  19. An inverse radiation problem of estimating heat-transfer coefficient in participating media

    Energy Technology Data Exchange (ETDEWEB)

    Park, H.M.; Lee, W.J. [Sogang University, Seoul (Republic of Korea). Dept. of Chemical Engineering

    2002-06-01

    In the radiant cooler, where the hot gas from the pulverized coal gasifier or combustor is cooled to generate steam, the wall heat-transfer coefficient varies due to ash deposition. The authors investigated an inverse radiation problem of estimating the heat-transfer coefficient from temperature measurement in the radiant cooler. The inverse radiation problem is solved through the minimization of a performance function, which is expressed by the sum of square residuals between calculated and observed temperature, utilizing the conjugate gradient method. The gradient of the performance function is evaluated by means of the improved adjoint variable method, which resolves the difficulty associated with the singularity of the adjoint equation through its inherent regularization property. The effects of the number of measurement points and measurement noise on the accuracy of estimation are also investigated.

  20. The Diurnal Cycle of Diabatic Heating and TRMM Precipitation Estimates in West Africa

    Science.gov (United States)

    Davis, A. J.

    2012-12-01

    Numerous investigations have examined the diurnal cycle of convective activity in West Africa based exclusively on satellite observations. However, a unique opportunity exists to study this problem using combined in situ and satellite data thanks to the African Monsoon Multidisciplinary Analysis (AMMA)/NASA AMMA (NAMMA) field campaign that took place in 2006. In particular, a network of radiosonde launch sites was set up from June through September 2006, with the most intensive observations collected over parts of Niger, Nigeria, Benin, Togo, and Ghana. In the present study, composite vertical profiles of diabatic heating through the diurnal cycle are computed within this region of West Africa, based on the AMMA sounding data. Then, these heating profiles are placed within the context of precipitation estimates derived from several TRMM products for the same time period. In particular, the structures and timing of the heating profiles are compared with precipitation feature information provided by the TRMM database of the University of Utah Tropical Meteorology Group. This dataset includes precipitation features based on applying thresholds to data from several different instruments, including the PR, TMI, and VIRS. Differences in the composite diurnal timing of rainfall as detected by these various types of precipitation features are explored and compared with the signatures of convective and stratiform precipitation suggested by the observed diabatic heating. Alignment between the timing of the diabatic profiles and more-processed satellite products, such as 3B42 rain estimates, is also assessed.

  1. Estimating thermal diffusivity and specific heat from needle probe thermal conductivity data

    Science.gov (United States)

    Waite, W.F.; Gilbert, L.Y.; Winters, W.J.; Mason, D.H.

    2006-01-01

    Thermal diffusivity and specific heat can be estimated from thermal conductivity measurements made using a standard needle probe and a suitably high data acquisition rate. Thermal properties are calculated from the measured temperature change in a sample subjected to heating by a needle probe. Accurate thermal conductivity measurements are obtained from a linear fit to many tens or hundreds of temperature change data points. In contrast, thermal diffusivity calculations require a nonlinear fit to the measured temperature change occurring in the first few tenths of a second of the measurement, resulting in a lower accuracy than that obtained for thermal conductivity. Specific heat is calculated from the ratio of thermal conductivity to diffusivity, and thus can have an uncertainty no better than that of the diffusivity estimate. Our thermal conductivity measurements of ice Ih and of tetrahydrofuran (THF) hydrate, made using a 1.6 mm outer diameter needle probe and a data acquisition rate of 18.2 points/s, agree with published results. Our thermal diffusivity and specific heat results reproduce published results within 25% for ice Ih and 3% for THF hydrate. © 2006 American Institute of Physics.
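
    A sketch of the nonlinear early-time fit, assuming the standard infinite line-source model for the probe response; the heater power, probe radius and synthetic data below are illustrative, not the paper's measurements.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import exp1

      r = 0.8e-3      # probe radius, m (1.6 mm OD needle)
      Q = 2.0         # heater power per unit length, W/m (assumed)

      def line_source(t, k, alpha):
          """Infinite line-source temperature rise at radius r after time t."""
          return (Q / (4.0 * np.pi * k)) * exp1(r**2 / (4.0 * alpha * t))

      # The early-time probe record would go here; synthesize one instead
      t = np.linspace(0.05, 2.0, 200)
      dT = line_source(t, 2.2, 1.2e-6) \
           + np.random.default_rng(2).normal(0.0, 0.002, t.size)

      (k_fit, a_fit), _ = curve_fit(line_source, t, dT, p0=[1.0, 1e-6])
      c_vol = k_fit / a_fit     # volumetric heat capacity = k / alpha
      print(k_fit, a_fit, c_vol)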

  2. Nonlinear Forecasting With Many Predictors Using Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter; Groenen, Patrick J.F.; Heij, Christiaan

    This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predictive regression model is based on a shrinkage estimator to avoid overfitting. We extend the kernel ridge regression methodology to enable its use for economic time-series forecasting, by including lags of the dependent variable or other individual variables as predictors, as typically desired in macroeconomic and financial applications. Monte Carlo simulations as well as an empirical application to various key measures of real economic activity confirm that kernel ridge regression can produce more accurate forecasts than traditional linear and nonlinear methods for dealing with many predictors based...
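
    A minimal sketch of the setup with scikit-learn's KernelRidge: many predictors plus lags of the target enter a Gaussian-kernel ridge regression, with the ridge penalty supplying the shrinkage. Data and hyperparameters are illustrative.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge

      rng = np.random.default_rng(3)
      T, p, n_lags = 300, 50, 2
      X = rng.normal(size=(T, p))                              # many predictors
      y = np.tanh(X[:, 0] * X[:, 1]) + 0.1 * rng.normal(size=T)

      # Feature matrix: contemporaneous predictors plus lags of the target
      Z = np.column_stack([X[n_lags:]] +
                          [y[n_lags - l: T - l] for l in range(1, n_lags + 1)])
      y_t = y[n_lags:]

      model = KernelRidge(kernel="rbf", gamma=1.0 / p, alpha=1.0)
      model.fit(Z[:-50], y_t[:-50])
      print("out-of-sample R^2:", model.score(Z[-50:], y_t[-50:]))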

  3. Using forecast and observed weather data to assess performance of forecast products in identifying heat waves and estimating heat wave effects on mortality.

    Science.gov (United States)

    Zhang, Kai; Chen, Yeh-Hsin; Schwartz, Joel D; Rood, Richard B; O'Neill, Marie S

    2014-09-01

    Heat wave and health warning systems are activated based on forecasts of health-threatening hot weather. We estimated heat-mortality associations based on forecast and observed weather data in Detroit, Michigan, and compared the accuracy of forecast products for predicting heat waves. We derived and compared apparent temperature (AT) and heat wave days (with heat waves defined as ≥ 2 days of daily mean AT ≥ 95th percentile of warm-season average) from weather observations and six different forecast products. We used Poisson regression with and without adjustment for ozone and/or PM10 (particulate matter with aerodynamic diameter ≤ 10 μm) to estimate and compare associations of daily all-cause mortality with observed and predicted AT and heat wave days. The 1-day-ahead forecast of a local operational product, Revised Digital Forecast, had about half the number of false positives compared with all other forecasts. On average, controlling for heat waves, days with observed AT = 25.3°C were associated with 3.5% higher mortality (95% CI: -1.6, 8.8%) than days with AT = 8.5°C. Observed heat wave days were associated with 6.2% higher mortality (95% CI: -0.4, 13.2%) than non-heat wave days. The accuracy of predictions varied, but associations between mortality and forecast heat generally tended to overestimate heat effects, whereas associations with forecast heat waves tended to underestimate heat wave effects, relative to associations based on observed weather metrics. Our findings suggest that incorporating knowledge of local conditions may improve the accuracy of predictions used to activate heat wave and health warning systems.
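
    The study's heat-wave definition (runs of at least 2 consecutive days with daily mean AT at or above the warm-season 95th percentile) is easy to operationalize; a sketch, with the daily AT series and warm-season mask assumed to be supplied by the caller:

      import numpy as np

      def heat_wave_days(at_daily, warm_season_mask, min_run=2, pct=95):
          """Flag days belonging to runs of >= min_run consecutive days with
          daily mean apparent temperature at or above the pct-th percentile
          of the warm-season record."""
          at = np.asarray(at_daily)
          thresh = np.percentile(at[warm_season_mask], pct)
          hot = at >= thresh
          flags = np.zeros(hot.size, dtype=bool)
          run = 0
          for i, h in enumerate(hot):
              run = run + 1 if h else 0
              if run == min_run:                 # mark the whole qualifying run
                  flags[i - min_run + 1: i + 1] = True
              elif run > min_run:
                  flags[i] = True
          return flags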

  4. Estimating regional distribution of surface heat fluxes by combining satellite data and a heat budget model over the Kherlen River Basin, Mongolia

    Science.gov (United States)

    Matsushima, Dai

    2007-01-01

    The regional distribution of surface heat fluxes and related parameters over a semi-arid region was estimated using a technique that incorporates the thermal-infrared brightness temperature from a satellite into a heat budget model of the land surface including vegetation canopy. We studied the western part of the Kherlen River Basin in Mongolia, where typical steppe dominates, including forest-steppe in the northern part and dry-steppe in the southern part of the basin. Our goal was to estimate the temporal change of surface heat fluxes at a location in the typical steppe over a growing season, and to estimate the spatial distribution of surface heat fluxes over the study area. Seven parameters, including the bulk transfer coefficients, the evaporation efficiency, and the subsurface thermal inertia, which are relevant to the surface heat fluxes, were optimized employing the simplex method. To compensate for insufficient satellite data samples to reproduce the diurnal change of surface heat fluxes, the spatial distribution of the surface brightness temperature was used in the optimization rather than the diurnal change, which is referred to as spatial optimization. Diurnal changes in the surface heat fluxes estimated by spatial optimization were validated by observation. The surface heat fluxes were reasonably accurately reproduced on a daily basis, with the root-mean-square error of the sensible and the latent heat within 15 W m-2 over the growing season. The evaporation efficiency of the canopy and the subsurface thermal inertia optimized in this study correlated well with the volumetric soil water content in a shallow layer on a daily basis, which suggests that thermal inertia can be an indicator of water conditions in a shallow subsurface layer. The spatial distribution of estimated sensible and latent heat after rainfall on successive summer days is discussed.

  5. Application of Neumann-Kopp rule for the estimation of heat capacity of mixed oxides

    Energy Technology Data Exchange (ETDEWEB)

    Leitner, J., E-mail: jindrich.leitner@vscht.cz [Department of Solid State Engineering, Institute of Chemical Technology Prague, Technicka 5, 166 28 Prague 6 (Czech Republic); Vonka, P. [Department of Physical Chemistry, Institute of Chemical Technology Prague, Technicka 5, 166 28 Prague 6 (Czech Republic); Sedmidubsky, D. [Department of Inorganic Chemistry, Institute of Chemical Technology Prague, Technicka 5, 166 28 Prague 6 (Czech Republic); European Commission, JRC, Institute for Transuranium Elements, Postbox 2340, D-76125 Karlsruhe (Germany); Svoboda, P. [Department of Condensed Matter Physics, Faculty of Mathematics and Physics, Charles University, Ke Karlovu 5, 120 00 Prague 2 (Czech Republic)

    2010-01-10

    The empirical Neumann-Kopp rule (NKR) for the estimation of the temperature dependence of the heat capacity of mixed oxides is analyzed. NKR gives a reasonable estimate of C_pm for most mixed oxides around room temperature, but at both low and high temperatures the accuracy of the estimate is substantially lowered. At very low temperatures, the validity of NKR is shown to be predominantly determined by the relation between the characteristic Debye and Einstein temperatures of a mixed oxide and its constituents. At high temperatures, the correlation between their molar volumes, volume expansion coefficients and compressibilities takes the dominance. In cases where the formation of a mixed oxide is not accompanied by any volume change, the difference between the dilatation contributions to the heat capacity of a mixed oxide and its constituents is exclusively negative. It turns out that in the high-temperature range, where the contribution of harmonic lattice vibrations approaches the 3NR limit, Δ_ox C_p assumes negative values. For more complex oxides whose heat capacity has contributions from terms such as magnetic ordering or electronic excitations, the applicability of NKR is restricted to the lattice and dilatation terms.
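
    The rule itself is just a stoichiometric sum; a sketch, with illustrative room-temperature heat capacities for the constituent binary oxides:

      def neumann_kopp_cp(constituents):
          """Neumann-Kopp estimate: Cp of a mixed oxide as the stoichiometric
          sum of constituent binary-oxide heat capacities at the same T.
          `constituents` is a list of (Cp in J/K/mol, stoichiometric factor)."""
          return sum(cp * nu for cp, nu in constituents)

      # e.g. SrBi2O4 from SrO and Bi2O3 (Cp values illustrative)
      print(neumann_kopp_cp([(45.0, 1.0),      # SrO
                             (114.0, 1.0)]))   # Bi2O3 -> ~159 J/K/mol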

  6. Research on Adjoint Kernelled Quasidifferential

    Directory of Open Access Journals (Sweden)

    Si-Da Lin

    2014-01-01

    Full Text Available The quasidifferential of a quasidifferentiable function in the sense of Demyanov and Rubinov is not uniquely defined. Xia proposed the notion of the kernelled quasidifferential, which is expected to be a representative for the equivalence class of quasidifferentials. Although the kernelled quasidifferential is known to have good algebraic properties and geometric structure, it is still not very convenient for calculating the kernelled quasidifferentials of −f and min{fi | i ∈ I}, where I is a finite index set and f and the fi are kernelled quasidifferentiable functions. In this paper, the notion of the adjoint kernelled quasidifferential, which is well-defined for −f and min{fi | i ∈ I}, is employed as a representative of the equivalence class of quasidifferentials. Some algebraic properties of the adjoint kernelled quasidifferential are given and the existence of the adjoint kernelled quasidifferential is explored by means of the minimal quasidifferential and the Demyanov difference of convex sets. Under some conditions, a formula of the adjoint kernelled quasidifferential is presented.
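    For orientation, recall the Demyanov-Rubinov definition underlying the abstract: f is quasidifferentiable at x if its directional derivative admits the representation

        $$ f'(x;g) \;=\; \max_{v \in \underline{\partial} f(x)} \langle v, g\rangle \;+\; \min_{w \in \overline{\partial} f(x)} \langle w, g\rangle, $$

    where the pair $Df(x) = [\underline{\partial} f(x), \overline{\partial} f(x)]$ of convex compact sets is the quasidifferential; any two pairs inducing the same directional derivative are equivalent, which is why a canonical representative such as the (adjoint) kernelled quasidifferential is sought.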

  7. IMPROVING THE CHOICE OF HIGHER ORDER UNIVARIATE KERNELS THROUGH BIAS REDUCTION TECHNIQUE

    African Journals Online (AJOL)

    IMPROVING THE CHOICE OF HIGHER ORDER UNIVARIATE KERNELS THROUGH BIAS REDUCTION TECHNIQUE. J. E. Osemwenkhae and J. I. Odiase. Department of Mathematics, University of Benin, Benin City, Nigeria. ABSTRACT. Within the last two decades, higher order univariate kernels have been under focus ...

  8. A new approach to the joined estimation of the heat generated by a semicontinuous emulsion polymerization Qr and the overall heat exchange parameter UA

    Directory of Open Access Journals (Sweden)

    Freire F. B.

    2004-01-01

    Full Text Available This work is concerned with the coupled estimation of the heat generated by the reaction (Qr) and the overall heat transfer parameter (UA) during the terpolymerization of styrene, butyl acrylate and methyl methacrylate from temperature measurements and the reactor heat balance. By making specific assumptions about the dynamics of the evolution of UA and Qr, we propose a cascade of observers to successively estimate these two parameters without the need for additional measurements of on-line samples. One further aspect of our approach is that only the energy balance around the reactor was employed. This means that the flow rate of the cooling jacket fluid was not required.

  9. Viscosity kernel of molecular fluids

    DEFF Research Database (Denmark)

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

    The wave-vector dependent shear viscosities for butane and freely jointed chains have been determined. The transverse momentum density and stress autocorrelation functions have been determined by equilibrium molecular dynamics in both atomic and molecular hydrodynamic formalisms. The density, temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have by contrast less impact on the overall normalized shape. Functional forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means ...

  10. Estimation of ground heat flux from soil temperature over a bare soil

    Science.gov (United States)

    An, Kedong; Wang, Wenke; Wang, Zhoufeng; Zhao, Yaqian; Yang, Zeyuan; Chen, Li; Zhang, Zaiyong; Duan, Lei

    2017-08-01

    Ground soil heat flux, G0, is a difficult-to-measure but important component of the surface energy budget. Over the past years, many methods have been proposed to estimate G0; however, the application of these methods has seldom been validated and assessed under different weather conditions. In this study, three popular models (force-restore, conduction-convection, and harmonic) and one widely used method (plate calorimetric), all of which have performed well in the literature, were investigated using field data to estimate daily G0 on clear, cloudy, and rainy days, with the gradient calorimetric method taken as the reference for assessing accuracy. The results showed that the harmonic model reproduced the G0 curve well on clear days, but yielded large errors on cloudy and rainy days. The force-restore model worked well only under rainfall conditions and performed poorly in estimating G0 under rain-free conditions. On the contrary, the conduction-convection model was acceptable for determining G0 under rain-free conditions, but generated large errors on rainy days. More importantly, the plate calorimetric method was the best for estimating G0 under different weather conditions compared with the three models, although its performance is affected by the placement depth of the heat flux plate. As a result, the heat flux plate is recommended to be buried as close as possible to the surface under clear conditions, whereas under cloudy and rainy conditions a plate placed at a depth of around 0.075 m yielded G0 well. Overall, the findings of this paper provide guidelines for acquiring more accurate estimates of G0 under different weather conditions, which could improve surface energy balance estimation in the field.
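    A minimal sketch of the plate calorimetric correction described above (our variable names and numbers, not the study's code): the surface flux G0 is the plate-measured flux at depth plus the change in heat storage of the soil layer above the plate.

        import numpy as np

        def g0_plate_calorimetric(g_plate, temps, dt, dz, c_v):
            # g_plate: flux measured by the plate (W/m^2); temps: array of
            # shape (n_times, n_layers) for the layers above the plate;
            # dt: time step (s); dz: layer thickness (m);
            # c_v: volumetric heat capacity of soil (J/(m^3*K)).
            dT_dt = np.gradient(temps, dt, axis=0)       # warming rate, K/s
            storage = c_v * dz * dT_dt.sum(axis=1)       # stored above plate
            return g_plate + storage

        times = np.arange(0.0, 3600.0, 600.0)
        temps = 290.0 + 1e-4 * times[:, None] * np.ones((1, 3))   # 3 layers
        print(g0_plate_calorimetric(np.full(times.size, 40.0),
                                    temps, 600.0, 0.025, 2.5e6))

    Burying the plate shallower shrinks the storage term and its associated uncertainty, which is consistent with the depth recommendations above.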

  11. Robotic intelligence kernel

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between operator intervention and robot initiative and may include multiple levels, with at least a teleoperation mode configured to maximize operator intervention and minimize robot initiative, and an autonomous mode configured to minimize operator intervention and maximize robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.

  12. Flexible kernel memory.

    Directory of Open Access Journals (Sweden)

    Dimitri Nowicki

    Full Text Available This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, analogous in feature space to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; it can increase and decrease without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on a series of morphed faces.

  13. Experimental estimation of convective heat transfer coefficient from pulsating semi-confined impingement air slot jet by using inverse method

    Science.gov (United States)

    Farahani, Somayeh Davoodabadi; Kowsary, Farshad

    2017-09-01

    An experimental study of a pulsating semi-confined impinging slot jet has been performed. The effect of pulsation frequency was examined for various Reynolds numbers and nozzle-to-plate distances. The convective heat transfer coefficient is estimated using the temperatures measured in the target plate and the conjugate gradient method with adjoint equation. At low Reynolds numbers the heat transfer coefficient is essentially unaffected by pulsation; at higher Reynolds numbers (Re > 3000), the heat transfer coefficient is affected by pulsation above a particular frequency. In this study, the threshold Strouhal number (St) is 0.11, and no significant heat transfer enhancement was obtained below it. Above the threshold, the thermal resistance is smaller during each pulse owing to the newly forming thermal boundary layers, and the heat transfer coefficient increases as the thermal resistance decreases. This study shows that the maximum enhancement in heat transfer due to pulsation occurs at St = 0.169. The results also show that the configuration geometry has an important effect on heat transfer performance in pulsed impinging jets, with part of the enhancement attributable to flow reflected by the confinement plate.

  14. Estimating the Condition of the Heat Resistant Lining in an Electrical Reduction Furnace

    Directory of Open Access Journals (Sweden)

    Jan G. Waalmann

    1988-01-01

    Full Text Available This paper presents a system for estimating the condition of the heat resistant lining in an electrical reduction furnace for ferrosilicon. The system uses temperatures measured with thermocouples placed on the outside of the furnace pot. These measurements are used, together with a mathematical model of the temperature distribution in the lining, in a recursive least squares algorithm to estimate the position of 'the transformation front'. The system is part of a monitoring system which is being developed in the AIP project 'Condition monitoring of strongly exposed process equipment in the ferroalloy industry'. The estimator runs on-line, and results are presented in colour graphics on a display unit. The goal is to locate the transformation front with an accuracy of ±5 cm.
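    The recursive least squares update at the core of such an estimator takes the standard textbook form sketched below (a generic illustration, not the authors' furnace model):

        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.99):
            # theta: parameter estimate; P: covariance; phi: regressor;
            # y: new measurement; lam: forgetting factor (<1 tracks drift).
            phi = phi.reshape(-1, 1)
            K = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
            theta = theta + (K * (y - phi.T @ theta)).ravel()
            P = (P - K @ phi.T @ P) / lam
            return theta, P

        rng = np.random.default_rng(0)
        theta, P = np.zeros(2), np.eye(2) * 100.0
        for t in range(200):                 # synthetic measurement stream
            phi = np.array([1.0, 0.1 * t])
            y = 2.0 + 0.5 * (0.1 * t) + rng.normal(0.0, 0.01)
            theta, P = rls_update(theta, P, phi, y)
        print(theta)                         # approaches [2.0, 0.5]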

  15. Structural observability analysis and EKF based parameter estimation of building heating models

    Directory of Open Access Journals (Sweden)

    D.W.U. Perera

    2016-07-01

    Full Text Available Research on enhanced energy-efficient buildings has received much recognition in recent years owing to their high energy consumption. Increasing energy needs can be precisely controlled by practicing advanced controllers for building Heating, Ventilation, and Air-Conditioning (HVAC) systems. Advanced controllers require a mathematical building heating model to operate, and these models need to be accurate and computationally efficient. One main concern associated with such models is the accurate estimation of the unknown model parameters. This paper presents the feasibility of implementing a simplified building heating model and the computation of physical parameters using an off-line approach. Structural observability analysis is conducted using graph-theoretic techniques to analyze the observability of the developed system model. Then the Extended Kalman Filter (EKF) algorithm is utilized for parameter estimation using real measurements of a single-zone building. The simulation-based results confirm that even with a simple model, the EKF follows the state variables accurately. The predicted parameters vary depending on the inputs and disturbances.
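    A hedged sketch of the idea (our toy single-zone model, not the paper's): the unknown thermal resistance R and capacitance C are appended to the measured zone temperature as random-walk states, and a standard EKF predict/update cycle estimates all three jointly.

        import numpy as np

        dt, Q, T_out = 60.0, 1500.0, 278.0          # step (s), heater (W), K
        R_true, C_true = 0.01, 1.0e6                # "plant" parameters

        def f(x):                                   # x = [T, R, C]
            T, R, C = x
            return np.array([T + dt * (Q + (T_out - T) / R) / C, R, C])

        def F_jac(x):                               # Jacobian of f
            T, R, C = x
            return np.array(
                [[1.0 - dt / (R * C),
                  -dt * (T_out - T) / (R ** 2 * C),
                  -dt * (Q + (T_out - T) / R) / C ** 2],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])

        H = np.array([[1.0, 0.0, 0.0]])             # only T is measured
        Qn, Rn = np.diag([1e-4, 1e-8, 1e2]), np.array([[0.01]])
        x, P = np.array([290.0, 0.02, 5.0e5]), np.diag([1.0, 1e-3, 1e11])

        T_sim, rng = 290.0, np.random.default_rng(0)
        for _ in range(5000):
            T_sim += dt * (Q + (T_out - T_sim) / R_true) / C_true   # plant
            z = T_sim + rng.normal(0.0, 0.1)
            Fk = F_jac(x); x = f(x); P = Fk @ P @ Fk.T + Qn         # predict
            S = H @ P @ H.T + Rn
            K = P @ H.T @ np.linalg.inv(S)                          # update
            x = x + (K @ (np.array([[z]]) - H @ x.reshape(-1, 1))).ravel()
            P = (np.eye(3) - K @ H) @ P
        print(x)   # T tracks the plant; R and C drift toward the true values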

  16. A single-probe heat pulse method for estimating sap velocity in trees.

    Science.gov (United States)

    López-Bernal, Álvaro; Testi, Luca; Villalobos, Francisco J

    2017-10-01

    Available sap flow methods are still far from being simple, cheap and reliable enough to be used beyond very specific research purposes. This study presents and tests a new single-probe heat pulse (SPHP) method for monitoring sap velocity in trees using a single-probe sensor, rather than the multi-probe arrangements used up to now. Based on the fundamental conduction-convection principles of heat transport in sapwood, convective velocity (Vh) is estimated from the temperature increase in the heater after the application of a heat pulse (ΔT). The method was validated against measurements performed with the compensation heat pulse (CHP) technique in field trees of six different species. To do so, a dedicated three-probe sensor capable of simultaneously applying both methods was produced and used. Experimental measurements in the six species showed an excellent agreement between SPHP and CHP outputs for moderate to high flow rates, confirming the applicability of the method. In relation to other sap flow methods, SPHP presents several significant advantages: it requires low power inputs, it uses technically simpler and potentially cheaper instrumentation, the physical damage to the tree is minimal and artefacts caused by incorrect probe spacing and alignment are removed. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  17. Crustal heat production and estimate of terrestrial heat flow in central East Antarctica, with implications for thermal input to the East Antarctic ice sheet

    Directory of Open Access Journals (Sweden)

    J. W. Goodge

    2018-02-01

    Full Text Available Terrestrial heat flow is a critical first-order factor governing the thermal condition and, therefore, mechanical stability of Antarctic ice sheets, yet heat flow across Antarctica is poorly known. Previous estimates of terrestrial heat flow in East Antarctica come from inversion of seismic and magnetic geophysical data, from modeling temperature profiles in ice boreholes, and from calculation based on heat production values reported for exposed bedrock. Although accurate estimates of surface heat flow are important as an input parameter for ice-sheet growth and stability models, there are no direct measurements of terrestrial heat flow in East Antarctica coupled to either subglacial sediment or bedrock. As has been done with bedrock exposed along coastal margins and in rare inland outcrops, valuable estimates of heat flow in central East Antarctica can be extrapolated from heat production determined by the geochemical composition of glacial rock clasts eroded from the continental interior. In this study, U, Th, and K concentrations in a suite of Proterozoic (1.2–2.0 Ga) granitoids sourced within the Byrd and Nimrod glacial drainages of central East Antarctica indicate an average upper crustal heat production (Ho) of about 2.6 ± 1.9 µW m−3. Assuming typical mantle and lower crustal heat flux for stable continental shields, and a length scale for the distribution of heat production in the upper crust, the heat production values determined for individual samples yield estimates of surface heat flow (qo) ranging from 33 to 84 mW m−2, with an average of 48.0 ± 13.6 mW m−2. Estimates of heat production obtained for this suite of glacially sourced granitoids therefore indicate that the interior of the East Antarctic ice sheet is underlain in part by Proterozoic continental lithosphere with an average surface heat flow, providing constraints on both geodynamic history and ice-sheet stability. The ages and geothermal
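    The quoted numbers are mutually consistent under the standard linear heat flow-heat production relation q0 = qr + D*H0; the reduced flux and length scale below are our assumed round values, not figures from the article.

        H0 = 2.6e-6     # W/m^3, average upper-crustal heat production
        D = 10e3        # m, assumed thickness scale of heat-producing layer
        qr = 22e-3      # W/m^2, assumed mantle + lower-crust (reduced) flux
        q0 = qr + D * H0
        print(q0 * 1e3) # ~48 mW/m^2, matching the reported average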

  18. Regularization and error estimates for asymmetric backward nonhomogeneous heat equations in a ball

    Directory of Open Access Journals (Sweden)

    Le Minh Triet

    2016-09-01

    Full Text Available The backward heat problem (BHP) has been researched by many authors in the last five decades; it consists in recovering the initial distribution from the final temperature data. There are some articles [1,2,3] related to the axi-symmetric BHP in a disk, but studies in spherical coordinates are rare. Therefore, we study a backward problem for the nonhomogeneous heat equation associated with asymmetric final data in a ball. In this article, we modify the quasi-boundary value method to construct a stable approximate solution for this problem. As a result, we obtain a regularized solution and sharp estimates for its error. At the end, a numerical experiment is provided to illustrate our method.

  19. Mixture Density Mercer Kernels: A Method to Learn Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  20. Best estimate approach for the evaluation of critical heat flux phenomenon in the boiling water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kaliatka, Tadas; Kaliatka, Algirdas; Uspuras, Eudenijus; Vaisnoras, Mindaugas [Lithuanian Energy Institute, Kaunas (Lithuania); Mochizuki, Hiroyasu; Rooijen, W.F.G. van [Fukui Univ. (Japan). Research Inst. of Nuclear Engineering

    2017-05-15

    Because of the uncertainties associated with the definition of Critical Heat Flux (CHF), a best estimate approach should be used. In this paper the application of the best-estimate approach for the analysis of the CHF phenomenon in boiling water reactors is presented. First, nodalizations of RBMK-1500, BWR-5 and ABWR fuel assemblies were developed using the RELAP5 code. Using the developed models, the CHF and Critical Heat Flux Ratio (CHFR) for the different types of reactors were evaluated. The CHF calculation results were compared with well-known experimental data for light water reactors. The uncertainty and sensitivity analysis of the ABWR 8 x 8 fuel assembly CHFR calculation results was performed using the GRS (Germany) methodology with the SUSA tool. Finally, the values of the Minimum Critical Power Ratio (MCPR) were calculated for the RBMK-1500, BWR-5 and ABWR fuel assemblies. The paper demonstrates how, using the results of the sensitivity analysis, to obtain MCPR values that cover all uncertainties while remaining best estimate.

  1. Resolving the Mantle Heat Transfer Discrepancy by Reassessing Buoyancy Flux Estimates of Upwelling Plumes

    Science.gov (United States)

    Hoggard, Mark; Parnell-Turner, Ross; White, Nicky

    2017-04-01

    The size and relative importance of mantle plumes is a controversial topic within the geodynamics community. Numerical experiments of mantle convection suggest a wide range of possible behaviours, from minor plumelets through to large-scale, whole-mantle upwellings. In terms of observations, recent seismic tomographic models have identified many large, broad plume-like features within the lower mantle. In contrast, existing estimates of buoyancy flux calculated from plume swells have suggested that these upwellings transfer a relatively minor amount of material and heat into the uppermost mantle. Here, we revisit these calculations of buoyancy flux using a global map of plume swells based upon new observations of dynamic topography. Usually, plume flux is calculated from the cross-sectional area of a swell multiplied by either plate velocity or spreading rate. A key assumption is that plume head material flows laterally at or below the velocity of the overriding plate. Published results are dominated by contributions from the Pacific Ocean and suggest that a total of ~2 TW of heat is carried by plumes into the uppermost mantle. An alternative approach exploits swell volume scaled by a characteristic decay time, which removes the reliance on plate velocities. The main assumption of this method is that plumes are in quasi-steady state. In this study, we have applied this volumetric approach in a new global analysis. Our results indicate that the Icelandic plume has a buoyancy flux of ~27 ± 4 Mg s−1 and the Hawaiian plume ~2.9 ± 0.6 Mg s−1. These revised values are consistent with independent geophysical constraints from the North Atlantic Ocean and Hawaii. All magmatic and amagmatic swells have been included, suggesting that the total heat flux carried to the base of the plates is ~10 ± 2 TW. This revised value is a five-fold increase compared with previous estimates and provides an improved match to published predictions of basal heat flux across the
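    The buoyancy-to-heat conversion implicit in such estimates is the standard scaling Q = cp*B/alpha; with typical mantle values (our assumptions, not figures from the abstract) it reproduces the quoted magnitudes.

        cp = 1250.0      # J/(kg*K), specific heat of mantle rock (assumed)
        alpha = 3.0e-5   # 1/K, thermal expansivity (assumed)

        def plume_heat_flux_tw(B_mg_per_s):
            # buoyancy flux B in Mg/s -> advected heat flux in TW
            return cp * (B_mg_per_s * 1.0e3) / alpha / 1.0e12

        print(plume_heat_flux_tw(27.0))    # Iceland: ~1.1 TW
        print(plume_heat_flux_tw(240.0))   # ~10 TW for a global ~240 Mg/s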

  2. Inverse heat conduction estimation of inner wall temperature fluctuations under turbulent penetration

    Science.gov (United States)

    Guo, Zhouchao; Lu, Tao; Liu, Bo

    2017-04-01

    Turbulent penetration can occur when hot and cold fluids mix in a horizontal T-junction pipe at nuclear plants. Caused by the unstable turbulent penetration, temperature fluctuations with large amplitude and high frequency can lead to time-varying wall thermal stress and even thermal fatigue on the inner wall. Numerous cases, however, exist where inner wall temperatures cannot be measured and only outer wall temperature measurements are feasible. Therefore, it is one of the popular research areas in nuclear science and engineering to estimate temperature fluctuations on the inner wall from measurements of outer wall temperatures without damaging the structure of the pipe. In this study, both the one-dimensional (1D) and the two-dimensional (2D) inverse heat conduction problem (IHCP) were solved to estimate the temperature fluctuations on the inner wall. First, numerical models of both the 1D and the 2D direct heat conduction problem (DHCP) were structured in MATLAB, based on the finite difference method with an implicit scheme. Second, both the 1D IHCP and the 2D IHCP were solved by the steepest descent method (SDM), and the DHCP results of temperatures on the outer wall were used to estimate the temperature fluctuations on the inner wall. Third, we compared the temperature fluctuations on the inner wall estimated by the 1D IHCP with those estimated by the 2D IHCP in four cases: (1) when the maximum disturbance of temperature of fluid inside the pipe was 3°C, (2) when the maximum disturbance of temperature of fluid inside the pipe was 30°C, (3) when the maximum disturbance of temperature of fluid inside the pipe was 160°C, and (4) when the fluid temperatures inside the pipe were random from 50°C to 210°C.
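    The DHCP building block is a standard implicit finite-difference march; the sketch below (illustrative geometry and properties, not the study's MATLAB code) advances a 1D through-wall temperature profile one backward-Euler step with prescribed wall temperatures.

        import numpy as np

        def step_implicit(T, alpha, dx, dt, T_inner, T_outer):
            # One backward-Euler step of T_t = alpha * T_xx with Dirichlet
            # temperatures at the inner and outer wall surfaces.
            n, r = T.size, alpha * dt / dx ** 2
            A = np.zeros((n, n))
            for i in range(n):
                A[i, i] = 1.0 + 2.0 * r
                if i > 0: A[i, i - 1] = -r
                if i < n - 1: A[i, i + 1] = -r
            b = T.copy()
            b[0] += r * T_inner      # fluid-side boundary temperature
            b[-1] += r * T_outer     # measured outer-wall temperature
            return np.linalg.solve(A, b)

        T = np.full(20, 150.0)       # initial through-wall profile, deg C
        for _ in range(100):
            T = step_implicit(T, alpha=4e-6, dx=1e-3, dt=0.05,
                              T_inner=200.0, T_outer=150.0)
        print(T[:5])                 # profile relaxing toward the hot side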

  3. Design And Performance Characteristics Of Palm Kernel Nuts Drier ...

    African Journals Online (AJOL)

    A cabinet drier with dimensions 0.82 m × 0.45 m × 0.52 m, having four trays and capable of drying 4 kg of palm kernel nuts per hour, was constructed. A control circuit to regulate the temperature of the heating chamber was installed in the appropriate parts of the drier. Using electrical heating, hot air is produced and allowed to ...

  4. Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels.

    Science.gov (United States)

    Jayasumana, Sadeep; Hartley, Richard; Salzmann, Mathieu; Li, Hongdong; Harandi, Mehrtash

    2015-12-01

    In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels.
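    As a concrete instance, a Gaussian RBF kernel on the manifold of symmetric positive definite (SPD) matrices can be built from the log-Euclidean distance, one of the metrics for which positive definiteness can be established; the bandwidth sigma below is a free tuning parameter.

        import numpy as np
        from scipy.linalg import logm

        def log_euclidean_rbf(X, Y, sigma=1.0):
            # k(X, Y) = exp(-||logm(X) - logm(Y)||_F^2 / (2 sigma^2))
            d2 = np.linalg.norm(logm(X) - logm(Y), "fro") ** 2
            return np.exp(-d2 / (2.0 * sigma ** 2))

        A = np.array([[2.0, 0.3], [0.3, 1.0]])   # SPD examples (e.g. region
        B = np.array([[1.5, 0.0], [0.0, 1.2]])   # covariance descriptors)
        print(log_euclidean_rbf(A, B))

    Such a kernel can be passed directly to kernel machines (SVMs, kernel PCA) in place of the Euclidean RBF.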

  5. NEAR SPICE KERNELS CRUISE3

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes the complete set of SPICE data for one NEAR mission phase in the form of SPICE kernels, which can be accessed using SPICE software available...

  6. Kernelized locality-sensitive hashing.

    Science.gov (United States)

    Kulis, Brian; Grauman, Kristen

    2012-06-01

    Fast retrieval methods are critical for many large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sublinear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. We validate our technique on several data sets, and show that it enables accurate and fast performance for several vision problems, including example-based object classification, local feature matching, and content-based retrieval.

  7. Notes on the gamma kernel

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as a shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green's function.
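    For concreteness, one common normalization of the gamma kernel and the associated Brownian semistationary velocity model reads (our notation):

        $$ g(t) = \frac{\lambda^{\nu}}{\Gamma(\nu)}\, t^{\nu - 1} e^{-\lambda t}, \quad t > 0, \qquad
           X(t) = \int_{-\infty}^{t} g(t - s)\, \sigma(s)\, dW(s), $$

    so the second order structure function inherits its small- and large-lag behaviour from the shape parameter $\nu$ and the damping $\lambda$.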

  8. Estimation and optimization of heat transfer and overall presure drop for a shell and tube heat exchanger

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Bala Bhaskara [Dept. of Mechanical Engineering, SISTAM College, JNTU, Kakinada (India); Raju, V. Ramachandra [Dept. of Mechanical Engineering, JNTU, Kakinada (India); Deepak, B. B V. L. [Dept. of Industrial Design, National Institute of Technology, Rourkela (India)

    2017-01-15

    Most thermal/chemical industries are equipped with heat exchangers to enhance thermal efficiency. The performance of heat exchangers depends strongly on design modifications on the tube side, such as the cross-sectional area, orientation, and baffle cut of the tube. However, these parameters exhibit no explicit relation for determining the optimum design condition for shell and tube heat exchangers with a maximum heat transfer rate and reduced pressure drops. Accordingly, experimental and numerical simulations were performed for a heat exchanger with varying tube geometries. The heat exchanger considered in this investigation is a single-shell, multiple-pass device. A generalized regression neural network (GRNN) is applied to generate a relation among the input and output process parameters for the experimental data sets. Then an artificial immune system (AIS) is used with the GRNN to obtain optimized input parameters. Lastly, results are presented for the developed hybrid GRNN-AIS approach.

  9. Estimated work ability in warm outdoor environments depends on the chosen heat stress assessment metric

    Science.gov (United States)

    Bröde, Peter; Fiala, Dusan; Lemke, Bruno; Kjellstrom, Tord

    2017-04-01

    With a view to occupational effects of climate change, we performed a simulation study on the influence of different heat stress assessment metrics on estimated workability (WA) of labour in warm outdoor environments. Whole-day shifts with varying workloads were simulated using as input meteorological records for the hottest month from four cities with prevailing hot (Dallas, New Delhi) or warm-humid conditions (Managua, Osaka), respectively. In addition, we considered the effects of adaptive strategies like shielding against solar radiation and different work-rest schedules assuming an acclimated person wearing light work clothes (0.6 clo). We assessed WA according to Wet Bulb Globe Temperature (WBGT) by means of an empirical relation of worker performance from field studies (Hothaps), and as allowed work hours using safety threshold limits proposed by the corresponding standards. Using the physiological models Predicted Heat Strain (PHS) and Universal Thermal Climate Index (UTCI)-Fiala, we calculated WA as the percentage of working hours with body core temperature and cumulated sweat loss below standard limits (38 °C and 7.5% of body weight, respectively) recommended by ISO 7933 and below conservative (38 °C; 3%) and liberal (38.2 °C; 7.5%) limits in comparison. ANOVA results showed that the different metrics, workload, time of day and climate type determined the largest part of WA variance. WBGT-based metrics were highly correlated and indicated slightly more constrained WA for moderate workload, but were less restrictive with high workload and for afternoon work hours compared to PHS and UTCI-Fiala. Though PHS showed unrealistic dynamic responses to rest from work compared to UTCI-Fiala, differences in WA assessed by the physiological models largely depended on the applied limit criteria. In conclusion, our study showed that the choice of the heat stress assessment metric impacts notably on the estimated WA. Whereas PHS and UTCI-Fiala can account for

  10. How Reliable Are Heat Pulse Velocity Methods for Estimating Tree Transpiration?

    Directory of Open Access Journals (Sweden)

    Michael A. Forster

    2017-09-01

    Full Text Available Transpiration is a significant component of the hydrologic cycle and its accurate quantification is critical for modelling, industry, and policy decisions. Sap flow sensors provide a low cost and practical method to measure transpiration. Various methods to measure sap flow are available and a popular family of methods is known as heat pulse velocity (HPV). The theory of thermal conductance and convection that underpins HPV methods suggests transpiration can be directly estimated from sensor measurements without the need for laborious calibrations. To test this accuracy, transpiration estimated from HPV sensors is compared with an independent measure of plant water use such as a weighing lysimeter. A meta-analysis of the literature that explicitly tested the accuracy of HPV sensors against an independent measure of transpiration was conducted. Data from linear regression analysis were collated, where an R2 of 1 indicates perfect precision and a slope of 1 of the linear regression curve indicates perfect accuracy. The average R2 and slope across all studies were 0.822 and 0.860, respectively. However, the overall error, or deviation from real transpiration values, was 34.706%. The results indicate that HPV sensors are precise in correlating heat velocity with rates of transpiration, but poor in quantifying transpiration. Various sources of error in converting heat velocity into sap velocity and sap flow are discussed, including probe misalignment, wound corrections, thermal diffusivity, stem water content, placement of sensors in sapwood, and scaling of point measurements to whole plants. Where whole plant water use or transpiration is required in a study, it is recommended that all sap flow sensors be calibrated against an independent measure of transpiration.

  11. Range Safety Application of Kernel Density Estimation

    Science.gov (United States)

    2010-01-01

    ...simulation of the missile, producing large sets of ground impacts for both nominal and off-nominal (i.e. failed) missile fly outs. One step in the proposed ... data, but it is not exhaustive and highlights that the prediction of non-diagonal bandwidth matrices is a challenging and potentially fruitful area of

  12. Estimates of error introduced when one-dimensional inverse heat transfer techniques are applied to multi-dimensional problems

    Energy Technology Data Exchange (ETDEWEB)

    Lopez, C.; Koski, J.A.; Razani, A.

    2000-01-06

    A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with similar dimensions as a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that was then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively.

  13. Boundary conditions for gas flow problems from anisotropic scattering kernels

    Science.gov (United States)

    To, Quy-Dong; Vu, Van-Huyen; Lauriat, Guy; Léonard, Céline

    2015-10-01

    The paper presents an interface model for gas flowing through a channel constituted of anisotropic wall surfaces. Using anisotropic scattering kernels and the Chapman-Enskog phase density, the boundary conditions (BCs) for velocity and temperature, and the discontinuities including velocity slip and temperature jump at the wall, are obtained. Two scattering kernels, the Dadzie and Méolans (DM) kernel and the generalized anisotropic Cercignani-Lampis (ACL) kernel, are examined in the present paper, yielding simple BCs at the wall-fluid interface. With these two kernels, we rigorously recover the analytical expression for orientation-dependent slip shown in our previous works [Pham et al., Phys. Rev. E 86, 051201 (2012) and To et al., J. Heat Transfer 137, 091002 (2015)], which is in good agreement with molecular dynamics simulation results. More importantly, our models include both the thermal transpiration effect and new equations for the temperature jump. While the same expression depending on the two tangential accommodation coefficients is obtained for the slip velocity, the DM and ACL temperature equations are significantly different. The derived BC equations associated with these two kernels are of interest for gas simulations since they are able to capture the direction-dependent slip behavior of anisotropic interfaces.

  14. A One-Source Approach for Estimating Land Surface Heat Fluxes Using Remotely Sensed Land Surface Temperature

    Directory of Open Access Journals (Sweden)

    Yongmin Yang

    2017-01-01

    Full Text Available The partitioning of available energy between sensible heat and latent heat is important for precise water resources planning and management in the context of global climate change. Land surface temperature (LST) is a key variable in the energy balance process, and remotely sensed LST is widely used for estimating surface heat fluxes at regional scale. However, the inequality between LST and aerodynamic surface temperature (Taero) poses a great challenge for regional heat flux estimation in one-source energy balance models. To address this issue, we proposed a One-Source Model for Land (OSML) to estimate regional surface heat fluxes without requirements for empirical extra resistance, roughness parameterization and wind velocity. The proposed OSML employs both the conceptual VFC/LST trapezoid model and the electrical analog formula of sensible heat flux (H) to analytically estimate the radiometric-convective resistance (rae) via a quartic equation. To evaluate the performance of OSML, the model was applied to the Soil Moisture-Atmosphere Coupling Experiment (SMACEX) in the United States and the Multi-Scale Observation Experiment on Evapotranspiration (MUSOEXE) in China, using remotely sensed retrievals as auxiliary data sets at regional scale. Validated against tower-based surface flux observations, the root mean square deviations (RMSD) of H and latent heat flux (LE) from OSML are 34.5 W/m2 and 46.5 W/m2 at the SMACEX site and 50.1 W/m2 and 67.0 W/m2 at the MUSOEXE site. The performance of OSML is very comparable to other published studies. In addition, the proposed OSML model demonstrates similar skill in predicting surface heat fluxes in comparison to SEBS (Surface Energy Balance System). Since OSML does not require specification of aerodynamic surface characteristics, roughness parameterization and meteorological conditions with high spatial variation such as wind speed, this proposed method shows high potential for routine acquisition of latent heat flux estimates.

  15. Learning Circulant Sensing Kernels

    Science.gov (United States)

    2014-03-01

    resolution OFDM channel estimation. Random convolutions can also be applied in some imaging systems in which convolutions either naturally arise or can be ... Compressive sensing based high resolution channel estimation for OFDM systems. To appear in IEEE Journal of Selected Topics in Signal Processing, Special ... matrices, Tropp et al. [28] describes a random filter for acquiring a signal x̄; Haupt et al. [12] describes a channel estimation problem to identify a

  16. Integral equations with contrasting kernels

    Directory of Open Access Journals (Sweden)

    Theodore Burton

    2008-01-01

    Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well behaved function.

  17. Eddy heat flux across the Antarctic Circumpolar Current estimated from sea surface height standard deviation

    Science.gov (United States)

    Foppert, Annie; Donohue, Kathleen A.; Watts, D. Randolph; Tracey, Karen L.

    2017-08-01

    Eddy heat flux (EHF) is a predominant mechanism for heat transport across the zonally unbounded mean flow of the Antarctic Circumpolar Current (ACC). Observations of dynamically relevant, divergent, 4 year mean EHF in Drake Passage from the cDrake project, as well as previous studies of atmospheric and oceanic storm tracks, motivate the use of sea surface height (SSH) standard deviation, H*, as a proxy for depth-integrated, downgradient, time-mean EHF, $[\overline{EHF}]$, in the ACC. Statistics from the Southern Ocean State Estimate corroborate this choice and validate throughout the ACC the spatial agreement between H* and $[\overline{EHF}]$ seen locally in Drake Passage. Eight regions of elevated $[\overline{EHF}]$ are identified from nearly 23.5 years of satellite altimetry data. Elevated cross-front exchange usually does not span the full latitudinal width of the ACC in each region, implying a hand-off of heat between ACC fronts and frontal zones as they encounter the different $[\overline{EHF}]$ hot spots along their circumpolar path. Integrated along circumpolar streamlines, defined by mean SSH contours, there is a convergence of $[\overline{EHF}]$ in the ACC: 1.06 PW enters from the north and 0.02 PW exits to the south. Temporal trends in low-frequency [EHF] are calculated in a running-mean sense using H* from overlapping 4 year subsets of SSH. Significant increases in downgradient [EHF] magnitude have occurred since 1993 at Kerguelen Plateau, Southeast Indian Ridge, and the Brazil-Malvinas Confluence, whereas the other five $[\overline{EHF}]$ hot spots have insignificant trends of varying sign.

  18. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. It also focuses on the theoretical derivation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, and also the feasibility of the kernel based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new

  19. Diabatic heating rate estimates from European Centre for Medium-Range Weather Forecasts analyses

    Science.gov (United States)

    Christy, John R.

    1991-01-01

    Vertically integrated diabatic heating rate estimates (H) calculated from 32 months of European Center for Medium-Range Weather Forecasts daily analyses (May 1985-December 1987) are determined as residuals of the thermodynamic equation in pressure coordinates. Values for global, hemispheric, zonal, and grid point H are given as they vary over the time period examined. The distribution of H is compared with previous results and with outgoing longwave radiation (OLR) measurements. The most significant negative correlations between H and OLR occur for (1) tropical and Northern-Hemisphere mid-latitude oceanic areas and (2) zonal and hemispheric mean values for periods less than 90 days. Largest positive correlations are seen in periods greater than 90 days for the Northern Hemispheric mean and continental areas of North Africa, North America, northern Asia, and Antarctica. The physical basis for these relationships is discussed. An interyear comparison between 1986 and 1987 reveals the ENSO signal.

  20. Simple equation for estimating actual evapotranspiration using heat units for wheat in arid regions

    Directory of Open Access Journals (Sweden)

    M.A. Salama

    2015-07-01

    Application of treatment (B) resulted in a highly significant increase in yield production of Gemmeza10 and Misr2 as compared to treatment (A). Grain yield of the different wheat varieties grown under treatment (B) could be ranked in the following descending order: Misr2 > Gemmeza10 > Sids12, while under treatment (A) it could be arranged in the following descending order: Misr2 > Sids12 > Gemmeza10. On the other hand, the overall means indicated a non-significant difference between all wheat varieties. The highest values of water and irrigation use efficiency as well as heat use efficiency were obtained with treatment (B). The equation used in the present study is suitable for estimating ETa under an arid climate with a drip irrigation system.

  1. Reconciling heat-flux and salt-flux estimates at a melting ice-ocean interface

    CERN Document Server

    Keitzl, Thomas; Notz, Dirk

    2016-01-01

    The ratio of heat and salt flux is employed in ice-ocean models to represent ice-ocean interactions. In this study, this flux ratio is determined from direct numerical simulations of free convection beneath a melting, horizontal, smooth ice-ocean interface. We find that the flux ratio at the interface is three times as large as previously assessed based on turbulent-flux measurements in the field. As a consequence, interface salinities and melt rates are overestimated by up to 40% if they are based on the three-equation formulation. We also find that the interface flux ratio depends only very weakly on the far-field conditions of the flow. Lastly, our simulations indicate that estimates of the interface flux ratio based on direct measurements of the turbulent fluxes will be difficult because at the interface the diffusivities alone determine the mixing and the flux ratio varies with depth. As an alternative, we present a consistent evaluation of the flux ratio based on the total heat and salt fluxes across t...

  2. Do the risk factors for type 2 diabetes mellitus vary by location? A spatial analysis of health insurance claims in Northeastern Germany using kernel density estimation and geographically weighted regression.

    Science.gov (United States)

    Kauhl, Boris; Schweikart, Jürgen; Krafft, Thomas; Keste, Andrea; Moskwyn, Marita

    2016-11-03

    The provision of general practitioners (GPs) in Germany still relies mainly on the ratio of inhabitants to GPs at relatively large scales and barely accounts for an increased prevalence of chronic diseases among the elderly and socially underprivileged populations. Type 2 Diabetes Mellitus (T2DM) is one of the major cost-intensive diseases with high rates of potentially preventable complications. Provision of healthcare and access to preventive measures is necessary to reduce the burden of T2DM. However, current studies on the spatial variation of T2DM in Germany are mostly based on survey data, which not only underestimate the true prevalence of T2DM, but are also only available on large spatial scales. The aim of this study is therefore to analyse the spatial distribution of T2DM at fine geographic scales and to assess location-specific risk factors based on data of the AOK health insurance. To display the spatial heterogeneity of T2DM, a bivariate, adaptive kernel density estimation (KDE) was applied. The spatial scan statistic (SaTScan) was used to detect areas of high risk. Global and local spatial regression models were then constructed to analyze socio-demographic risk factors of T2DM. T2DM is especially concentrated in rural areas surrounding Berlin. The risk factors for T2DM consist of proportions of 65-79 year olds, 80+ year olds, unemployment rate among the 55-65 year olds, proportion of employees covered by mandatory social security insurance, mean income tax, and proportion of non-married couples. However, the strength of the association between T2DM and the examined socio-demographic variables displayed strong regional variations. The prevalence of T2DM varies at the very local level. Analyzing point data on T2DM of northeastern Germany's largest health insurance provider thus allows very detailed, location-specific knowledge about increased medical needs. Risk factors associated with T2DM depend largely on the place of residence of the
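    The mapping idea can be sketched as a ratio of two kernel density surfaces, cases over population; the minimal example below uses a fixed bandwidth as a stand-in for the study's adaptive bandwidth, and synthetic coordinates rather than insurance data.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(1)
        population = rng.normal(0.0, 1.0, size=(2, 500))    # (x, y) residences
        cases = population[:, rng.random(500) < 0.1]        # synthetic T2DM

        kde_pop, kde_cases = gaussian_kde(population), gaussian_kde(cases)

        grid = np.mgrid[-2:2:50j, -2:2:50j].reshape(2, -1)
        prevalence = kde_cases(grid) / np.maximum(kde_pop(grid), 1e-12)
        print(prevalence.reshape(50, 50).max())             # peak relative risk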

  3. Do the risk factors for type 2 diabetes mellitus vary by location? A spatial analysis of health insurance claims in Northeastern Germany using kernel density estimation and geographically weighted regression

    Directory of Open Access Journals (Sweden)

    Boris Kauhl

    2016-11-01

    Full Text Available Abstract Background The provision of general practitioners (GPs) in Germany still relies mainly on the ratio of inhabitants to GPs at relatively large scales and barely accounts for an increased prevalence of chronic diseases among the elderly and socially underprivileged populations. Type 2 Diabetes Mellitus (T2DM) is one of the major cost-intensive diseases with high rates of potentially preventable complications. Provision of healthcare and access to preventive measures is necessary to reduce the burden of T2DM. However, current studies on the spatial variation of T2DM in Germany are mostly based on survey data, which not only underestimate the true prevalence of T2DM, but are also only available on large spatial scales. The aim of this study is therefore to analyse the spatial distribution of T2DM at fine geographic scales and to assess location-specific risk factors based on data of the AOK health insurance. Methods To display the spatial heterogeneity of T2DM, a bivariate, adaptive kernel density estimation (KDE) was applied. The spatial scan statistic (SaTScan) was used to detect areas of high risk. Global and local spatial regression models were then constructed to analyze socio-demographic risk factors of T2DM. Results T2DM is especially concentrated in rural areas surrounding Berlin. The risk factors for T2DM consist of proportions of 65–79 year olds, 80+ year olds, unemployment rate among the 55–65 year olds, proportion of employees covered by mandatory social security insurance, mean income tax, and proportion of non-married couples. However, the strength of the association between T2DM and the examined socio-demographic variables displayed strong regional variations. Conclusion The prevalence of T2DM varies at the very local level. Analyzing point data on T2DM of northeastern Germany's largest health insurance provider thus allows very detailed, location-specific knowledge about increased medical needs. Risk factors

  4. Prediction of nonlinear time series by kernel regression smoothing

    NARCIS (Netherlands)

    Borovkova, S; Burton, R; Dehling, H; Prochazka, A; Uhlir, J; Sovka, P

    1997-01-01

    We address the problem of prediction of nonlinear time series by kernel estimation of autoregression, and introduce a variation of this method. We apply this method to an experimental time series and compare its performance with predictions by feed-forward neural networks as well as with fitting a
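    A minimal sketch of kernel autoregression in the Nadaraya-Watson form (our simplification, not the paper's variation): the next value is predicted as a kernel-weighted average of the historical successors of similar lag vectors, with bandwidth h as a tuning assumption.

        import numpy as np

        def nw_predict(series, p=2, h=0.5):
            # Predict the value following series[-1] from its last p values.
            X = np.array([series[i:i + p] for i in range(len(series) - p)])
            y = np.asarray(series[p:])
            query = np.asarray(series[-p:])
            w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2.0 * h ** 2))
            return np.sum(w * y) / np.sum(w)

        t = np.arange(300)
        series = np.sin(0.2 * t) + 0.05 * np.random.default_rng(2).normal(size=300)
        print(nw_predict(series), np.sin(0.2 * 300))   # forecast vs clean signal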

  5. Simultaneous estimation of strength and position of a heat source in a participating medium using DE algorithm

    Science.gov (United States)

    Parwani, Ajit K.; Talukdar, Prabal; Subbarao, P. M. V.

    2013-09-01

    An inverse heat transfer problem is discussed to estimate simultaneously the unknown position and timewise varying strength of a heat source by utilizing a differential evolution approach. A two dimensional enclosure with isothermal and black boundaries containing a non-scattering, absorbing and emitting gray medium is considered. Both radiation and conduction heat transfer are included. No prior information is used for the functional form of the timewise varying strength of the heat source. The finite volume method is used to solve the radiative transfer equation and the energy equation. In this work, instead of measured data, some temperature data required in the solution of the inverse problem are taken from the solution of the direct problem. The effect of measurement errors on the accuracy of estimation is examined by introducing errors into the temperature data of the direct problem. The prediction of source strength and its position by the differential evolution (DE) algorithm is found to be quite reasonable.
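    The optimization layer can be sketched with an off-the-shelf differential evolution routine; the toy forward model below (a Gaussian source footprint read at a line of sensors) stands in for the paper's coupled conduction-radiation solver, and recovers a position and a constant strength.

        import numpy as np
        from scipy.optimize import differential_evolution

        sensors = np.linspace(0.0, 1.0, 8)

        def forward(pos, strength):
            # Toy steady response of the medium to a localized source.
            return strength * np.exp(-((sensors - pos) ** 2) / 0.02)

        measured = forward(0.63, 5.0) \
                   + np.random.default_rng(3).normal(0.0, 0.01, 8)

        def misfit(params):
            return np.sum((forward(*params) - measured) ** 2)

        res = differential_evolution(misfit, bounds=[(0.0, 1.0), (0.0, 10.0)],
                                     seed=0)
        print(res.x)     # close to the true (0.63, 5.0)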

  6. Uncertainties in the estimation of specific absorption rate during radiofrequency alternating magnetic field induced non-adiabatic heating of ferrofluids

    Science.gov (United States)

    Lahiri, B. B.; Ranoo, Surojit; Philip, John

    2017-11-01

    Magnetic fluid hyperthermia (MFH) is becoming a viable cancer treatment methodology in which the alternating magnetic field induced heating of a magnetic fluid is utilized for ablating cancerous cells or making them more susceptible to conventional treatments. The heating efficiency in MFH is quantified in terms of the specific absorption rate (SAR), which is defined as the heating power generated per unit mass. In the majority of experimental studies, SAR is evaluated from temperature rise curves obtained under non-adiabatic experimental conditions, which is prone to various thermodynamic uncertainties. A proper understanding of the experimental uncertainties and their remedies is a prerequisite for obtaining accurate and reproducible SAR. Here, we study the thermodynamic uncertainties associated with peripheral heating, delayed heating, heat loss from the sample and spatial variation in the temperature profile within the sample. Using first order approximations, an adiabatic reconstruction protocol for the measured temperature rise curves is developed for SAR estimation, which is found to be in good agreement with the computationally intensive slope corrected method. Our experimental findings clearly show that the peripheral and delayed heating are due to radiation heat transfer from the heating coils and the slower response time of the sensor, respectively. Our results suggest that the peripheral heating is linearly proportional to the sample area to volume ratio and the coil temperature. It is also observed that peripheral heating decreases in the presence of a non-magnetic insulating shield. The delayed heating is found to contribute up to ~25% uncertainty in SAR values. As the SAR values are very sensitive to the initial slope determination method, explicit mention of the range used for linear regression analysis is needed to reproduce the results. The effect of the sample volume to area ratio on the linear heat loss rate is systematically studied and the
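    The quantity under scrutiny is usually computed from the initial slope of the temperature rise curve, as in the sketch below (a generic formula with illustrative numbers; as the abstract notes, the fitting window must be stated explicitly).

        import numpy as np

        def sar_initial_slope(t, T, ms, cs, m_np, window=10):
            # t (s), T (K): measured rise curve; ms, cs: sample mass (kg) and
            # specific heat (J/(kg*K)); m_np: magnetic nanoparticle mass (kg).
            slope = np.polyfit(t[:window], T[:window], 1)[0]   # K/s
            return ms * cs * slope / m_np                      # W/kg particles

        t = np.arange(0.0, 60.0, 1.0)
        T = 300.0 + 0.05 * t - 2.0e-4 * t ** 2    # synthetic curve, heat loss
        print(sar_initial_slope(t, T, ms=1e-3, cs=4186.0, m_np=1e-5))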

  7. Estimation of soil heat flux in a neotropical Wetland region using remote sensing techniques

    Directory of Open Access Journals (Sweden)

    Victor Hugo de Morais Danelichen

    2014-12-01

    Full Text Available Direct estimation of the soil heat flux (G) from remote sensing data is not possible. Instead, several empirical models have been proposed that relate measured G to biophysical parameters of various vegetated and non-vegetated cover types at different places on Earth. The objective of this study was therefore to evaluate the relation between the G/Rn ratio and biophysical variables obtained by satellite sensors, and to evaluate the parameterization of different models to estimate G spatially at three sites with different soil cover types. Net radiation (Rn) and G were measured directly in two pastures, at Miranda Farm and Experimental Farm, and in the Monodominant Forest of Cambará. Rn, G, the G/Rn ratio and MODIS products such as albedo (α), surface temperature (LST), vegetation index (NDVI) and leaf area index (LAI) varied seasonally at all sites and between sites. The sites differed from each other in the relation between measurements of Rn, G and the G/Rn ratio and the biophysical parameters. Among the original models, the model proposed by Bastiaanssen (1995) showed the best performance, with r = 0.76, d = 0.95, MAE = 5.70 W m-2 and RMSE = 33.68 W m-2. For the reparameterized models, the correlation coefficients showed no significant change, but the Willmott coefficient (d) increased and the MAE and RMSE decreased slightly.
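    For reference, a widely cited SEBAL-type parameterization of the G/Rn ratio after Bastiaanssen takes the form sketched below (our transcription from the SEBAL literature; Ts in deg C, alb the broadband albedo, ndvi the NDVI):

        def g_over_rn(Ts, alb, ndvi):
            return (Ts / alb) * (0.0038 * alb + 0.0074 * alb ** 2) \
                   * (1.0 - 0.98 * ndvi ** 4)

        Rn = 520.0                                    # W/m^2, example value
        G = g_over_rn(Ts=32.0, alb=0.18, ndvi=0.65) * Rn
        print(G)                                      # ~70 W/m^2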

  8. An inverse problem of estimating the heat source in tapered optical fibers for scanning near-field optical microscopy.

    Science.gov (United States)

    Lee, Haw-Long; Chang, Win-Jin; Chen, Wen-Lih; Yang, Yu-Ching

    2007-08-01

    A conjugate gradient method based inverse algorithm is applied in this study to estimate the unknown space- and time-dependent heat source in aluminum-coated tapered optical fibers for scanning near-field optical microscopy, by reading the transient temperature data at the measurement positions. No prior information is available on the functional form of the unknown heat source in the present study; thus, it is classified as a function estimation problem in inverse calculation. The accuracy of the inverse analysis is examined by using simulated exact and inexact temperature measurements. Results show that an excellent estimation of the heat source and temperature distributions in the tapered optical fiber can be obtained for all the test cases considered in this study.

  9. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide variety of sensitivity analyses in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. Through the proposed derivation, the Sobol indices can be estimated by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel and the local characteristic advantage of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
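
    A minimal sketch of the mixed-kernel idea follows: a convex combination of a polynomial kernel (global behavior) and a Gaussian RBF kernel (local behavior) remains a valid Mercer kernel. A standard inhomogeneous polynomial kernel stands in here for the orthogonal-polynomial kernel of the paper, and the weight, degree, and bandwidth are illustrative assumptions.

      # Sketch of a mixed kernel: convex combination of polynomial and RBF parts.
      import numpy as np

      def mixed_kernel(X1, X2, w=0.5, degree=3, gamma=1.0):
          poly = (X1 @ X2.T + 1.0) ** degree                    # global part
          sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
          rbf = np.exp(-gamma * sq)                             # local part
          # convex combinations of valid (Mercer) kernels remain valid
          return w * poly + (1.0 - w) * rbf

      X = np.random.default_rng(1).standard_normal((5, 2))
      K = mixed_kernel(X, X)
      print(np.all(np.linalg.eigvalsh(K) > -1e-10))   # positive semi-definite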

  10. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of a variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  11. Vertical Heat Flux in the Ocean: Estimates from Observations, and Comparisons with a Coupled General Circulation Model

    Science.gov (United States)

    Cummins, P. F.; Masson, D.; Saenko, O.

    2016-02-01

    The net heat uptake by the ocean in a changing climate involves small imbalances between the advective and diffusive processes that transport heat vertically. Generally, it is necessary to rely on global climate models to study these processes in detail. In the present study, it is shown that a key component of the vertical heat flux, namely that associated with the large-scale mean vertical circulation, can be diagnosed over extra-tropical regions from global observational data sets. This component is estimated based on the vertical velocity obtained from the geostrophic vorticity balance, combined with estimates of the absolute geostrophic flow. Results are compared with a non-eddy-resolving, coupled atmosphere-ocean general circulation model. This shows reasonable agreement in the latitudinal distribution of the heat flux, along with the net integrated vertical heat flux below about 300 m depth. The mean vertical heat flux is shown to be dominated by the downward contribution from the southern hemisphere and, in particular, the Southern Ocean. This is driven by the Ekman vertical velocity, which induces an upward transport of seawater that is cold relative to the lateral average at a given depth. The correspondence with the coupled model breaks down at depths shallower than 300 m due to the dominant contribution of equatorial regions, which have been excluded from the calculation. It appears that the vertical transport of heat by the large-scale mean circulation is consistent with simple linear vorticity dynamics over much of the ocean.
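
    The balance invoked above is commonly written as follows; this is a sketch of the standard relations with the usual notation (β the meridional gradient of the Coriolis parameter f, v the meridional geostrophic velocity, ρ0 and cp reference density and heat capacity), not necessarily the authors' exact formulation:

      % Linear (geostrophic) vorticity balance and the implied mean vertical heat flux
      \beta v = f \,\frac{\partial w}{\partial z}
      \quad\Longrightarrow\quad
      w(z) = w(-H) + \frac{\beta}{f}\int_{-H}^{z} v \, dz' ,
      \qquad
      Q_w(z) = \rho_0\, c_p\, \overline{w\,\theta^{*}} ,

    where θ* denotes the temperature anomaly relative to the lateral average at depth z, so that upward motion of relatively cold water corresponds to a downward heat flux.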

  12. A three-dimensional inverse problem in estimating the internal heat flux of housing for high speed motors

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Cheng-Hung; Lo, Hung-Chi [Department of Systems and Naval Mechatronic Engineering, National Cheng Kung University, 1 Ta-Hsueh Road, Tainan 701, (Taiwan)

    2006-10-15

    The time-dependent heat fluxes generated in the rotor and stator of a high-speed electric motor are determined in this three-dimensional inverse heat conduction problem. The inverse algorithm utilizing the Steepest Descent Method (SDM) and the general-purpose commercial code CFX4.4 is applied successfully in the present study, in accordance with simulated measured temperature distributions on appropriate exterior surfaces. Cooling systems cannot be designed before the heat fluxes are estimated and identified. Two different functional forms for the heat fluxes, with different temperature measurement errors, are used in the numerical experiments to illustrate the validity of the inverse algorithm. Results of the numerical simulation show that, due to the structure of the cooling passages in the motor housing, the estimated heat flux lying under the cooling passages is not accurate. However, when the concept of effective heat flux is applied, a reliable time-dependent heat flux can be obtained by using the present inverse algorithm. (author)

  13. Heat

    CERN Document Server

    Lawrence, Ellen

    2016-01-01

    Is it possible to make heat by rubbing your hands together? Why does an ice cube melt when you hold it? In this title, students will conduct experiments to help them understand what heat is. Kids will also investigate concepts such as which materials are good at conducting heat and which are the best insulators. Using everyday items that can easily be found around the house, students will transform into scientists as they carry out step-by-step experiments to answer interesting questions. Along the way, children will pick up important scientific skills. Heat includes seven experiments with detailed, age-appropriate instructions, surprising facts and background information, a "conclusions" section to pull all the concepts in the book together, and a glossary of science words. Colorful, dynamic designs and images truly put the FUN into FUN-damental Experiments.

  14. Bayesian multimodel estimation of global terrestrial latent heat flux from eddy covariance, meteorological, and satellite observations

    Science.gov (United States)

    Yao, Yunjun; Liang, Shunlin; Li, Xianglan; Hong, Yang; Fisher, Joshua B.; Zhang, Nannan; Chen, Jiquan; Cheng, Jie; Zhao, Shaohua; Zhang, Xiaotong; Jiang, Bo; Sun, Liang; Jia, Kun; Wang, Kaicun; Chen, Yang; Mu, Qiaozhen; Feng, Fei

    2014-04-01

    Accurate estimation of the satellite-based global terrestrial latent heat flux (LE) at high spatial and temporal scales remains a major challenge. In this study, we introduce a Bayesian model averaging (BMA) method to improve satellite-based global terrestrial LE estimation by merging five process-based algorithms. These are the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product algorithm, the revised remote-sensing-based Penman-Monteith LE algorithm, the Priestley-Taylor-based LE algorithm, the modified satellite-based Priestley-Taylor LE algorithm, and the semi-empirical Penman LE algorithm. We validated the BMA method using data for 2000-2009 and by comparison with a simple model averaging (SA) method and the five process-based algorithms. Validation data were collected from 240 globally distributed eddy covariance tower sites provided by FLUXNET projects. The validation results demonstrate that the five process-based algorithms have variable uncertainty and that the BMA method enhances the daily LE estimates, with smaller root mean square errors (RMSEs) than the SA method and the individual algorithms driven by tower-specific meteorology and by Modern Era Retrospective Analysis for Research and Applications (MERRA) meteorological data provided by the NASA Global Modeling and Assimilation Office (GMAO), respectively. The average RMSE for the BMA method driven by daily tower-specific meteorology decreased by more than 5 W/m2 for crop and grass sites, and by more than 6 W/m2 for forest, shrub, and savanna sites. The average coefficients of determination (R2) increased by approximately 0.05 for most sites. To test the BMA method for regional mapping, we applied it to MODIS data and GMAO-MERRA meteorology to map annual global terrestrial LE averaged over 2001-2004 at a spatial resolution of 0.05°. The BMA method provides a basis for generating a long-term global terrestrial LE product for characterizing global energy, hydrological, and carbon cycles.
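
    Schematically, BMA merges the candidate algorithms as a weighted average with weights given by posterior model probabilities. The sketch below uses a simple Gaussian-likelihood weighting with a uniform model prior; the actual BMA training (typically EM-based) and all numbers are assumptions for illustration.

      # Minimal sketch of merging several LE algorithms by model averaging.
      import numpy as np

      def bma_weights(preds, obs, sigma):
          """preds: (n_models, n_obs) training predictions; obs: (n_obs,)."""
          # log-likelihood of the observations under each model
          ll = -0.5 * np.sum((preds - obs) ** 2, axis=1) / sigma ** 2
          w = np.exp(ll - ll.max())        # numerically stabilised weights
          return w / w.sum()               # uniform prior over models

      preds = np.array([[80., 95., 110.], [85., 100., 120.], [70., 90., 105.]])
      obs = np.array([82., 97., 112.])     # synthetic tower LE [W m-2]
      w = bma_weights(preds, obs, sigma=5.0)
      print(w, w @ preds)                  # weights and merged LE estimates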

  15. for palm kernel oil extraction

    African Journals Online (AJOL)

    user

    The oil could be used as a lubricant and an emulsifier [6]. It is an ingredient in paint making as a drying base, and in the manufacture of candles and soaps [6, 7]. ...

  16. Accelerating the Original Profile Kernel.

    Science.gov (United States)

    Hamp, Tobias; Goldberg, Tatyana; Rost, Burkhard

    2013-01-01

    One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical applications of large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a maximal speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, making the kernel possibly the top contender when speed and performance are weighed together. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel.

  17. Accelerating the Original Profile Kernel.

    Directory of Open Access Journals (Sweden)

    Tobias Hamp

    Full Text Available One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical applications of large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a maximal speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, making the kernel possibly the top contender when speed and performance are weighed together. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel.

  18. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...

  19. Veto-Consensus Multiple Kernel Learning

    NARCIS (Netherlands)

    Zhou, Y.; Hu, N.; Spanos, C.J.

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The

  20. Near-Real Time Altimeter-Derived Estimates of Hurricane Heat Potential

    Science.gov (United States)

    Goni, G. J.; Black, P. G.; Cione, J. J.; Mainelli, M.; Trinanes, J. A.

    2001-12-01

    layer thickness, which is defined to extend from the surface to the depth of the 20 °C isotherm. Although there are many factors controlling the sea height anomaly, it is assumed here that most of its variability is due to changes in the thickness of the upper layer and to steric and barotropic effects. The thermal profiles are then constructed using near-real-time altimeter-derived upper layer thickness from three altimeters by NOAA/NESDIS, along with sea surface temperature fields. Estimates of this parameter are posted daily during hurricane season to help forecasters and scientists identify regions of high hurricane heat potential and possible hurricane intensification.

  1. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    Full Text Available Motivated by the strong correlation between the kernel function of an extreme learning machine (ELM) and its performance, a novel extreme learning machine based on a generalized triangle Hermitian kernel function is proposed in this paper. First, the generalized triangle Hermitian kernel function is constructed as the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel function is proved to be a valid kernel function for extreme learning machines. Then, the learning methodology of the extreme learning machine based on the proposed kernel function is presented. The biggest advantage of the proposed kernel is that its kernel parameter values are chosen only from the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the sample data structure information. Experiments were performed on a number of binary classification, multiclass classification, and regression datasets from the UCI benchmark repository. The experimental results demonstrate that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
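
    For orientation, a kernel ELM of this family is trained in closed form: with kernel matrix K, regularization constant C, and targets T, the output weights solve (I/C + K)β = T. In the sketch below an RBF kernel stands in for the paper's triangular-Hermite product kernel, and C and γ are illustrative assumptions.

      # Sketch of closed-form kernel ELM training and prediction.
      import numpy as np

      def rbf(X1, X2, gamma=1.0):
          sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * sq)

      rng = np.random.default_rng(2)
      X = rng.standard_normal((40, 2))
      T = np.sign(X[:, 0] * X[:, 1])                # toy binary targets
      C = 10.0
      K = rbf(X, X)
      beta = np.linalg.solve(np.eye(len(X)) / C + K, T)   # output weights
      pred = np.sign(K @ beta)                      # training-set predictions
      print((pred == T).mean())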

  2. Propagation of Uncertainty in Bayesian Kernel Models - Application to Multiple-Step Ahead Forecasting

    DEFF Research Database (Denmark)

    Quinonero, Joaquin; Girard, Agathe; Larsen, Jan

    2003-01-01

    The object of Bayesian modelling is the predictive distribution, which, in a forecasting scenario, enables evaluation of forecasted values and their uncertainties. We focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel based models...
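
    The predictive mean and variance of a Gaussian-process-type Bayesian kernel model, the quantities such multi-step-ahead schemes propagate, take the standard form sketched below; the kernel, noise level, and data are illustrative, and the propagation of input uncertainty itself is omitted.

      # Standard Gaussian-process predictive mean and variance at a test input.
      import numpy as np

      def rbf(a, b, ell=1.0):
          return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

      rng = np.random.default_rng(3)
      x = np.linspace(0, 6, 30)
      y = np.sin(x) + 0.1 * rng.standard_normal(30)
      xs = np.array([6.5])                       # one-step-ahead input

      sn2 = 0.01                                 # noise variance
      K = rbf(x, x) + sn2 * np.eye(len(x))
      alpha = np.linalg.solve(K, y)
      ks = rbf(x, xs)
      mean = ks.T @ alpha                                        # predictive mean
      var = rbf(xs, xs) + sn2 - ks.T @ np.linalg.solve(K, ks)    # predictive variance
      print(mean.item(), var.item())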

  3. Digestibility of solvent-treated Jatropha curcas kernel by broiler chickens in Senegal.

    Science.gov (United States)

    Nesseim, Thierry Daniel Tamsir; Dieng, Abdoulaye; Mergeai, Guy; Ndiaye, Saliou; Hornick, Jean-Luc

    2015-12-01

    Jatropha curcas is a drought-resistant shrub belonging to the Euphorbiaceae family. The kernel contains approximately 60 % lipid in dry matter, and the meal obtained after oil extraction could be an exceptional source of protein for family poultry farming, in the absence of curcin and, especially, of the partially lipophilic diterpene derivatives known as phorbol esters. The nutrient digestibility of J. curcas kernel meal (JKM), obtained after partial physicochemical deoiling, was thus evaluated in broiler chickens. Twenty broiler chickens, 6 weeks old, were maintained in individual metabolic cages and divided into four groups of five animals, according to a 4 × 4 Latin square design where deoiled JKM was incorporated into ground corn at 0, 4, 8, and 12 % levels (diets 0, 4, 8, and 12 J), allowing measurement of nutrient digestibility by the differential method. The dry matter (DM) and organic matter (OM) digestibility of the diets was affected to a low extent by JKM (85 and 86 % in 0 J and 81 % in 12 J, respectively), such that the DM and OM digestibility of JKM was estimated to be close to 50 %. The ether extract (EE) digestibility of JKM remained high, at about 90 %, while crude protein (CP) and crude fiber (CF) digestibility were largely impacted by JKM, with values close to 40 % at the highest levels of incorporation. J. curcas kernel presents variable nutrient digestibilities and has adverse effects on the CP and CF digestibility of the diet. The effects of an additional heat or biological treatment on JKM remain to be assessed.

  4. Temperature based validation of the analytical model for the estimation of the amount of heat generated during friction stir welding

    Directory of Open Access Journals (Sweden)

    Milčić Dragan S.

    2012-01-01

    Full Text Available Friction stir welding is a solid-state welding technique that utilizes the thermomechanical influence of a rotating welding tool on the parent material, resulting in a monolithic joint - the weld. At the contact between the welding tool and the parent material, significant stirring and deformation of the parent material occurs, and during this process mechanical energy is partially transformed into heat. The generated heat affects the temperature of the welding tool and the parent material, so the proposed analytical model for estimating the amount of generated heat can be verified through temperature: the analytically determined heat is used for a numerical estimate of the temperature of the parent material, and this temperature is compared to the experimentally determined temperature. The numerical solution is obtained using the finite difference method - an explicit scheme with an adaptive grid, considering the influence of temperature on the material's conductivity, contact conditions between the welding tool and the parent material, material flow around the welding tool, etc. The analytical model shows that 60-100% of the mechanical power delivered to the welding tool is transformed into heat, while the comparison of results shows a maximal relative difference between the analytical and experimental temperatures of about 10%.
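
    A greatly simplified version of such an explicit finite-difference scheme, for a one-dimensional bar with constant properties and a crude heat input at the tool position, is sketched below; the adaptive grid, temperature-dependent conductivity, contact conditions, and material flow of the paper are all omitted.

      # One-dimensional explicit finite-difference heat conduction sketch.
      import numpy as np

      L, nx, nt = 0.1, 51, 2000        # bar length [m], grid points, time steps
      alpha = 1e-5                     # thermal diffusivity [m^2/s]
      dx = L / (nx - 1)
      dt = 0.4 * dx * dx / alpha       # satisfies stability limit dt <= dx^2/(2*alpha)

      T = np.full(nx, 20.0)            # initial temperature [C]
      for _ in range(nt):
          T[nx // 2] += 50.0 * dt      # crude heat input at the tool position [K/s]
          Tn = T.copy()
          Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
          Tn[0] = Tn[-1] = 20.0        # fixed-temperature boundaries
          T = Tn
      print(T.max())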

  5. Delimiting Areas of Endemism through Kernel Interpolation

    Science.gov (United States)

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE was demonstrated to be effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971

  6. Estimating the workpiece-backingplate heat transfer coefficient in friction stir welding

    DEFF Research Database (Denmark)

    Larsen, Anders; Stolpe, Mathias; Hattel, Jesper Henri

    2012-01-01

    Purpose - The purpose of this paper is to determine the magnitude and spatial distribution of the heat transfer coefficient between the workpiece and the backingplate in a friction stir welding process using inverse modelling. Design/methodology/approach - The magnitude and distribution of the heat ... yields optimal values for the magnitude and distribution of the heat transfer coefficient. Findings - It is found that the heat transfer coefficient between the workpiece and the backingplate is non-uniform and takes its maximum value in a region below the welding tool. Four different parameterisations ... in an inverse modelling approach to determine the heat transfer coefficient in friction stir welding.

  7. Input Space Regularization Stabilizes Pre-images for Kernel PCA De-noising

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2009-01-01

    Solution of the pre-image problem is key to efficient nonlinear de-noising using kernel Principal Component Analysis. Pre-image estimation is inherently ill-posed for typical kernels used in applications and consequently the most widely used estimation schemes lack stability. For de-noising applications ... the mapping is non-linear; however, by applying a simple input space distance regularizer we can reduce variability with very limited sacrifice in terms of de-noising efficiency.

  8. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.

  9. An Assessment of Transport Property Estimation Methods for Ammonia–Water Mixtures and Their Influence on Heat Exchanger Size

    DEFF Research Database (Denmark)

    Kærn, Martin Ryhl; Modi, Anish; Jensen, Jonas Kjær

    2015-01-01

    Transport properties of fluids are indispensable for heat exchanger design. The methods for estimating the transport properties of ammonia-water mixtures are not well established in the literature. The few existing methods are developed from no or limited, sometimes inconsistent, experimental data ... of ammonia-water mixtures. Firstly, the different methods are introduced and compared at various temperatures and pressures. Secondly, their individual influence on the required heat exchanger size (surface area) is investigated. For this purpose, two case studies related to the use of the Kalina cycle ... the interpolative methods in contrast to the corresponding-states methods. Nevertheless, all possible mixture transport property combinations used herein resulted in a heat exchanger size within 4.3 % difference for the flue-gas heat recovery boiler, and within 12.3 % difference for the oil-based boiler.

  10. Comparison of sensible heat flux estimates using AVHRR with scintillometer measurements over semi-arid grassland in northwest Mexico

    NARCIS (Netherlands)

    Watts, C.J.; Chehbouni, A.; Rodriguez, J.C.; Kerr, Y.H.; Hartogensis, O.K.; Bruin, de H.A.R.

    2000-01-01

    The problems associated with the validation of satellite-derived estimates of the surface fluxes are discussed and the possibility of using the large aperture scintillometer is investigated. Simple models are described to derive surface temperature and sensible heat flux from the advanced very high resolution radiometer (AVHRR).

  11. Testing Infrastructure for Operating System Kernel Development

    DEFF Research Database (Denmark)

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge, as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel.

  12. Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods

    DEFF Research Database (Denmark)

    Arenas-Garcia, J.; Petersen, K.; Camps-Valls, G.

    2013-01-01

    correlation analysis (CCA), and orthonormalized PLS (OPLS), as well as their nonlinear extensions derived by means of the theory of reproducing kernel Hilbert spaces (RKHSs). We also review their connections to other methods for classification and statistical dependence estimation and introduce some recent developments to deal with the extreme cases of large-scale and low-sized problems. To illustrate the wide applicability of these methods in both classification and regression problems, we analyze their performance in a benchmark of publicly available data sets and pay special attention to specific real...

  13. Autonomic function assessment in Parkinson's disease patients using the kernel method and entrainment techniques.

    Science.gov (United States)

    Kamal, Ahmed K

    2007-01-01

    The experimental procedure of lowering and raising a leg while the subject is in the supine position is used to stimulate and entrain the autonomic nervous system of fifteen untreated patients with Parkinson's disease and fifteen age- and sex-matched control subjects. The assessment of autonomic function for each group is achieved using an algorithm based on Volterra kernel estimation. By applying this algorithm and considering the process of lowering and raising a leg as the stimulus input and the heart rate variability (HRV) signal as the output for system identification, a mathematical model is expressed as integral equations. The form of the integral equations is fixed for both control subjects and Parkinson's disease patients, so that the identification method reduces to determining the values within the integrals, called kernels, resulting in integral equations whose input-output behavior is nearly identical to that of the system in both healthy subjects and Parkinson's disease patients. The model for each group contains a linear part (first-order kernel) and a quadratic part (second-order kernel). A difference equation model was employed to represent the system for both control subjects and patients with Parkinson's disease. The results show significant differences in the first-order kernel (impulse response) and the second-order kernel (mesh diagram) between the groups. Using the first- and second-order kernels, it is possible to assess autonomic function qualitatively and quantitatively in both groups.
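
    The model structure described above corresponds to a second-order Volterra expansion, whose standard form is as follows (with x the leg-movement stimulus and y the HRV response; a sketch of the usual notation, not the authors' exact equations):

      % Second-order Volterra expansion of the stimulus-response model
      y(t) = k_0
           + \int_0^{\infty} k_1(\tau)\, x(t-\tau)\, d\tau
           + \int_0^{\infty}\!\!\int_0^{\infty} k_2(\tau_1,\tau_2)\,
               x(t-\tau_1)\, x(t-\tau_2)\, d\tau_1\, d\tau_2 ,

    where k_1 is the first-order kernel (the impulse response) and k_2 is the second-order kernel visualized as a mesh diagram.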

  14. Heat stress effects on farrowing rate in sows: genetic parameter estimation using within-line and crossbred models.

    Science.gov (United States)

    Bloemhof, S; Kause, A; Knol, E F; Van Arendonk, J A M; Misztal, I

    2012-07-01

    The pork supply chain values steady and undisturbed piglet production. Fertilization and maintaining gestation in warm and hot climates is a challenge that can be potentially improved by selection. The objective of this study was to estimate 1) genetic variation for farrowing rate of sows in 2 dam lines and their reciprocal cross; 2) genetic variation for farrowing rate heat tolerance, which can be defined as the random regression slope of farrowing rate against increasing temperature at day of insemination, and the genetic correlation between farrowing rate and heat tolerance; 3) genetic correlation between farrowing rate in purebreds and crossbreds; and 4) genetic correlation between heat tolerance in purebreds and crossbreds. The estimates were based on 93,969 first insemination records per cycle from 24,456 sows inseminated between January 2003 and July 2008. These sows originated from a Dutch purebred Yorkshire dam line (D), an International purebred Large White dam line (ILW), and from their reciprocal crosses (RC) raised in Spain and Portugal. Within-line and crossbred models were used for variance component estimation. Heritability estimates for farrowing rate were 0.06, 0.07, and 0.02 using within-line models for D, ILW, and RC, respectively, and 0.07, 0.07, and 0.10 using the crossbred model, respectively. For farrowing rate, purebred-crossbred genetic correlations were 0.57 between D and RC and 0.50 between ILW and RC. When including heat tolerance in the within-line model, heritability estimates for farrowing rate were 0.05, 0.08, and 0.03 for D, ILW, and RC, respectively. Heritability for heat tolerance at 29.3°C was 0.04, 0.02, and 0.05 for D, ILW, and RC, respectively. Genetic correlations between farrowing rate and heat tolerance tended to be negative in crossbreds and ILW-line sows, implying selection for increased levels of production traits, such as growth and reproductive output, is likely to increase environmental sensitivity. This study shows

  15. Validation of Temperature Histories for Structural Steel Welds Using Estimated Heat-Affected-Zone Edges

    Science.gov (United States)

    2016-10-12

    ... welding, is simulation of the coupling of the heat source, which involves melting, fluid flow in the weld meltpool, and heat transfer from the ... generation of the solidification boundary, the surface from which heat is transferred into the HAZ, which is the region of most probable weld ...

  16. RKRD: Runtime Kernel Rootkit Detection

    Science.gov (United States)

    Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.

    In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.

  17. Nonlocal viscosity kernel of mixtures

    Science.gov (United States)

    Smith, Ben; Hansen, J. S.; Todd, B. D.

    2012-02-01

    In this Brief Report we investigate the multiscale hydrodynamical response of a liquid as a function of mixture composition. This is done via a series of molecular dynamics simulations in which the wave-vector-dependent viscosity kernel is computed for three mixtures, each with 7-15 different compositions. We observe that the viscosity kernel is dependent on composition for simple atomic mixtures for all the wave vectors studied here; however, for a molecular mixture the kernel is independent of composition for large wave vectors. The deviation from ideal mixing is also studied. Here it is shown that the Lorentz-Berthelot interaction rule follows ideal mixing surprisingly well for a large range of wave vectors, whereas for both the Kob-Andersen and molecular mixtures large deviations are found. Furthermore, for the molecular system the deviation is wave-vector dependent such that there exists a characteristic correlation length scale at which the ideal mixing goes from underestimating to overestimating the viscosity.

  18. A Bayesian analysis of sensible heat flux estimation: Quantifying uncertainty in meteorological forcing to improve model prediction

    KAUST Repository

    Ershadi, Ali

    2013-05-01

    The influence of uncertainty in land surface temperature, air temperature, and wind speed on the estimation of sensible heat flux is analyzed using a Bayesian inference technique applied to the Surface Energy Balance System (SEBS) model. The Bayesian approach allows for an explicit quantification of the uncertainties in input variables: a source of error generally ignored in surface heat flux estimation. An application using field measurements from the Soil Moisture Experiment 2002 is presented. The spatial variability of selected input meteorological variables in a multitower site is used to formulate the prior estimates for the sampling uncertainties, and the likelihood function is formulated assuming Gaussian errors in the SEBS model. Land surface temperature, air temperature, and wind speed were estimated by sampling their posterior distribution using a Markov chain Monte Carlo algorithm. Results verify that Bayesian-inferred air temperature and wind speed were generally consistent with those observed at the towers, suggesting that local observations of these variables were spatially representative. Uncertainties in the land surface temperature appear to have the strongest effect on the estimated sensible heat flux, with Bayesian-inferred values differing by up to ±5°C from the observed data. These differences suggest that the footprint of the in situ measured land surface temperature is not representative of the larger-scale variability. As such, these measurements should be used with caution in the calculation of surface heat fluxes and highlight the importance of capturing the spatial variability in the land surface temperature: particularly, for remote sensing retrieval algorithms that use this variable for flux estimation.
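
    The core of such an approach can be sketched with a random-walk Metropolis sampler over one uncertain forcing variable under a Gaussian likelihood. Here a toy linear flux model H = a(Ts - Ta) replaces SEBS, and the prior, observation, and step size are invented purely for illustration.

      # Minimal random-walk Metropolis sketch for one uncertain forcing variable.
      import numpy as np

      rng = np.random.default_rng(4)
      a, Ta = 20.0, 295.0                  # toy coefficient and air temperature [K]
      H_obs, sigma_H = 120.0, 10.0         # "observed" flux and its error [W m-2]

      def log_post(Ts):
          prior = -0.5 * ((Ts - 301.0) / 2.0) ** 2              # prior: 301 +/- 2 K
          like = -0.5 * ((a * (Ts - Ta) - H_obs) / sigma_H) ** 2
          return prior + like

      Ts, chain = 301.0, []
      for _ in range(5000):
          prop = Ts + 0.5 * rng.standard_normal()               # random-walk proposal
          if np.log(rng.random()) < log_post(prop) - log_post(Ts):
              Ts = prop                                         # accept
          chain.append(Ts)
      post = np.array(chain[1000:])        # discard burn-in
      print(post.mean(), post.std())       # posterior land surface temperature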

  19. Adaptive neuro-fuzzy based inferential sensor model for estimating the average air temperature in space heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Jassar, S.; Zhao, L. [Department of Electrical and Computer Engineering, Ryerson University, 350 Victoria Street, Toronto, ON (Canada); Liao, Z. [Department of Architectural Science, Ryerson University (Canada)

    2009-08-15

    The heating systems are conventionally controlled by open-loop control systems because of the absence of practical methods for estimating average air temperature in the built environment. An inferential sensor model, based on adaptive neuro-fuzzy inference system modeling, for estimating the average air temperature in multi-zone space heating systems is developed. This modeling technique has the advantage of expert knowledge of fuzzy inference systems (FISs) and learning capability of artificial neural networks (ANNs). A hybrid learning algorithm, which combines the least-square method and the back-propagation algorithm, is used to identify the parameters of the network. This paper describes an adaptive network based inferential sensor that can be used to design closed-loop control for space heating systems. The research aims to improve the overall performance of heating systems, in terms of energy efficiency and thermal comfort. The average air temperature results estimated by using the developed model are strongly in agreement with the experimental results. (author)

  20. Estimating heating times of wood boards, square timbers, and logs in saturated steam by multiple regression

    Science.gov (United States)

    William T. Simpson

    2006-01-01

    Heat sterilization is used to kill insects and fungi in wood being traded internationally. Determining the time required to reach the kill temperature is difficult considering the many variables that can affect it, such as heating temperature, target center temperature, initial wood temperature, wood configuration dimensions, specific gravity, and moisture content. In...

  1. ON REASONABLE ESTIMATE OF ENERGY PERFORMANCE OF THE RESIDENTIAL BUILDINGS SUSTENANCE WITH CENTRALIZED HEAT-SUPPLY SYSTEM

    Directory of Open Access Journals (Sweden)

    S. N. Osipov

    2016-01-01

    In the period from 2006 to 2013, by virtue of heat-supply scheme optimization and modernization of the heating systems using costly (200-300 US$ per 1 m) though highly effective pre-coated pipes, the savings reached 2.7 million tons of fuel equivalent. Overall heat-energy losses in the municipal services of Belarus amounted to 17 % in March 2014, whilst in 2001 they were at the level of 26 % and in 1990 more than 30 %. Given the multi-stage and multifactorial nature (electricity, heat, and water supply) of residential-sector energy saving, a reasonable estimate of the energy efficiency of residential building sustenance should be expressed in tons of fuel equivalent per unit of time.

  2. A novel kernel regularized nonhomogeneous grey model and its applications

    Science.gov (United States)

    Ma, Xin; Hu, Yi-sheng; Liu, Zhi-bin

    2017-07-01

    The nonhomogeneous grey model (NGM) is a novel tool for time series forecasting that has attracted considerable research interest. However, the existing nonhomogeneous grey models may sometimes be inefficient at predicting complex nonlinear time series, owing to the linearity of the differential or difference equations on which these models are built. In order to enhance the accuracy and applicability of the NGM model, the kernel method from statistical learning theory has been utilized to build a novel kernel regularized nonhomogeneous grey model, abbreviated as the KRNGM model. The KRNGM model is represented by a differential equation which contains a nonlinear function of t. By constructing the regularized problem and using a kernel function which satisfies Mercer's condition, the parameter estimation of the KRNGM model reduces to solving a set of linear equations, and the nonlinear function in the KRNGM model can be expressed as a linear combination of the Lagrangian multipliers and the selected kernel function; the KRNGM model can then be solved numerically. Two case studies of petroleum production forecasting are carried out to illustrate the effectiveness of the KRNGM model in comparison with the existing nonhomogeneous models. The results show that the KRNGM model significantly outperforms the existing NGM, ONGM, and NDGM models.

  3. Image texture analysis of crushed wheat kernels

    Science.gov (United States)

    Zayas, Inna Y.; Martin, C. R.; Steele, James L.; Dempster, Richard E.

    1992-03-01

    The development of new approaches for wheat hardness assessment may impact the grain industry in marketing, milling, and breeding. This study used image texture features for wheat hardness evaluation. Application of digital imaging to grain for grading purposes is principally based on morphometric (shape and size) characteristics of the kernels. A composite sample of 320 kernels from 17 wheat varieties was collected after testing and crushing with a single-kernel hardness characterization meter. Six wheat classes were represented: HRW, HRS, SRW, SWW, Durum, and Club. In this study, parameters which characterize the texture, or spatial distribution of gray levels, of an image were determined and used to classify images of crushed wheat kernels. The texture parameters of crushed wheat kernel images differed depending on class, hardness, and variety of the wheat. Image texture analysis of crushed wheat kernels showed promise for use in class, hardness, milling quality, and variety discrimination.
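
    Texture features of this kind are commonly derived from gray-level co-occurrence matrices (GLCMs); a minimal sketch using scikit-image follows (the functions are named graycomatrix/graycoprops in recent releases, greycomatrix/greycoprops in older ones), with a random image standing in for an actual crushed-kernel image.

      # GLCM texture features of the kind used to characterise kernel images.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      img = np.random.default_rng(5).integers(0, 256, (64, 64), dtype=np.uint8)
      glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                          levels=256, symmetric=True, normed=True)
      for prop in ("contrast", "homogeneity", "energy", "correlation"):
          print(prop, graycoprops(glcm, prop).mean())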

  4. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  5. Kernel support for the Wisconsin Wind Tunnel

    OpenAIRE

    Reinhardt, Steven K.; Falsafi, Babak; Wood, David A.

    1993-01-01

    This paper describes a kernel interface that provides an untrusted user-level process (an executive) with protected access to memory management functions, including the ability to create, manipulate, and execute within subservient contexts (address spaces). Page motion callbacks not only give the executive limited control over physical memory management, but also shift certain responsibilities out of the kernel, greatly reducing kernel state and complexity. The executive interface was motivat...

  6. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří

    2016-02-12

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  7. The use of simple physiological and environmental measures to estimate the latent heat transfer in crossbred Holstein cows

    Science.gov (United States)

    Santos, Severino Guilherme Caetano Gonçalves dos; Saraiva, Edilson Paes; Pimenta Filho, Edgard Cavalcanti; Gonzaga Neto, Severino; Fonsêca, Vinicus França Carvalho; Pinheiro, Antônio da Costa; Almeida, Maria Elivania Vieira; de Amorim, Mikael Leal Cabral Menezes

    2017-02-01

    The aim of the present study was to estimate the heat transfer through cutaneous and respiratory evaporation of dairy cows raised in tropical ambient conditions using simple environmental and physiological measures. Twenty-six lactating crossbred cows (7/8 Holstein-Gir) were used, 8 predominantly white and 18 predominantly black. The environmental variables air temperature, relative humidity, black globe temperature, and wind speed were measured. Respiratory rate and coat surface temperature were measured at 0700, 0900, 1100, 1300, and 1500 h. The environmental and physiological data were used to estimate heat loss by respiratory (ER) and cutaneous evaporation (EC). Results showed that there was variation (P ...) among cows kept confined in tropical ambient conditions.

  8. An inverse problem in simultaneous estimating the Biot numbers of heat and moisture transfer for a porous material

    Energy Technology Data Exchange (ETDEWEB)

    Cheng-Hung Huang; Chun-Ying Yeh [National Cheng Kung University, Tainan, Taiwan (China). Department of Naval Architecture and Marine Engineering

    2002-11-01

    A conjugate gradient method based inverse algorithm is applied in the present study to simultaneously determine the unknown time-dependent Biot numbers of heat and moisture transfer for a porous material, based on interior measurements of temperature and moisture. It is assumed that no prior information is available on the functional form of the unknown Biot numbers; thus, the problem is classified as function estimation in inverse calculation. The accuracy of this inverse heat and moisture transfer problem is examined by using simulated exact and inexact temperature and moisture measurements in the numerical experiments. Results show that the estimation of the time-dependent Biot numbers can be obtained with arbitrary initial guesses on a Pentium IV 1.4 GHz personal computer. (author)

  9. System identification via sparse multiple kernel-based regularization using sequential convex optimization techniques

    DEFF Research Database (Denmark)

    Chen, Tianshi; Andersen, Martin Skovgaard; Ljung, Lennart

    2014-01-01

    suitable for impulse response estimation, and equip the kernel-based regularization method with three features. First, multiple kernels can better capture complicated dynamics than single kernels. Second, the estimation of their weights by maximizing the marginal likelihood favors sparse optimal weights, which enables this method to tackle various structure detection problems, e.g., sparse dynamic network identification and the segmentation of linear systems. Third, the marginal likelihood maximization problem is a difference-of-convex programming problem. It is thus possible to find a locally optimal solution efficiently by using a majorization minimization algorithm and an interior point method where the cost of a single interior-point iteration grows linearly in the number of fixed kernels. Monte Carlo simulations show that the locally optimal solutions lead to good performance for randomly...
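
    The flavor of the estimator can be sketched as follows: the impulse-response prior covariance is a weighted sum of fixed kernels, and the estimate is the usual regularized (posterior-mean) solution. The kernel shapes, fixed weights, and data below are illustrative assumptions; the paper selects the weights by marginal-likelihood maximization rather than fixing them.

      # Multi-kernel regularized impulse response estimation (fixed weights).
      import numpy as np

      n = 30                                        # impulse response length
      k = np.arange(n)
      # two fixed TC-type kernels with different decay rates
      K1 = 0.9 ** np.maximum(k[:, None], k[None, :])
      K2 = 0.5 ** np.maximum(k[:, None], k[None, :])
      K = 0.7 * K1 + 0.3 * K2                       # fixed multi-kernel mix

      rng = np.random.default_rng(6)
      g_true = 0.8 ** k * np.sin(0.5 * k)           # true impulse response
      N = 200
      u = rng.standard_normal(N)
      Phi = np.array([[u[t - i] if t - i >= 0 else 0.0 for i in range(n)]
                      for t in range(N)])           # regression matrix of input lags
      y = Phi @ g_true + 0.05 * rng.standard_normal(N)

      s2 = 0.05 ** 2                                # noise variance
      # posterior-mean estimate: K Phi^T (Phi K Phi^T + s2 I)^{-1} y
      g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + s2 * np.eye(N), y)
      print(np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))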

  10. Ground Heat Flux within the PMIP3/CMIP5 Last Millennium Simulations and Estimates from Geothermal Data

    Science.gov (United States)

    García-García, Almudena; José Cuesta-Valero, Francisco; Beltrami, Hugo; Mondéjar, Carlos; Finnis, Joel

    2017-04-01

    The proper simulation of the energy partitioning at the surface, both as storage within the ground and as energy fluxes from the surface, is crucial for the accurate representation of land-surface processes and related climate feedback mechanisms (e.g., permafrost thaw and soil carbon stability). We analyze the changes in ground heat flux over the last millennium as simulated by the PMIP3/CMIP5 General Circulation Models (GCMs). The following three methods were used to estimate ground heat flux: 1) the surface energy balance, that is, the difference between net radiation and the latent and sensible heat fluxes; 2) calculations based on Surface Air Temperature (SAT), Surface Temperature (ST), and Ground Surface Temperature (GST) at 0.5 m and at 1 m; and 3) inferences from temperature at two soil depths (GST at 0.5 m and GST at 1 m). Results show large regional variability among models and methods. Global estimates of ground heat flux from the surface energy balance differ significantly from values obtained from geothermal data over the second half of the last century. Such disagreement may be indicative of a change in the partitioning of the energy within historical simulations of the PMIP3/CMIP5 GCMs. The lack of observational data and the challenges of measuring soil fluxes highlight the value of the geothermal database as a potentially valuable source of information for evaluating long-term model performance.
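
    In their simplest forms, the first and third methods correspond to the standard relations below (a sketch with assumed notation: Rn net radiation, H and λE sensible and latent heat fluxes, k_s soil thermal conductivity):

      % Surface-energy-balance and two-depth gradient estimates of ground heat flux
      G = R_n - H - \lambda E ,
      \qquad
      G \approx -k_s \,\frac{T(z_2) - T(z_1)}{z_2 - z_1},
      \quad z_1 = 0.5\ \mathrm{m},\; z_2 = 1\ \mathrm{m}.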

  11. Estimation of peak heat flux onto the targets for CFETR with extended divertor leg

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Chuanjia; Chen, Bin [School of Nuclear Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026 (China); Xing, Zhe [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui 230031 (China); Wu, Haosheng [School of Nuclear Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026 (China); Mao, Shifeng, E-mail: sfmao@ustc.edu.cn [School of Nuclear Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026 (China); Luo, Zhengping; Peng, Xuebing [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui 230031 (China); Ye, Minyou [School of Nuclear Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui 230026 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui 230031 (China)

    2016-11-01

    Highlights: • A hypothetical geometry is assumed to extend the outer divertor leg in CFETR. • A density-scan SOLPS simulation is performed to study the peak heat flux onto the targets. • The attached–detached regime transition in the outer divertor occurs at a lower puffing rate. • An unexpected delay of the attached–detached regime transition occurs in the inner divertor. - Abstract: The China Fusion Engineering Test Reactor (CFETR) is now in its conceptual design phase. CFETR is proposed as a complement to ITER for demonstrating fusion energy. The divertor is a crucial component that faces the plasma and handles huge heat power for CFETR and future fusion reactors. To explore an effective way to exhaust heat, various methods to reduce the heat flux to the divertor target should be considered for CFETR. In this work, the effect of an extended outer divertor leg on the peak heat flux is studied. The magnetic configuration of the long-leg divertor is obtained by EFIT and the Tokamak Simulation Code (TSC), while a hypothetical geometry is assumed to extend the outer divertor leg as far as possible inside the vacuum vessel. A SOLPS simulation is performed to study the peak heat flux of the long-leg divertor for CFETR. D{sub 2} gas puffing is used, and increasing the puffing rate increases the plasma density. Peak heat fluxes onto both the inner and outer targets below 10 MW/m{sup 2} are achieved. A comparison of the peak heat flux between the long-leg and conventional divertors shows that the attached–detached regime transition of the outer divertor occurs at a lower gas puffing rate for the long-leg divertor. For the inner divertor, even though the configuration is almost the same, the situation is the opposite.

  12. Hilbertian kernels and spline functions

    CERN Document Server

    Atteia, M

    1992-01-01

    In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline function theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from minimization of a quadratic functional but more generally for splines considered as piecewise-defined functions. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  13. Heat-related deaths in hot cities: estimates of human tolerance to high temperature thresholds.

    Science.gov (United States)

    Harlan, Sharon L; Chowell, Gerardo; Yang, Shuo; Petitti, Diana B; Morales Butler, Emmanuel J; Ruddell, Benjamin L; Ruddell, Darren M

    2014-03-20

    In this study we characterized the relationship between temperature and mortality in central Arizona desert cities that have an extremely hot climate. Relationships between daily maximum apparent temperature (ATmax) and mortality for eight condition-specific causes and all-cause deaths were modeled for all residents and separately for males and females ages ... heat. For this condition-specific cause of death, the heat thresholds in all gender and age groups (ATmax = 90-97 °F; 32.2-36.1 °C) were below local median seasonal temperatures in the study period (ATmax = 99.5 °F; 37.5 °C). The heat threshold was defined as the ATmax at which the mortality ratio begins an exponential upward trend. Thresholds were identified in younger and older females for cardiac disease/stroke mortality (ATmax = 106 and 108 °F; 41.1 and 42.2 °C) with a one-day lag. Thresholds were also identified for mortality from respiratory diseases in older people (ATmax = 109 °F; 42.8 °C) and for all-cause mortality in females (ATmax = 107 °F; 41.7 °C) and males ... Heat-related mortality in a region that has already made some adaptations to predictable periods of extremely high temperatures suggests that more extensive and targeted heat-adaptation plans for climate change are needed in cities worldwide.

  14. Heat Flux and Wall Temperature Estimates for the NASA Langley HIFiRE Direct Connect Rig

    Science.gov (United States)

    Cuda, Vincent, Jr.; Hass, Neal E.

    2010-01-01

    An objective of the Hypersonic International Flight Research Experimentation (HIFiRE) Program Flight 2 is to provide validation data for high enthalpy scramjet prediction tools through a single flight test and accompanying ground tests of the HIFiRE Direct Connect Rig (HDCR) tested in the NASA LaRC Arc Heated Scramjet Test Facility (AHSTF). The HDCR is a full-scale, copper heat sink structure designed to simulate the isolator entrance conditions and isolator, pilot, and combustor section of the HIFiRE flight test experiment flowpath and is fully instrumented to assess combustion performance over a range of operating conditions simulating flight from Mach 5.5 to 8.5 and for various fueling schemes. As part of the instrumentation package, temperature and heat flux sensors were provided along the flowpath surface and also imbedded in the structure. The purpose of this paper is to demonstrate that the surface heat flux and wall temperature of the Zirconia coated copper wall can be obtained with a water-cooled heat flux gage and a sub-surface temperature measurement. An algorithm was developed which used these two measurements to reconstruct the surface conditions along the flowpath. Determinations of the surface conditions of the Zirconia coating were conducted for a variety of conditions.

  15. Estimating Temperature Rise Due to Flashlamp Heating Using Irreversible Temperature Indicators

    Science.gov (United States)

    Koshti, Ajay M.

    1999-01-01

    One of the nondestructive thermography inspection techniques uses photographic flashlamps. The flashlamps provide a short-duration (about 0.005 s) heat pulse. The short burst of energy results in a momentary rise in the surface temperature of the part. The temperature rise may be detrimental to the top layer of the part being exposed; therefore, it is necessary to ensure the nondestructive nature of the technique. The amount of temperature rise determines whether the flashlamp heating would be detrimental to the part. A direct method for the temperature measurement is to use an infrared pyrometer that has a much shorter response time than the flash duration. In this paper, an alternative technique is given using irreversible temperature indicators. This is an indirect technique: it measures the temperature rise on the irreversible temperature indicators and computes the incident heat flux. Once the heat flux is known, the temperature rise on the part can be computed. A wedge-shaped irreversible temperature indicator for measuring the heat flux is proposed. A procedure is given for using the wedge indicator.

  16. An inverse natural convection problem of estimating the strength of a heat source

    Energy Technology Data Exchange (ETDEWEB)

    Park, H.M.; Chung, O.Y. [Sogang University, Seoul (Korea, Republic of). Dept. of Chemical Engineering]

    1999-12-01

    The inverse problem of determining the time-varying strength of a heat source, which causes natural convection in a two-dimensional cavity, is considered. The Boussinesq equation is used to model the natural convection induced by the heat source. The inverse natural convection problem is posed as a minimization problem of the least-squares criterion, which is solved by a conjugate gradient method employing the adjoint equation to determine the descent direction. The present method solves the inverse natural convection problem accurately without any simplification of the governing Boussinesq equation. (author)
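
    The adjoint/conjugate-gradient structure generalizes well beyond the Boussinesq setting, and a schematic helps fix ideas. In the sketch below a placeholder linear operator stands in for the Boussinesq solver that maps the source-strength history to sensor readings; for a linear map the adjoint is simply the transpose. All names and dimensions are illustrative, not the authors' implementation.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        nt, nobs = 50, 80
        G = rng.normal(size=(nobs, nt)) / nt        # placeholder forward operator
        g_true = np.sin(np.linspace(0, np.pi, nt))  # "true" source-strength history
        y_obs = G @ g_true + 0.01 * rng.normal(size=nobs)

        def J(g):            # least-squares data misfit
            r = G @ g - y_obs
            return 0.5 * r @ r

        def gradJ(g):        # adjoint of a linear map is its transpose
            return G.T @ (G @ g - y_obs)

        res = minimize(J, np.zeros(nt), jac=gradJ, method="CG")
        print("misfit:", res.fun, " max error:", np.abs(res.x - g_true).max())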

  17. Generalized Derivative Based Kernelized Learning Vector Quantization

    NARCIS (Netherlands)

    Schleif, Frank-Michael; Villmann, Thomas; Hammer, Barbara; Schneider, Petra; Biehl, Michael; Fyfe, Colin; Tino, Peter; Charles, Darryl; Garcia-Osoro, Cesar; Yin, Hujun

    2010-01-01

    We derive a novel derivative based version of kernelized Generalized Learning Vector Quantization (KGLVQ) as an effective, easy to interpret, prototype based and kernelized classifier. It is called D-KGLVQ and we provide generalization error bounds, experimental results on real world data, showing ...

  18. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely applicable. Therefore, their use is recommended instead of the popular polynomial kernels in general settings, where no information...

  19. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
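
    The random Fourier features half of the paper admits a compact sketch: for a Gaussian kernel k(x, y) = exp(-gamma*||x - y||^2), randomized cosine features z(x) satisfy z(x)'z(y) ≈ k(x, y), so a linear ranker trained on z(x) approximates the nonlinear RankSVM without ever forming the kernel matrix. The sketch below only checks the kernel approximation; the truncated Newton optimization of the pairwise loss is not reproduced.

        import numpy as np

        def random_fourier_features(X, n_features=200, gamma=1.0, seed=0):
            """Approximate an RBF kernel exp(-gamma*||x-y||^2) with explicit
            features, following Rahimi & Recht (2007)."""
            rng = np.random.default_rng(seed)
            d = X.shape[1]
            W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
            b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
            return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

        X = np.random.default_rng(1).normal(size=(500, 10))
        Z = random_fourier_features(X, n_features=1000, gamma=0.5)
        K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
        print("max kernel approximation error:", np.abs(Z @ Z.T - K_exact).max())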

  20. Sentiment classification with interpolated information diffusion kernels

    NARCIS (Netherlands)

    Raaijmakers, S.

    2007-01-01

    Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of

  1. Estimating the CO2 mitigation potential of horizontal Ground Source Heat Pumps in the UK

    Science.gov (United States)

    Garcia-Gonzalez, R.; Verhoef, A.; Vidale, P. L.; Gan, G.; Chong, A.; Clark, D.

    2012-04-01

    By 2020, the UK will need to generate 15% of its energy from renewables to meet its contribution to the EU renewable energy target. Heating and cooling systems of buildings account for 30%-50% of global energy consumption; thus, alternative low-carbon technologies such as horizontal Ground Coupled Heat Pumps (GCHPs) can contribute to the reduction of anthropogenic CO2 emissions. Horizontal GCHPs currently represent a small fraction of the total energy generation in the UK. However, the fact that semi-detached and detached dwellings represent approximately 40% of the total housing stock in the UK could make the widespread implementation of this technology particularly attractive and could significantly increase the UK's renewable energy generation potential. Using a simulation model, we analysed the dynamic interactions between the environment, the horizontal GCHP heat exchanger and typical UK dwellings, as well as their combined effect on heat pump performance and CO2 mitigation potential. For this purpose, a land surface model (JULES, Joint UK Land Environment Simulator), which calculates coupled soil heat and water fluxes, was combined with a heat extraction model. The analyses took into account the spatio-temporal variability of soil properties (thermal and hydraulic) and meteorological variables, as well as different horizontal GCHP configurations and a variety of building loads and heat demands. Sensitivity tests were performed for four sites in the UK with different climate and soil properties. Our results show that an installation depth of 1.0 m would give higher heat extraction rates; however, it would be preferable to install the pipes slightly deeper to avoid the seasonal influence of variable meteorological conditions. A value of 1.5 m for the spacing between coils (S) for a slinky configuration type is recommended to avoid thermal disturbances between neighbouring coils. We also found that for larger values of the spacing between the coils

  2. Study on heat transfer rate of an osmotic heat pipe. 3rd Report. Estimation of heat transport limits; Shinto heat pipe no netsuyuso ni kansuru kenkyu. 3. Netsuyuso genkai no yosoku

    Energy Technology Data Exchange (ETDEWEB)

    Ipposhi, S.; Imura, H. [Kumamoto University, Kumamoto (Japan). Faculty of Engineering; Konya, K. [Oji Paper Co. Ltd., Tokyo (Japan); Yamamura, H. [Kyushu University, Fukuoka (Japan)

    1998-07-25

    This paper describes an experimental and theoretical study on the heat transport limits of an osmotic heat pipe operated under atmospheric pressure, using aqueous polyethylene glycol 600 solution (0.1 - 1.0 kmol/m³) as the working fluid and 18 tubular-type acetyl cellulose osmotic membranes. As a result, the correlation between the heat transport rate and the osmotic area was revealed, and the effects of the physical properties of the solution and the geometry (i.e. inside diameters of the flow lines, etc.) of the osmotic heat pipe on the heat transport rate were theoretically investigated. The heat transport rate of the present osmotic heat pipe is about 85% of that under the ideal condition in which the solution loop is assumed to be filled with solution at the average concentration. 4 refs., 9 figs., 1 tab.

  3. Estimation of lacustrine groundwater discharge using heat as a tracer and vertical hydraulic gradients – a comparison

    Directory of Open Access Journals (Sweden)

    S. Rudnick

    2015-03-01

    Full Text Available Lacustrine groundwater discharge (LGD) can play a major role in water and nutrient balances of lakes. Unfortunately, studies often neglect this input path due to methodological difficulties in its determination. In a previous study we described a method which allows the estimation of LGD and groundwater recharge using hydraulic head data and groundwater net balances based on meteorological data. The aim of this study is to compare these results with discharge rates estimated by inverse modelling of heat transport using temperature profiles measured in lake bed sediments. We were able to show a correlation between the fluxes obtained with the different methods, although the time scales of the methods differ substantially. As a consequence, we conclude that the use of hydraulic head data and meteorologically-based groundwater net balances to estimate LGD is limited to time scales similar to the calibration period.

  4. Estimation of surface Latent Heat Fluxes from IRS-P4/MSMR ...

    Indian Academy of Sciences (India)

    R. Narasimhan (Krishtel eMaging Solutions)

    latent heat flux from satellite data. Liu and Niiler (1984) and Liu (1986) applied the bulk formula to retrieve the surface LHF using near surface parameters derived from satellite data. This method, hereafter referred to as the LN method, uses geophysical parameters such as winds, sea surface temperature and surface ...

  5. Estimation of surface Latent Heat Fluxes from IRS-P4/MSMR ...

    Indian Academy of Sciences (India)

    The brightness temperatures of the Microwave sensor MSMR (Multichannel Scanning Microwave Radiometer) launched in May 1999 onboard Indian Oceansat-1 IRS-P4 are used to develop a direct retrieval method for latent heat flux by a multivariate regression technique. The MSMR measures the microwave radiances at 8 ...

  6. Electron-ion temperature ratio estimations in the summer polar mesosphere when subject to HF radio wave heating

    Science.gov (United States)

    Pinedo, H.; La Hoz, C.; Havnes, O.; Rietveld, M.

    2014-10-01

    We have inferred the electron temperature enhancements above mesospheric altitudes under Polar Mesospheric Summer Echoes (PMSE) conditions when the ionosphere is exposed to artificial HF radio wave heating. The proposed method uses the dependence of the radar cross section on the electron-to-ion temperature ratio to infer the heating factor from incoherent scatter radar (ISR) power measurements above 90 km. Modeled heating temperatures match our ISR estimates between 90 and 130 km with a Pearson correlation of 0.94. The PMSE strength measured by the MORRO MST radar is about 50% weaker during the heater-on period when the modeled electron-to-ion mesospheric temperature is approximately 10 times greater than the unperturbed value. No PMSE weakening is found when the mesospheric temperature enhancement is by a factor of three or less. The PMSE weakening and its absence are consistent with the modeled mesospheric electron temperatures. This consistency supports the proposed method for estimating mesospheric electron temperatures from independent MST and ISR radar measurements.

  7. Parametrically guided nonparametric density and hazard estimation with censored data

    OpenAIRE

    Talamakrouni, Majda; Van Keilegom, Ingrid; El Ghouch, Anouar

    2016-01-01

    The parametrically guided kernel smoother is a promising nonparametric estimation approach that aims to reduce the bias of the classical kernel density estimator without increasing its variance. Theoretically, the estimator is unbiased if a correct parametric guide is used, which can never be achieved by the classical kernel estimator even with an optimal bandwidth. The estimator is generalized to the censored data case and used for density and hazard function estimation. The asymptotic prope...
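
    In the uncensored case the guided estimator has a one-line form (in the spirit of Hjort and Glad): fit a parametric start f(x; theta-hat), then multiply it by a kernel-smoothed correction, f-hat(x) = f(x; theta-hat) * n^-1 * sum_i K_h(x - X_i) / f(X_i; theta-hat). A minimal sketch follows; the Gaussian guide and the bandwidth are assumptions, and the paper's censored-data extension is not reproduced.

        import numpy as np
        from scipy.stats import norm

        def guided_kde(x_grid, data, h):
            """Parametrically guided KDE with a Gaussian guide (uncensored case)."""
            mu, sigma = data.mean(), data.std(ddof=1)   # fit the parametric start
            guide = lambda x: norm.pdf(x, mu, sigma)
            # Multiplicative correction: kernel-smooth the ratio data/guide.
            u = (x_grid[:, None] - data[None, :]) / h
            correction = (norm.pdf(u) / guide(data)[None, :]).mean(axis=1) / h
            return guide(x_grid) * correction

        data = np.random.default_rng(2).gamma(shape=3.0, scale=1.0, size=300)
        grid = np.linspace(0, 12, 200)
        est = guided_kde(grid, data, h=0.5)   # placeholder bandwidth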

  8. CHARACTERIZATION OF BIO-OIL FROM PALM KERNEL SHELL PYROLYSIS

    OpenAIRE

    R. Ahmad; N. Hamidin; U.F.M. Ali; C.Z.A. Abidin

    2014-01-01

    Pyrolysis of palm kernel shell in a fixed-bed reactor was studied in this paper. The objectives were to investigate the effect of pyrolysis temperature and particle size on the product yields and to characterize the bio-oil product. In order to obtain the optimum pyrolysis parameters for bio-oil yield, temperatures of 350, 400, 450, 500 and 550 °C and particle sizes of 212–300 µm, 300–600 µm, 600 µm–1.18 mm and 1.18–2.36 mm under a heating rate of 50 °C min⁻¹ were investigated. The maximum bio-oil...

  9. The validity of the kinetic collection equation revisited – Part 2: Simulations for the hydrodynamic kernel

    Directory of Open Access Journals (Sweden)

    L. Alfonso

    2010-08-01

    Full Text Available The kinetic collection equation (KCE) has been widely used to describe the evolution of the average droplet spectrum due to the collection process that leads to the development of precipitation in warm clouds. This deterministic, integro-differential equation only has an analytic solution for very simple kernels. For more realistic kernels, the KCE needs to be integrated numerically. In this study, the validity time of the KCE for the hydrodynamic kernel is estimated by a direct comparison of Monte Carlo simulations with numerical solutions of the KCE. The simulation results show that when the largest droplet becomes separated from the smooth spectrum, the total mass calculated from the numerical solution of the KCE is not conserved and, thus, the KCE is no longer valid. This result confirms the fact that for kernels appropriate for precipitation development within warm clouds, the KCE can only be applied to the continuous portion of the mass distribution.
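
    The numerical side of such a comparison rests on integrating the discrete KCE (the Smoluchowski coagulation equation), dN_i/dt = (1/2) sum_{j<i} K(j, i-j) N_j N_{i-j} - N_i sum_j K(i, j) N_j, and monitoring total mass; once mass conservation fails, the KCE has broken down. In the toy sketch below a product kernel stands in for the hydrodynamic kernel (which would require droplet fall speeds and collection efficiencies), and all rates and bin counts are illustrative.

        import numpy as np

        nbins, dt, nsteps = 40, 0.01, 500
        N = np.zeros(nbins + 1); N[1] = 100.0        # all droplets in the smallest bin
        K = lambda i, j: 1e-4 * i * j                # toy product kernel (illustrative)

        for _ in range(nsteps):
            dN = np.zeros_like(N)
            for i in range(1, nbins + 1):
                gain = 0.5 * sum(K(j, i - j) * N[j] * N[i - j] for j in range(1, i))
                loss = N[i] * sum(K(i, j) * N[j] for j in range(1, nbins + 1))
                dN[i] = gain - loss
            N += dt * dN                             # explicit Euler step

        mass = sum(i * N[i] for i in range(1, nbins + 1))
        print("total mass (should stay ~100 while the KCE is valid):", mass)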

  10. Estimating the potential for industrial waste heat reutilization in urban district energy systems: method development and implementation in two Chinese provinces

    Science.gov (United States)

    Tong, Kangkang; Fang, Andrew; Yu, Huajun; Li, Yang; Shi, Lei; Wang, Yangjun; Wang, Shuxiao; Ramaswami, Anu

    2017-12-01

    Utilizing low-grade waste heat from industries to heat and cool homes and businesses through fourth generation district energy systems (DES) is a novel strategy to reduce energy use. This paper develops a generalizable methodology to estimate the energy saving potential for heating/cooling in 20 cities in two Chinese provinces, representing cold winter and hot summer regions respectively. We also conduct a life-cycle analysis of the new infrastructure required for energy exchange in DES. Results show that the heating and cooling energy use reduction from this waste heat exchange strategy varies widely based on the mix of industrial, residential and commercial activities, and climate conditions in cities. Low-grade heat is found to be the dominant component of waste heat released by industries, which can be reused for both district heating and cooling in fourth generation DES, yielding energy use reductions of 12%–91% (average of 58%) for heating and 24%–100% (average of 73%) for cooling in the different cities based on annual exchange potential. Incorporating seasonality and multiple energy exchange pathways resulted in energy savings of 0%–87%. The life-cycle impact of the added infrastructure was small: <3% (heating) and 1.9%–6.5% (cooling) of the carbon emissions from fuel use in current heating or cooling systems, indicating net carbon savings. This generalizable approach to delineating waste heat potential can help determine suitable cities for the widespread application of industrial waste heat re-utilization.

  11. Kernel Multitask Regression for Toxicogenetics.

    Science.gov (United States)

    Bernard, Elsa; Jiao, Yunlong; Scornet, Erwan; Stoven, Veronique; Walter, Thomas; Vert, Jean-Philippe

    2017-10-01

    The development of high-throughput in vitro assays to study quantitatively the toxicity of chemical compounds on genetically characterized human-derived cell lines paves the way to predictive toxicogenetics, where one would be able to predict the toxicity of any particular compound on any particular individual. In this paper we present a machine learning-based approach for that purpose, kernel multitask regression (KMR), which combines chemical characterizations of molecular compounds with genetic and transcriptomic characterizations of cell lines to predict the toxicity of a given compound on a given cell line. We demonstrate the relevance of the method on the recent DREAM8 Toxicogenetics challenge, where it ranked among the best state-of-the-art models, and discuss the importance of choosing good descriptors for cell lines and chemicals. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI

    Science.gov (United States)

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2017-06-01

    The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated by inclusions of medical implants within the body. The worst-case RF-heating scenario is achieved when the local tissue deposition in the at-risk region (generally in the vicinity of the implant electrodes) reaches its maximum value while the MRI exposure is compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimating the worst-case RF-induced heating in a multi-channel MRI environment, based on the maximization of the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may lead to an underestimation of the worst-case RF-heating scenario when there is a large number of RF transmit channels and there exist multiple SAR or power constraints to be satisfied. Finally, this work derives a rigorous SAR-based formulation to estimate a preferable worst-case scenario, which is solved by casting a semidefinite programming relaxation of the original non-convex problem, whose solution closely approximates the true worst-case including all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil for a patient with a deep-brain stimulator under a head imaging exposure are provided as illustrative examples.
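
    The common approach reviewed above, maximizing a ratio of two Hermitian forms w^H A w / w^H B w over the complex excitation vector w, reduces to a generalized eigenvalue problem, as the following sketch shows. Random positive-definite matrices stand in for the local-SAR and constraint matrices, and only a single constraint is handled; the paper's point is precisely that the multi-constraint case needs the SDP relaxation instead.

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(3)
        n = 8                                        # number of RF transmit channels

        def random_psd(n):
            M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
            return M @ M.conj().T + n * np.eye(n)

        A = random_psd(n)   # stand-in: local SAR matrix near the implant electrode
        B = random_psd(n)   # stand-in: a single global SAR/power constraint matrix

        # max_w (w^H A w)/(w^H B w) = largest generalized eigenvalue of (A, B)
        vals, vecs = eigh(A, B)
        w_worst = vecs[:, -1]
        print("worst-case ratio under this single constraint:", vals[-1])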

  13. Estimation of effective thermal conductivity enhancement using foam in heat exchangers based on a new analytical model

    Directory of Open Access Journals (Sweden)

    Maryam Haghighi

    2010-03-01

    Full Text Available The thermal performance of open-cell metal foam has been investigated at low Reynolds number by comparing the heat transfer coefficient and thermal conductivity for flow through a packed channel of high-porosity metal foam to that of an open channel. In the case of Al-air at porosity 0.971, the ratio of heat transfer coefficients is estimated to be 18.5 when the thermal conductivity ratio of foam matrix to fluid is 130. This demonstrates that the use of foam in the structure of conventional air coolers increases the effective thermal conductivity, heat transfer coefficient and thermal performance considerably. To overcome the drawbacks of previous models, a new model to describe the effective thermal conductivity of foam was developed. The model estimates effective thermal conductivity based on a non-isotropic tetrakaidecahedron unit cell and is not confined to isotropic cases as in previous models. Effective thermal conductivity is a function of foam geometrical characteristics, including ligament length (L), the length of the sides of the horizontal square faces (b), the inclination angle that defines the orientation of the hexagonal faces with respect to the rise direction (θ), porosity, the size and shape of the metal lump at ligament intersections, and the heat transfer direction. Changing the dimensionless foam ligament radius or height (d) from 0.1655 to 0.2126 for reticulated vitreous carbon foam–air (RVC-air) at θ=π/4 and a dimensionless spherical node diameter (e) equal to 0.339 raises the effective thermal conductivity by 31%. Moreover, increasing θ from π/4 to 0.4π for RVC-air at d=0.1655 and e=0.339 enhances the effective thermal conductivity by 33%.

  14. Semi-empirical method for estimating the performance of direct gain passive solar heated buildings

    Energy Technology Data Exchange (ETDEWEB)

    Wray, W.O.; Balcomb, J.D.; McFarland, R.D.

    1979-01-01

    The SUNSPOT code for performance analysis of direct gain passive solar heated buildings is used to calculate the annual solar fraction for two representative designs in ten American cities. The two representative designs involve a single thermal storage mass configuration which is evaluated with and without night insulation. In both cases the solar aperture is double glazed. The results of the detailed thermal network calculations are then correlated using the monthly solar load ratio method, which has already been successfully applied to the analysis of both active solar heated buildings and passive thermal storage wall systems. The method is based on a correlation between the monthly solar heating fraction and the monthly solar load ratio, defined as the ratio of the monthly solar energy transmitted through the glazing aperture to the building's monthly thermal load. The procedure for using the monthly method for any location is discussed in detail. In addition, a table of annual performance results for 84 cities is presented, enabling the designer to bypass the monthly method for these locations.
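
    Operationally the monthly method is a short loop: form the solar load ratio for each month and map it to a monthly solar heating fraction through an empirical correlation. In the sketch below the correlation SHF(SLR) is a crude placeholder, not the published fit, and the monthly energy numbers are hypothetical.

        def annual_solar_fraction(transmitted_solar, thermal_load):
            """Monthly solar load ratio method (placeholder correlation).

            transmitted_solar : list of 12 monthly transmitted solar energies, kWh
            thermal_load      : list of 12 monthly building thermal loads, kWh
            """
            saved = 0.0
            for S, L in zip(transmitted_solar, thermal_load):
                if L <= 0.0:
                    continue
                slr = S / L                       # monthly solar load ratio
                shf = min(1.0, 0.5 * slr)         # placeholder correlation SHF(SLR)
                saved += shf * L                  # solar-supplied part of the load
            return saved / sum(thermal_load)

        # Hypothetical 12-month example (kWh):
        solar = [300, 350, 400, 420, 430, 430, 440, 430, 400, 370, 320, 290]
        load  = [900, 800, 600, 400, 200, 100, 80, 100, 250, 500, 750, 880]
        print(f"annual solar heating fraction: {annual_solar_fraction(solar, load):.2f}")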

  15. Particle and heat flux estimates in Proto-MPEX in Helicon Mode with IR imaging

    Science.gov (United States)

    Showers, M. A.; Biewer, T. M.; Caughman, J. B. O.; Donovan, D. C.; Goulding, R. H.; Rapp, J.

    2016-10-01

    The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) at Oak Ridge National Laboratory (ORNL) is a linear plasma device developing the plasma source concept for the Material Plasma Exposure eXperiment (MPEX), which will address plasma material interaction (PMI) science for future fusion reactors. To better understand how and where energy is being lost from the Proto-MPEX plasma during "helicon mode" operations, particle and heat fluxes are quantified at multiple locations along the machine length. Relevant diagnostics include infrared (IR) cameras, four double Langmuir probes (LPs), and in-vessel thermocouples (TCs). The IR cameras provide temperature measurements of Proto-MPEX's plasma-facing dump and target plates, located on either end of the machine. The change in surface temperature is measured over the duration of the plasma shot to determine the heat flux hitting the plates. The IR cameras additionally provide 2-D thermal load distribution images of these plates, highlighting Proto-MPEX plasma behaviors, such as hot spots. The LPs and TCs provide additional plasma measurements required to determine particle and heat fluxes. Quantifying axial variations in fluxes will help identify machine operating parameters that will improve Proto-MPEX's performance, increasing its PMI research capabilities. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
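
    One standard way to convert the measured surface-temperature rise into a heat flux, assuming the plate can be treated as a semi-infinite solid under constant flux for the duration of the shot, is to invert the response dT(t) = (2q/k)*sqrt(alpha*t/pi). The copper property values below are textbook numbers used purely for illustration.

        import math

        def heat_flux_semi_infinite(dT, t, k, rho, c):
            """Invert the constant-flux semi-infinite-solid surface response
            dT = 2*q*sqrt(alpha*t/pi)/k to estimate the absorbed heat flux q."""
            alpha = k / (rho * c)
            return dT * k / (2.0 * math.sqrt(alpha * t / math.pi))

        # Illustrative copper target plate, 0.5 s plasma shot, 80 K rise on IR.
        q = heat_flux_semi_infinite(dT=80.0, t=0.5, k=390.0, rho=8960.0, c=385.0)
        print(f"estimated heat flux: {q/1e6:.1f} MW/m^2")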

  16. Effects of kernel weight and source-limitation on wheat grain yield ...

    African Journals Online (AJOL)

    The research was conducted under field conditions on two different dates, in less and more heated environments (two different sowing times). Also, source levels were manipulated through 50% spikelet removal at anthesis to evaluate cultivar source/sink limitations to kernel growth. The results depicted that grain yield, ...

  17. Determination of Bio-energy Potential of Palm Kernel Shell by ...

    African Journals Online (AJOL)

    Palm Kernel Shell (PKS) is an economically and environmentally sustainable raw material for the renewable energy industry. To this end, its physicochemical properties were determined for its most viable application in renewable energy options such as bioenergy and biomass utilization. Its higher heating values determined ...

  18. Shallow thermal structure constrained by seafloor temperature and heat flow estimated from BSRs in the Nankai subduction zone

    Science.gov (United States)

    Ohde, A.; Otsuka, H.; Kioka, A.; Ashi, J.

    2015-12-01

    The Nankai Trough is a plate convergent boundary where earthquakes with a magnitude of 8 take place repeatedly. The thermal structure in subduction zones affects pore pressure and diagenesis such as consolidation, dewatering and cementation, and constrains the physical properties of a fault-slip plane. In the Nankai subduction zone, the existence of methane hydrate is confirmed by acoustic reflectors called Bottom Simulating Reflectors (BSRs), which parallel the seafloor on seismic reflection images with high-amplitude and reverse-polarity waveforms. As the depth of a BSR is theoretically constrained by subseafloor profiles of temperature and pressure, BSR depths effectively provide subseafloor geothermal information over a wide area without heat flow probe penetration or in-situ borehole temperature measurement, which is fragmentary. In this study, we aim at calculating a precise two-dimensional shallow thermal structure. First, we investigate the detailed distribution of the BSRs in the Nankai area ranging from offshore Tokai to Hyuga using two-dimensional multi-channel seismic reflection data. The BSR depths are then used to estimate heat flow values. Second, we use the simple two-dimensional thermal modeling of Blackwell et al. [1980], which takes into account topographical effects of seafloor roughness. We also employ additional boundary conditions constrained by seafloor temperature and the heat flow estimated from BSR depths. In order to confirm the reliability of the modeled thermal structure, we additionally estimate the base of the gas hydrate stability zone, which proves to be nearly equal to the observed BSR depths. We find in the modeled thermal structure that convex portions of the seafloor are subject to cooling by cold bottom water, while depressions are less subject to this cooling, consistent with the observational BSRs and theoretical calculation. The thermal structure obtained here provides essential data for seismic simulations in subduction zones and for laboratory experiments as

  19. Risk Classification with an Adaptive Naive Bayes Kernel Machine Model.

    Science.gov (United States)

    Minnier, Jessica; Yuan, Ming; Liu, Jun S; Cai, Tianxi

    2015-04-22

    Genetic studies of complex traits have uncovered only a small number of risk markers explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set specific risk models that relate each gene-set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene-sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene set selection are derived, and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models.

  20. Body segment differences in surface area, skin temperature and 3D displacement and the estimation of heat balance during locomotion in hominins.

    Science.gov (United States)

    Cross, Alan; Collard, Mark; Nelson, Andrew

    2008-06-18

    The conventional method of estimating heat balance during locomotion in humans and other hominins treats the body as an undifferentiated mass. This is problematic because the segments of the body differ with respect to several variables that can affect thermoregulation. Here, we report a study that investigated the impact on heat balance during locomotion of inter-segment differences in three of these variables: surface area, skin temperature and rate of movement. The approach adopted in the study was to generate heat balance estimates with the conventional method and then compare them with heat balance estimates generated with a method that takes into account inter-segment differences in surface area, skin temperature and rate of movement. We reasoned that, if the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement affect heat balance during locomotion is correct, the estimates yielded by the two methods should be statistically significantly different. Anthropometric data were collected on seven adult male volunteers. The volunteers then walked on a treadmill at 1.2 m/s while 3D motion capture cameras recorded their movements. Next, the conventional and segmented methods were used to estimate the volunteers' heat balance while walking in four ambient temperatures. Lastly, the estimates produced with the two methods were compared with the paired t-test. The estimates of heat balance during locomotion yielded by the two methods are significantly different. Those yielded by the segmented method are significantly lower than those produced by the conventional method. Accordingly, the study supports the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement impact heat balance during locomotion. This has important implications not only for current understanding of heat balance during locomotion in hominins but also for how future research on this topic should be approached.

  1. On the accuracy of the simple ocean data assimilation analysis for estimating heat Budgets of the Near-Surface Arabian Sea and Bay of Bengal

    Digital Repository Service at National Institute of Oceanography (India)

    Shenoi, S.S.C.; Shankar, D.; Shetye, S.R.

    The accuracy of data from the Simple Ocean Data Assimilation (SODA) model for estimating the heat budget of the upper ocean is tested in the Arabian Sea and the Bay of Bengal. SODA is able to reproduce the changes in heat content when...

  2. Kernel adaptive filtering a comprehensive introduction

    CERN Document Server

    Liu, Weifeng; Haykin, Simon

    2010-01-01

    Online learning from a signal processing perspective There is increased interest in kernel learning algorithms in neural networks and a growing need for nonlinear adaptive algorithms in advanced signal processing, communications, and controls. Kernel Adaptive Filtering is the first book to present a comprehensive, unifying introduction to online learning algorithms in reproducing kernel Hilbert spaces. Based on research being conducted in the Computational Neuro-Engineering Laboratory at the University of Florida and in the Cognitive Systems Laboratory at McMaster University, O

  3. Heat Transfer Mechanism of a Vertical Wall Inside a Two-Phase Closed Thermosiphon Evaporator and Its Estimation

    Science.gov (United States)

    O-Uchi, Masaki; Hirose, Koichi; Saito, Futami

    The inside heat transfer coefficient, overall heat transfer coefficient, and heat flow rate at the heating section of the thermosiphon were determined for each heating method. In order to observe the heat transfer mechanism in the evaporator, a thermosiphon unit made of glass was assembled and tested separately. The results of the experiments with these two units are summarized as follows. (1) Nucleate boiling due to the internal heat transfer mechanism improves the heat transfer characteristics of the thermosiphon unit. Under the specific heating conditions with dropwise condensation, two types of heat transfer mechanism occur in the evaporator accompanying nucleate boiling, i.e., latent heat transfer and sensible heat transfer. (2) In the case of latent heat transfer, the inside heat transfer coefficient has an upper limit, which can be used as a criterion to determine the type of internal heat transfer mechanism.

  4. The Kernel Energy Method: Construction of 3 & 4 tuple Kernels from a List of Double Kernel Interactions.

    Science.gov (United States)

    Huang, Lulu; Massa, Lou

    2010-12-01

    The Kernel Energy Method (KEM) provides a way to calculate the ab-initio energy of very large biological molecules. The results are accurate, and the computational time is reduced. However, by use of a list of double kernel interactions, a significant additional reduction of computational effort may be achieved while still retaining ab-initio accuracy. A numerical comparison of the indices that name the known double interactions in question allows one to list higher order interactions having the property of topological continuity within the full molecule of interest. When that list of interactions is unpacked as a kernel expansion, which weights the relative importance of each kernel in an expression for the total molecular energy, high accuracy and a further significant reduction in computational effort result. A KEM molecular energy calculation based upon the HF/STO3G chemical model is applied to the protein insulin, as an illustration.

  5. Bit Error-Rate Minimizing Detector for Amplify-and-Forward Relaying Systems Using Generalized Gaussian Kernel

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-01-01

    In this letter, a new detector is proposed for amplify-and-forward (AF) relaying systems when communicating with the assistance of relays. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed. This new kernel provides more flexibility and encompasses Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance as compared to the minimum mean square error (MMSE) receiver when communicating over Rayleigh fading channels.
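
    The density-estimation step can be sketched directly: a kernel density estimate built on the generalized Gaussian kernel K(u) proportional to exp(-|u/a|^beta), which is Gaussian-shaped for beta = 2 and approaches a uniform (boxcar) kernel as beta grows. The window width and shape parameter below are placeholders rather than the optimal values derived in the letter.

        import numpy as np
        from scipy.special import gamma

        def gg_kernel(u, beta=2.0, a=1.0):
            """Generalized Gaussian kernel; beta=2 is Gaussian-shaped,
            large beta approaches a uniform (boxcar) kernel."""
            norm = beta / (2.0 * a * gamma(1.0 / beta))
            return norm * np.exp(-np.abs(u / a) ** beta)

        def kde(x_grid, samples, h, beta):
            u = (x_grid[:, None] - samples[None, :]) / h
            return gg_kernel(u, beta=beta).mean(axis=1) / h

        noise = np.random.default_rng(4).normal(size=2000)   # stand-in for receiver noise samples
        grid = np.linspace(-4, 4, 200)
        pdf_hat = kde(grid, noise, h=0.3, beta=1.5)          # placeholder h and beta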

  6. Experimental estimation of the heat energy dissipated in a volume surrounding the tip of a fatigue crack

    Directory of Open Access Journals (Sweden)

    G. Meneghetti

    2016-01-01

    Full Text Available Fatigue crack initiation and propagation involve plastic strains that require some work to be done on the material. Most of this irreversible energy is dissipated as heat, and consequently the material temperature increases. Since the heat is an indicator of the intense plastic strains occurring at the tip of a propagating fatigue crack, when combined with Neuber's structural volume concept it might be used as an experimentally measurable parameter to assess the fatigue damage accumulation rate of cracked components. On the basis of a theoretical model published previously, in this work the heat energy dissipated in a volume surrounding the crack tip is estimated experimentally from the radial temperature profiles measured by means of an infrared camera. The definition of the structural volume in a fatigue sense is beyond the scope of the present paper. The experimental crack propagation tests were carried out on hot-rolled, 6-mm-thick AISI 304L stainless steel specimens subject to completely reversed axial fatigue loading.

  7. Uncertainty estimation in one-dimensional heat transport model for heterogeneous porous medium.

    Science.gov (United States)

    Chang, Ching-Min; Yeh, Hund-Der

    2014-01-01

    In many practical applications, the rates of ground water recharge and discharge are determined based on the analytical solution developed by Bredehoeft and Papadopulos (1965) to the one-dimensional steady-state heat transport equation. Groundwater flow processes are affected by the heterogeneity of subsurface systems, the details of which cannot be anticipated precisely. A great deal of uncertainty (variability) is therefore associated with the application of Bredehoeft and Papadopulos' solution (1965) to field-scale heat transport problems. However, the quantification of the uncertainty involved in such applications has so far not been addressed, which is the objective of this work. In addition, the influence of the statistical properties of the log hydraulic conductivity field on the variability of the temperature field in a heterogeneous aquifer is also investigated. The results of the analysis demonstrate that the variability (or uncertainty) in the temperature field increases with the correlation scale of the log hydraulic conductivity covariance function, and that the variability of the temperature field also depends positively on position. © 2013, National Ground Water Association.
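
    For context, the Bredehoeft and Papadopulos (1965) solution referred to above gives the steady temperature profile between depths 0 and L as (T(z) - T0)/(T_L - T0) = (exp(beta*z/L) - 1)/(exp(beta) - 1), where the Peclet number beta is proportional to the vertical water flux; fitting observed profiles to this curve yields the recharge or discharge rate. A brute-force fit is sketched below with illustrative numbers.

        import numpy as np

        def bp_profile(z, L, T0, TL, beta):
            """Bredehoeft & Papadopulos (1965) steady 1-D advective-conductive
            temperature profile; beta is the thermal Peclet number."""
            return T0 + (TL - T0) * np.expm1(beta * z / L) / np.expm1(beta)

        # Fit beta to observed temperatures by brute-force least squares
        # (all depths, temperatures and boundary values are illustrative).
        z_obs = np.array([5.0, 10.0, 20.0, 30.0, 40.0])       # depths, m
        T_obs = np.array([10.4, 10.9, 12.1, 13.6, 15.0])      # observed temps, degC
        L, T0, TL = 50.0, 10.0, 16.0                          # profile boundaries

        betas = np.linspace(-5, 5, 2001)
        betas = betas[betas != 0.0]                           # avoid 0/0 at beta=0
        sse = [np.sum((bp_profile(z_obs, L, T0, TL, b) - T_obs) ** 2) for b in betas]
        beta_hat = betas[int(np.argmin(sse))]
        print("fitted Peclet number:", beta_hat)              # sign gives flow direction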

  8. Measurement of temperature inside die and estimation of interfacial heat transfer coefficient in squeeze casting

    Directory of Open Access Journals (Sweden)

    Fei-fan Wang

    2017-11-01

    Full Text Available As an advanced near-net-shape technology, squeeze casting is an excellent method for producing high-integrity castings. Numerical simulation is a very effective method to optimize the squeeze casting process, and the interfacial heat transfer coefficient (IHTC) is an important boundary condition in numerical simulation. Therefore, the study of the IHTC is of great significance. In the present study, experiments were conducted in which a “plate shape” aluminum alloy casting was cast in an H13 steel die. In order to obtain accurate temperature readings inside the die, a special temperature sensor unit (TSU) was designed. Six 1 mm wide and 1 mm deep grooves were machined in the sensor unit for the placement of the thermocouples, whose tips were welded to the end wall. Each groove was machined to terminate at a particular distance (1, 3, and 6 mm) from the front end of the sensor unit. Based on the temperature measurements inside the die, the IHTC at the metal-die interface was determined by applying an inverse approach. The acquired data were processed by a low-pass filtering method based on the Fast Fourier Transform (FFT). The features of the IHTC at the metal-die interface are discussed.
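
    The FFT-based smoothing step amounts to zeroing spectral components above a cutoff before inverting the transform; a minimal sketch follows, with the sampling rate and cutoff frequency as placeholders rather than the values used in the study.

        import numpy as np

        def fft_lowpass(signal, fs, cutoff_hz):
            """Zero all spectral components above cutoff_hz and invert (rfft/irfft)."""
            spectrum = np.fft.rfft(signal)
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            spectrum[freqs > cutoff_hz] = 0.0
            return np.fft.irfft(spectrum, n=len(signal))

        # Hypothetical noisy thermocouple trace sampled at 100 Hz.
        fs = 100.0
        t = np.arange(0, 10, 1.0 / fs)
        raw = 500.0 * np.exp(-t / 5.0) + 2.0 * np.random.default_rng(5).normal(size=t.size)
        smooth = fft_lowpass(raw, fs, cutoff_hz=2.0)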

  9. Estimates of heat flux to material surfaces in Proto-MPEX with IR imaging

    Science.gov (United States)

    Showers, M.; Biewer, T. M.; Bigelow, T. S.; Caughman, J. B. O.; Donovan, D.; Goulding, R. H.; Gray, T. K.; Rapp, J.; Youchison, D. L.; Nygren, R. E.

    2015-11-01

    The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) at Oak Ridge National Laboratory (ORNL) is a linear plasma device with the primary purpose of developing the plasma source concept for the Material Plasma Exposure eXperiment (MPEX), which will address the plasma material interactions (PMI) science for future fusion reactors. New diagnostics for Proto-MPEX include an infrared (IR) camera, in-vessel thermocouples and ex-vessel fluoroptic probes. The IR camera and other diagnostics provide surface temperature measurements of Proto-MPEX's dump and target plates, located on either end of the machine, which are being exposed to plasma. The change in surface temperature is measured over the duration of the plasma shot to determine the heat flux hitting the plates. The IR camera additionally provides 2-D thermal load distribution images of these plates, highlighting Proto-MPEX plasma behaviors, such as hot spots. The plasma diameter on the dump plate is on the order of 15 cm. The combination of measured heat flux and the thermal load distribution gives information on the efficiency of Proto-MPEX as a plasma generating device. Machine operating parameters that will improve Proto-MPEX's performance may be identified, increasing its PMI research capabilities.

  10. Distance Based Multiple Kernel ELM: A Fast Multiple Kernel Learning Approach

    Directory of Open Access Journals (Sweden)

    Chengzhang Zhu

    2015-01-01

    Full Text Available We propose a distance based multiple kernel extreme learning machine (DBMK-ELM), which provides a two-stage multiple kernel learning approach with high efficiency. Specifically, DBMK-ELM first projects multiple kernels into a new space, in which new instances are reconstructed based on the distances between different sample labels. Subsequently, an l2-norm regularized least-squares model, in which the normal vector corresponds to the kernel weights of a new kernel, is trained based on these new instances. After that, the new kernel is utilized to train and test an extreme learning machine (ELM). Extensive experimental results demonstrate the superior performance of the proposed DBMK-ELM in terms of accuracy and computational cost.

  11. Bayesian fuzzy logic-based estimation of electron cyclotron heating (ECH) power deposition in MHD control systems

    Energy Technology Data Exchange (ETDEWEB)

    Davoudi, Mehdi, E-mail: mehdi.davoudi@polimi.it [Department of Electrical and Computer Engineering, Buein Zahra Technical University, Buein Zahra, Qazvin (Iran, Islamic Republic of); Davoudi, Mohsen, E-mail: davoudi@eng.ikiu.ac.ir [Department of Electrical Engineering, Imam Khomeini International University, Qazvin, 34148-96818 (Iran, Islamic Republic of)

    2017-06-15

    Highlights: • A pair of algorithms to diagnose whether Electron Cyclotron Heating (ECH) power is deposited properly on the expected deposition minor radius is proposed. • The algorithms are based on Bayesian theory and fuzzy logic. • The algorithms are tested on off-line experimental data acquired from the Frascati Tokamak Upgrade (FTU), Frascati, Italy. • Uncertainties and evidences derived from the combination of online information, formed by the measured diagnostic data, and the prior information are also estimated. - Abstract: In thermonuclear fusion systems, new plasma control systems use measured on-line information acquired from different sensors and prior information obtained by predictive plasma models in order to stabilize magnetohydrodynamic (MHD) activity in a tokamak. Suppression of plasma instabilities is a key issue for improving the confinement time of controlled thermonuclear fusion in tokamaks. This paper proposes a pair of algorithms based on Bayesian theory and fuzzy logic to diagnose whether Electron Cyclotron Heating (ECH) power is deposited properly on the expected deposition minor radius (r_DEP). Both algorithms also estimate uncertainties and evidences derived from the combination of the online information formed by the measured diagnostic data and the prior information. The algorithms have been employed on a set of off-line ECE channel data acquired from experimental shot number 21364 at the Frascati Tokamak Upgrade (FTU), Frascati, Italy.

  12. NLO corrections to the Kernel of the BKP-equations

    Energy Technology Data Exchange (ETDEWEB)

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.

  13. A kernel plus method for quantifying wind turbine performance upgrades

    KAUST Repository

    Lee, Giwhyun

    2014-04-21

    Power curves are commonly estimated using the binning method recommended by the International Electrotechnical Commission, which primarily incorporates wind speed information. When such power curves are used to quantify a turbine's upgrade, the results may not be accurate because many other environmental factors in addition to wind speed, such as temperature, air pressure, turbulence intensity, wind shear and humidity, all potentially affect the turbine's power output. Wind industry practitioners are aware of the need to filter out effects from environmental conditions. Toward that objective, we developed a kernel plus method that allows incorporation of multivariate environmental factors in a power curve model, thereby controlling the effects from environmental factors while comparing power outputs. We demonstrate that the kernel plus method can serve as a useful tool for quantifying a turbine's upgrade because it is sensitive to small and moderate changes caused by certain turbine upgrades. Although we demonstrate the utility of the kernel plus method in this specific application, the resulting method is a general, multivariate model that can connect other physical factors, as long as their measurements are available, with a turbine's power output, which may allow us to explore new physical properties associated with wind turbine performance. © 2014 John Wiley & Sons, Ltd.

  14. Simple technique of estimating the performance of passive solar heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Balcomb, J.D.; McFarland, R.D.

    1978-01-01

    A method is presented for estimating the annual solar performance of a building using a passive thermal storage wall of the Trombe wall or water wall type, with or without night insulation. Tables of performance parameters are given for 84 cities. The method is accurate to ±3% as compared with hour-by-hour computer simulations.

  15. Physical parameter estimation in spatial heat transport models with an application to food storage

    NARCIS (Netherlands)

    van Mourik, S.; Vries, Dirk; Ploegaert, Johan P. M.; Zwart, Heiko J.; Keesman, Karel J.

    Parameter estimation plays an important role in physical modelling, but can be problematic due to the complexity of spatiotemporal models that are used for analysis, control and design in industry. In this paper we aim to circumvent these problems by using a methodology that approximates a model, or

  16. Wavelet and Fractal Analysis of Remotely Sensed Surface Temperature with Applications to Estimation of Surface Sensible Heat Flux Density

    Science.gov (United States)

    Schieldge, John

    2000-01-01

    Wavelet and fractal analyses have been used successfully to analyze one-dimensional data sets such as time series of financial, physical, and biological parameters. These techniques have been applied to two-dimensional problems in some instances, including the analysis of remote sensing imagery. Even so, they have not been widely used by the remote sensing community, and their overall capabilities as analytical tools for use on satellite and aircraft data sets are not well known. Wavelet and fractal analyses have the potential to provide fresh insight into the characterization of surface properties such as temperature and emissivity distributions, and surface processes such as the heat and water vapor exchange between the surface and the lower atmosphere. In particular, the variation of sensible heat flux density as a function of the change in scale of surface properties is difficult to estimate, but in general wavelets and fractals have proved useful in determining the way a parameter varies with changes in scale. We present the results of a limited study on the relationship between spatial variations in surface temperature distribution and sensible heat flux distribution as determined by separate wavelet and fractal analyses. We analyzed aircraft imagery obtained in the thermal infrared (IR) bands from the multispectral TIMS and hyperspectral MASTER airborne sensors. The thermal IR data allow us to estimate the surface kinetic temperature distribution for a number of sites in the Midwestern and Southwestern United States (viz., San Pedro River Basin, Arizona; El Reno, Oklahoma; Jornada, New Mexico). The ground spatial resolution of the aircraft data varied from 5 to 15 meters. All sites were instrumented with meteorological and hydrological equipment, including surface layer flux measuring stations such as Bowen ratio systems and sonic anemometers. The ground and aircraft data sets provided the inputs for the wavelet and fractal analyses.

  17. Decadal Arctic surface atmosphere/ocean heat budgets and mass transport estimates from several atmospheric and oceanic reanalyses

    Science.gov (United States)

    Chepurin, Gennady; Carton, James

    2017-04-01

    The Arctic is undergoing dramatic changes associated with the loss of seasonal and permanent ice pack. By exposing the surface ocean to the atmosphere, these changes dramatically increase surface exchange processes. In contrast, increases in freshwater and heat input decrease turbulent exchanges within the ocean. In this study we present results from an examination of changing ocean heat flux, storage, and transport during the 36-year period 1980-2015. To identify changes in the surface atmosphere we examine three atmospheric reanalyses: MERRA2, ERA-I, and JRA55. Significant differences in fluxes from these reanalyses arise due to the representation of clouds and water vapor. These differences provide an indication of the uncertainties in the historical record. Next we turn to the Simple Ocean Data Assimilation version 3 (SODA3) global ocean/sea ice reanalysis system, which allows us to infer the full ocean circulation from the limited historical record of ocean observations. SODA3 has 10 km horizontal resolution in the Arctic and assimilates the full suite of historical marine temperature and salinity observations. To account for the uncertainties in atmospheric forcing, we repeat our analysis with each of the three atmospheric reanalyses. In the first part of the talk we review the climatological seasonal surface fluxes resulting from our reanalysis system, modified for consistency with the ocean observations, and the limits of what we can learn from the historical record. Next we compare the seasonal hydrography, heat, and mass transports with direct estimates from moorings. Finally we examine the impact on the Arctic climate of the changes in sea ice cover and the variability and trends of ocean/sea ice heat storage and transport and their contributions to changes in the seasonal stratification of the Arctic Ocean.

  18. MODIS-Based Estimation of Terrestrial Latent Heat Flux over North America Using Three Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Xuanyu Wang

    2017-12-01

    Full Text Available Terrestrial latent heat flux (LE) is a key component of the global terrestrial water, energy, and carbon exchanges. Accurate estimation of LE from moderate resolution imaging spectroradiometer (MODIS) data remains a major challenge. In this study, we estimated the daily LE for different plant functional types (PFTs) across North America using three machine learning algorithms: artificial neural network (ANN), support vector machine (SVM), and multivariate adaptive regression spline (MARS), driven by MODIS and Modern Era Retrospective Analysis for Research and Applications (MERRA) meteorology data. These three predictive algorithms, which were trained and validated using observed LE over the period 2000–2007, all proved to be accurate. However, ANN outperformed the other two algorithms for the majority of the tested configurations for most PFTs and was the only method that reached 80% precision for LE estimation. We also applied the three machine learning algorithms to MODIS data and MERRA meteorology to map the average annual terrestrial LE of North America during 2002–2004 at a spatial resolution of 0.05°, which proved to be useful for estimating the long-term LE over North America.

  19. Hyperellipsoidal statistical classifications in a reproducing kernel Hilbert space.

    Science.gov (United States)

    Liang, Xun; Ni, Zhihao

    2011-06-01

    Standard support vector machines (SVMs) have kernels based on the Euclidean distance. This brief extends standard SVMs to SVMs with kernels based on the Mahalanobis distance, which reduce to the Euclidean-distance case when the covariance matrix in the reproducing kernel Hilbert space degenerates to the identity. The Mahalanobis distance leads to hyperellipsoidal kernels, while the Euclidean distance results in hyperspherical ones. In this brief, the Mahalanobis distance-based kernel in a reproducing kernel Hilbert space is developed systematically. Extensive experiments demonstrate that the hyperellipsoidal kernels slightly outperform the hyperspherical ones, with fewer SVs.
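
    A hyperellipsoidal (Mahalanobis-distance) RBF kernel can be written down directly as k(x, y) = exp(-(x - y)' S^-1 (x - y)/2), which collapses to the usual hyperspherical RBF when S is the identity. The sketch below estimates S from the data, a common practical shortcut rather than the paper's systematic RKHS construction.

        import numpy as np

        def mahalanobis_rbf(X, Y, cov):
            """Hyperellipsoidal RBF kernel k(x,y) = exp(-0.5 * d_M(x,y)^2)."""
            P = np.linalg.inv(cov)                     # precision matrix
            D = X[:, None, :] - Y[None, :, :]          # pairwise differences
            d2 = np.einsum("ijk,kl,ijl->ij", D, P, D)  # squared Mahalanobis distances
            return np.exp(-0.5 * d2)

        X = np.random.default_rng(6).normal(size=(100, 5))
        K_ellip = mahalanobis_rbf(X, X, cov=np.cov(X, rowvar=False))
        K_spher = mahalanobis_rbf(X, X, cov=np.eye(5))   # Euclidean special case
        print(K_ellip.shape, np.allclose(np.diag(K_ellip), 1.0))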

  20. NEAR SPICE KERNELS EROS/ORBIT

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes the complete set of SPICE data for one NEAR mission phase in the form of SPICE kernels, which can be accessed using SPICE software available...

  1. Sparse Bayesian modeling with adaptive kernel learning.

    Science.gov (United States)

    Tzikas, Dimitris G; Likas, Aristidis C; Galatsanos, Nikolaos P

    2009-06-01

    Sparse kernel methods are very efficient in solving regression and classification problems. The sparsity and performance of these methods depend on selecting an appropriate kernel function, which is typically achieved using a cross-validation procedure. In this paper, we propose an incremental method for supervised learning, which is similar to the relevance vector machine (RVM) but also learns the parameters of the kernels during model training. Specifically, we learn different parameter values for each kernel, resulting in a very flexible model. In order to avoid overfitting, we use a sparsity enforcing prior that controls the effective number of parameters of the model. We present experimental results on artificial data to demonstrate the advantages of the proposed method and we provide a comparison with the typical RVM on several commonly used regression and classification data sets.

  2. NEAR SPICE KERNELS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes the complete set of NEAR SPICE data files ("kernel files"), which can be accessed using SPICE software. The SPICE data contain geometric and...

  3. CASSINI SPICE KERNELS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes the complete set of Cassini SPICE data files ("kernel files"), which can be accessed using SPICE software. The SPICE data contains geometric...

  4. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    Full Text Available In this study, a parsimonious scheme for a wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into the kernel extreme learning machine (KELM). In the wavelet analysis, bases that are localized in time and frequency are used to represent various signals effectively. The wavelet kernel extreme learning machine (WELM) maximizes its capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm also incorporates significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eases the computational burden and improves numerical stability. The experimental results on a synthetic dataset and a gas furnace instance demonstrate that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.

  5. MESSENGER SPICE KERNELS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes the complete set of MESSENGER SPICE data files ("kernel files"), which can be accessed using SPICE software. The SPICE data contains...

  6. Ensemble Approach to Building Mercer Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive...

  7. EPOXI SPICE KERNELS V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes the complete set of EPOXI SPICE data files ("kernel files"), which can be accessed using SPICE software. The SPICE data contains geometric and...

  8. Multiple Kernel Learning with Data Augmentation

    Science.gov (United States)

    2016-11-22

Published in JMLR: Workshop and Conference Proceedings 63:49-64, 2016 (ACML 2016); first author Khanh Nguyen (nkhanh@deakin.edu.au). The indexed text consists of reference-list fragments, e.g., the Intelligence and Artificial Neural Networks Symposium (TAINN 96); Erling D. Andersen and Knud D. Andersen, the MOSEK interior point optimizer; and a reference by authors including Zien and Sören Sonnenburg, "Efficient and accurate lp-norm multiple kernel learning", Advances in Neural Information Processing Systems.

  9. Covariance Kernels from Bayesian Generative Models

    OpenAIRE

    Seeger, Matthias

    2002-01-01

We propose the framework of mutual information kernels for learning covariance kernels, as used in Support Vector Machines and Gaussian process classifiers, from unlabeled task data using Bayesian techniques. We describe an implementation of this framework which uses variational Bayesian mixtures of factor analyzers in order to attack classification problems in high-dimensional spaces where labeled data are sparse but unlabeled data are abundant.

  10. Iterative Reconstruction of Memory Kernels.

    Science.gov (United States)

    Jung, Gerhard; Hanke, Martin; Schmid, Friederike

    2017-06-13

    In recent years, it has become increasingly popular to construct coarse-grained models with non-Markovian dynamics to account for an incomplete separation of time scales. One challenge of a systematic coarse-graining procedure is the extraction of the dynamical properties, namely, the memory kernel, from equilibrium all-atom simulations. In this article, we propose an iterative method for memory reconstruction from dynamical correlation functions. Compared to previously proposed noniterative techniques, it ensures by construction that the target correlation functions of the original fine-grained systems are reproduced accurately by the coarse-grained system, regardless of time step and discretization effects. Furthermore, we also propose a new numerical integrator for generalized Langevin equations that is significantly more accurate than the more commonly used generalization of the velocity Verlet integrator. We demonstrate the performance of the above-described methods using the example of backflow-induced memory in the Brownian diffusion of a single colloid. For this system, we are able to reconstruct realistic coarse-grained dynamics with time steps about 200 times larger than those used in the original molecular dynamics simulations.
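
    For orientation, a sketch of the standard noniterative baseline that iterative schemes like this one improve on: recovering K(t) from a correlation function C(t) by discretizing the Volterra relation dC/dt = -(K * C)(t) with the trapezoid rule. The synthetic C(t) below is only a smoke test, not a physical VACF.

    ```python
    # Sketch: noniterative Volterra inversion of a memory kernel K(t) from a
    # correlation function C(t), stepping forward in time with the trapezoid rule.
    import numpy as np

    def memory_kernel_from_vacf(C, dt):
        dC = np.gradient(C, dt)                  # dC/dt = -(K * C)(t)
        n = len(C)
        K = np.zeros(n)
        K[0] = -np.gradient(dC, dt)[0] / C[0]    # from C''(0) = -K(0) C(0)
        for i in range(1, n):
            conv = dt * (K[0] * C[i] / 2 + np.dot(K[1:i], C[i-1:0:-1]))
            K[i] = (-dC[i] - conv) / (dt * C[0] / 2)
        return K

    # Smoke test on a synthetic, smoothly decaying correlation function.
    dt = 0.01
    t = np.arange(0, 5, dt)
    C = np.exp(-t) * (np.cos(2 * t) + 0.5 * np.sin(2 * t))
    print(memory_kernel_from_vacf(C, dt)[:5])
    ```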

  11. FUSED KERNEL-SPLINE SMOOTHING FOR REPEATEDLY MEASURED OUTCOMES IN A GENERALIZED PARTIALLY LINEAR MODEL WITH FUNCTIONAL SINGLE INDEX.

    Science.gov (United States)

    Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia

We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use local kernel smoothing to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of the components of the model. The asymptotic properties of the kernel and regression spline methods combined in a nested fashion have not been studied prior to this work, even in the independent data case.

  12. Determination of the Iodine Value of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO)

    OpenAIRE

    Sitompul, Monica Angelina

    2015-01-01

The iodine value of several samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO) was determined by titration. The analysis gave iodine values of 0.16 g I2/100 g for Hydrogenated Palm Kernel Oil (A), 0.20 g I2/100 g for Hydrogenated Palm Kernel Oil (B), and 0.24 g I2/100 g for Hydrogenated Palm Kernel Oil (C). For Refined Bleached Deodorized Palm Kernel Oil (A) the iodine value was 17.51 g I2/100 g, and for Refined Bleached Deodorized Palm Kernel ...

  13. Spatio-temporal soil heat flux estimates from satellite data; results for the AMMA experiment, Fakara supersite.

    Science.gov (United States)

    Verhoef, Anne; Ottlé, Catherine; Maignan, Fabienne; Murray, Ty; Saux-Picart, Stephane; Cappelaere, Bernard; Boulain, Nicolas; Demarty, J.; Zribi, Mehrez

    2010-05-01

The soil heat flux, G, is an important component of the energy balance, especially for sparsely vegetated (semi-)arid regions. In order to obtain large-scale estimates of this flux, for example for land surface model (e.g. GCM) verification, scientists have to rely on remote sensing data. Unfortunately, in these cases G is often estimated using highly empirical methods. Examples are relationships between the ratio of G and net radiation, Rn, and surface variables such as leaf area index (LAI); other approaches use surface temperature observations to obtain maximum G/Rn values. However, such approaches are not universal. In Murray and Verhoef (2007a, b) we proposed to use a standard physical equation, involving a harmonic analysis of surface temperatures, for the estimation of G, in combination with a simple but theoretically derived equation for soil thermal inertia (TI). This method does not require in situ instrumentation. Moreover, such an approach ensures a more universally applicable method than those derived from purely empirical studies. The method requires knowledge of soil texture, in combination with an estimate of near-surface soil moisture content, SM, to obtain the spatio-temporal variation in thermal inertia. To obtain the diurnal and seasonal shape of G we ideally need time series of soil surface temperature, Ts. However, when vegetation obscures the surface these are not available through remote sensing. Therefore, a direct relationship between the harmonic analysis of Ts (Hs) and the harmonic analysis (Hb) of the remotely observed brightness temperature, Tb, was used instead. This relationship was tested for 4 different UK crops in Murray and Verhoef (2007b). Knowledge of LAI, the canopy extinction coefficient and the IR sensor view angle is required to go from Hb to Hs. To account for phase lag differences between Hs and Hb a time delay of 1.5 hrs was used. Here, the method is used to calculate spatiotemporal soil heat fluxes
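
    A minimal sketch of the harmonic approach, assuming a placeholder thermal inertia value and using the analytical result that each temperature harmonic appears in G scaled by sqrt(n*omega) and phase-advanced by pi/4; the exact constants and corrections of Murray and Verhoef are not reproduced here.

    ```python
    # Sketch: soil heat flux from the harmonics of a diurnal surface-temperature
    # series. Each harmonic of Ts contributes to G scaled by sqrt(n*omega) and
    # phase-advanced by pi/4; the thermal inertia value is an assumed placeholder.
    import numpy as np

    def soil_heat_flux(Ts, dt, thermal_inertia, n_harmonics=3):
        N = len(Ts)
        omega = 2 * np.pi / (N * dt)              # fundamental (diurnal) frequency
        coeffs = np.fft.rfft(Ts) / N
        t = np.arange(N) * dt
        G = np.zeros(N)
        for n in range(1, n_harmonics + 1):
            A, phi = 2 * np.abs(coeffs[n]), np.angle(coeffs[n])
            G += thermal_inertia * A * np.sqrt(n * omega) * np.cos(n * omega * t + phi + np.pi / 4)
        return G

    dt = 1800.0                                    # 30-min sampling [s]
    t = np.arange(48) * dt                         # one full day
    Ts = 25 + 10 * np.sin(2 * np.pi * t / 86400.0) # synthetic diurnal Ts [deg C]
    G = soil_heat_flux(Ts, dt, thermal_inertia=1200.0)  # TI in J m^-2 K^-1 s^-0.5
    print(round(G.max(), 1))
    ```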

  14. Demonstration of a problem in estimating sensible heat loss from the respiratory tract by thermometry.

    Science.gov (United States)

    King, F G; Manson, H J; Snellen, J W; Chang, K S

    1984-07-01

We have investigated sensible respiratory heat loss, which is usually taken as the product of expired volume and the temperature difference between inspired and expired air (VE × ΔT). Air temperature was measured with a 0.122 mm copper-constantan thermocouple mounted in the mouthpiece of a T-piece breathing system, and expired volume with a pneumotachograph. The changing air temperature (ΔT) at the mouth and the expired air volume (VE) were recorded simultaneously while the subject voluntarily breathed at different tidal volumes and rates. Inspired temperatures were controlled at 12.05 degrees C, 21.80 degrees C and 25.74 degrees C at a low dewpoint temperature of 4-5 degrees C. Temperature-volume "loops" were constructed using an x-y plotter. The areas of each "loop" and its enclosing rectangle (VE × ΔT) were measured. The difference was divided by the weight of the rectangle to give the percentage overestimation of sensible heat loss, which ranged from 5.5 to 17.2 per cent. The error increased significantly with decreasing tidal volume and increasing respiratory rate.
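
    The loop-versus-rectangle comparison can be reproduced numerically. The sketch below uses the shoelace formula for the loop area and synthetic breath traces; the paper measured areas from plotter output, so this only illustrates the arithmetic.

    ```python
    # Sketch: percentage overestimation of sensible heat loss, comparing the
    # area enclosed by a temperature-volume "loop" (shoelace formula) with the
    # VE x dT rectangle. Breath traces below are synthetic placeholders.
    import numpy as np

    def overestimation_pct(V, T):
        loop = 0.5 * abs(np.dot(V, np.roll(T, -1)) - np.dot(T, np.roll(V, -1)))
        rect = (V.max() - V.min()) * (T.max() - T.min())
        return 100.0 * (rect - loop) / rect

    phase = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    V = 0.4 * (1 - np.cos(phase)) / 2              # tidal volume trace [L]
    T = 30 + 4 * np.sin(phase - 0.6)               # airway temperature [deg C]
    print(f"overestimation: {overestimation_pct(V, T):.1f} %")
    ```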

  15. Estimating spatially distributed turbulent heat fluxes from high-resolution thermal imagery acquired with a UAV system.

    Science.gov (United States)

    Brenner, Claire; Thiem, Christina Elisabeth; Wizemann, Hans-Dieter; Bernhardt, Matthias; Schulz, Karsten

    2017-05-19

    In this study, high-resolution thermal imagery acquired with a small unmanned aerial vehicle (UAV) is used to map evapotranspiration (ET) at a grassland site in Luxembourg. The land surface temperature (LST) information from the thermal imagery is the key input to a one-source and two-source energy balance model. While the one-source model treats the surface as a single uniform layer, the two-source model partitions the surface temperature and fluxes into soil and vegetation components. It thus explicitly accounts for the different contributions of both components to surface temperature as well as turbulent flux exchange with the atmosphere. Contrary to the two-source model, the one-source model requires an empirical adjustment parameter in order to account for the effect of the two components. Turbulent heat flux estimates of both modelling approaches are compared to eddy covariance (EC) measurements using the high-resolution input imagery UAVs provide. In this comparison, the effect of different methods for energy balance closure of the EC data on the agreement between modelled and measured fluxes is also analysed. Additionally, the sensitivity of the one-source model to the derivation of the empirical adjustment parameter is tested. Due to the very dry and hot conditions during the experiment, pronounced thermal patterns developed over the grassland site. These patterns result in spatially variable turbulent heat fluxes. The model comparison indicates that both models are able to derive ET estimates that compare well with EC measurements under these conditions. However, the two-source model, with a more complex treatment of the energy and surface temperature partitioning between the soil and vegetation, outperformed the simpler one-source model in estimating sensible and latent heat fluxes. This is consistent with findings from prior studies. For the one-source model, a time-variant expression of the adjustment parameter (to account for the difference between
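
    A minimal sketch of the one-source residual scheme described above, with the aerodynamic resistance (including any kB^-1-type adjustment) supplied as a pre-computed, assumed value rather than derived from stability corrections.

    ```python
    # Sketch: one-source energy balance. Radiometric surface temperature drives
    # sensible heat H through a bulk aerodynamic resistance; latent heat LE
    # closes the balance as a residual. r_ah is an assumed pre-computed value.
    RHO_AIR, CP_AIR = 1.2, 1005.0        # air density [kg m^-3], heat capacity [J kg^-1 K^-1]

    def one_source_fluxes(lst_k, t_air_k, r_ah, rn, g):
        """LST and air temperature in kelvin; Rn, G in W m^-2; r_ah in s m^-1."""
        h = RHO_AIR * CP_AIR * (lst_k - t_air_k) / r_ah   # sensible heat flux
        le = rn - g - h                                   # latent heat as residual
        return h, le

    h, le = one_source_fluxes(lst_k=305.2, t_air_k=301.0, r_ah=60.0, rn=520.0, g=75.0)
    print(f"H = {h:.0f} W m^-2, LE = {le:.0f} W m^-2")
    ```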

  16. Estimation of energetic efficiency of heat supply in front of the aircraft at supersonic accelerated flight. Part 1. Mathematical models

    Science.gov (United States)

    Latypov, A. F.

    2008-12-01

Fuel economy along the boost trajectory of an aerospace plane was estimated for the case of energy supply to the free stream. Initial and final flight velocities were specified. The model of a gliding flight above cold air in an infinite isobaric thermal wake was used. The fuel consumption rates were compared along the optimal trajectory. The calculations were carried out for a combined power plant consisting of a ramjet and a liquid-propellant engine. An exergy model was built in the first part of the paper to estimate the ramjet thrust and specific impulse. A quadratic dependence on aerodynamic lift was used to estimate the aerodynamic drag of the aircraft. The energy for flow heating was obtained at the expense of an equivalent reduction of the exergy of the combustion products. Dependencies were obtained for the increase of the range coefficient of cruise flight at different Mach numbers. The second part of the paper presents a mathematical model for the boost interval of the aircraft flight trajectory and computational results for the reduction of fuel consumption along the boost trajectory for a given value of the energy supplied in front of the aircraft.

  17. Kernel abortion in maize : I. Carbohydrate concentration patterns and Acid invertase activity of maize kernels induced to abort in vitro.

    Science.gov (United States)

    Hanft, J M; Jones, R J

    1986-06-01

    Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.

  18. Estimation of early fatigue damage in heat treated En-8 grade steel

    Science.gov (United States)

    Talukdar, P.; Sen, S. K.; Ghosh, A. K.

    1998-08-01

    Generally, the failure of major machinery parts is due to fatigue damage. Because of the structural inhomogeneity of metals, fatigue damage may sometimes occur significantly below the yield strength of the material due to microplastic deformation at low stress levels. Commercial En-8 grade steel (widely used for making secondary metalworking products) was used to estimate the fatigue damage response during cyclic loading nearer to the fatigue endurance limit. Estimation of fatigue damage was carried out with the aid of a nondestructive testing (NDT) method, that is, Elastosonic measurement of fatigue damping coefficient and slope of fatigue damping curves. Results indicate that fatigue damage increases in annealed En-8 steel with an increase in peak stress and with an increase in the number of cycles. However, for hardened and tempered En-8 steel, experimental results may not provide a true indication of fatigue damage during fatigue loading nearer to the endurance limit, most likely due to the more homogeneous structure. Generally, fatigue failure occurs in this grade of steel due to microcrack generation in the cementite of the pearlite phase of annealed steel.

  19. Irrigation scheduling of green areas based on soil moisture estimation by the active heated fiber optic distributed temperature sensing AHFO

    Science.gov (United States)

    Zubelzu, Sergio; Rodriguez-Sinobas, Leonor; Sobrino, Fernando; Sánchez, Raúl

    2017-04-01

Irrigation programming determines when and how much water to apply to fulfill the plant water requirements, depending on the plant's phenological stage and location and on the soil water content. Thus, the amount of water, the irrigation time and the irrigation frequency are variables that must be estimated. Irrigation programming has been based on approaches such as the determination of plant evapotranspiration and the maintenance of soil water status within a given interval of water content or soil matric potential. Most of these approaches rely on the measurements of soil water sensors (or tensiometers) located at specific points within the study area, which lack spatial information on the monitored variable. The information provided at such few points might not be adequate to characterize the soil water distribution in irrigation systems with poor water application uniformity, and thus could lead to wrong decisions in irrigation scheduling. This limitation can be overcome if active heated fiber optic distributed temperature sensing (AHFO) is used. This technique estimates the temperature variation along a fiber optic cable, which is then correlated with the soil water content. The method applies a known amount of heat to the soil and monitors the temperature evolution, which mainly depends on the soil moisture content. Thus, it allows estimations of soil water content every 12.5 cm along a fiber optic cable as long as 1500 m (with 2% accuracy), every second. This study presents the results obtained in a green area located at the ETSI Agronómica, Agroalimentaria y Biosistemas in Madrid. The area is irrigated by a sprinkler irrigation system which applies water with low uniformity. An installation of 147 m of fiber optic cable was deployed at 15 cm depth. The Distributed Temperature Sensing unit was a SILIXA ULTIMA SR (Silixa Ltd, UK) with spatial and temporal resolution of 0.29 m and 1 s, respectively. In this study, heat pulses of 7 W/m for 2

  20. Moisture Sorption Isotherms and Properties of Sorbed Water of Neem ( Azadirichta indica A. Juss) Kernels

    Science.gov (United States)

    Ngono Mbarga, M. C.; Bup Nde, D.; Mohagir, A.; Kapseu, C.; Elambo Nkeng, G.

    2017-01-01

The neem tree, which grows abundantly in India as well as in some regions of Asia and Africa, gives fruits whose kernels contain about 40-50% oil. This oil has high therapeutic and cosmetic value and has recently been projected as an important raw material for the production of biodiesel. The seed is harvested at high moisture contents, which leads to high post-harvest losses. In this paper, the sorption isotherms are determined by the static gravimetric method at 40, 50, and 60°C to establish a database useful in defining the drying and storage conditions of neem kernels. Five different equations are validated for modeling the sorption isotherms of neem kernels. The properties of sorbed water, such as the monolayer moisture content, surface area of adsorbent, number of adsorbed monolayers, and the percentage of bound water, are also determined. The critical moisture content necessary for the safe storage of dried neem kernels is shown to range from 5 to 10% dry basis, which can be obtained at a relative humidity of less than 65%. The isosteric heats of sorption at 5% moisture content are 7.40 and 22.5 kJ/kg for the adsorption and desorption processes, respectively. This work is the first, to the best of our knowledge, to give the important parameters necessary for the drying and storage of neem kernels, a potential raw material for the production of oil to be used in pharmaceutics, cosmetics, and biodiesel manufacturing.
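
    As an illustration of fitting one of the candidate isotherm equations, the sketch below fits the GAB model with scipy; the data points are invented placeholders, not the paper's measurements.

    ```python
    # Sketch: fitting the GAB sorption-isotherm model (one of the model families
    # commonly validated in such studies) to illustrative moisture data.
    import numpy as np
    from scipy.optimize import curve_fit

    def gab(aw, M0, C, K):
        """GAB model: equilibrium moisture content vs water activity aw."""
        return M0 * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

    aw = np.array([0.11, 0.23, 0.33, 0.44, 0.58, 0.69, 0.75])
    M = np.array([2.1, 3.0, 3.8, 4.7, 6.2, 8.0, 9.5])   # % dry basis (made up)

    p, _ = curve_fit(gab, aw, M, p0=[4.0, 10.0, 0.8], maxfev=10000)
    print(f"monolayer moisture M0 = {p[0]:.2f} % d.b., C = {p[1]:.2f}, K = {p[2]:.2f}")
    ```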

  1. Estimation of metabolic heat production and methane emission in Sahiwal and Karan Fries heifers under different feeding regimes

    Directory of Open Access Journals (Sweden)

    Sunil Kumar

    2016-05-01

Full Text Available Aim: This study was designed to estimate the metabolic heat production and methane emission in Sahiwal and Karan Fries (Holstein-Friesian × Tharparkar) heifers under two different feeding regimes, i.e., feeding regime-1 as per the National Research Council (NRC, 2001) and feeding regime-2 having 15% higher energy (supplementation of molasses) than NRC (2001). Materials and Methods: Six (n = 6) healthy heifers of Sahiwal and Karan Fries, 18-24 months of age, were selected from the Indian Council of Agricultural Research-National Dairy Research Institute, Karnal. An initial 15 days under feeding regime-1 and feeding regime-2 was maintained as an adaptation period; the actual experiment was conducted from the 16th day onward for the next 15 days. At the end of the feeding regimes (on days 15 and 16), expired air and its volume were collected in a Douglas bag on two consecutive days (morning [6:00 am] and evening [4:00 pm]). The fraction of methane and the expired air volume were measured by a methane analyzer and a wet test meter, respectively. Oxygen consumption and carbon dioxide production were measured by iWorx LabScribe2. Results: The heat production (kcal/day) differed significantly (p<0.05) between the feeding regimes. The energy loss as methane (%) of total heat production was significantly (p<0.05) higher in feeding regime-1. The body weight (kg), metabolic body weight (W0.75), and basal metabolic rate (kcal/kg0.75) were significantly (p<0.05) higher in feeding regime-2 in both breeds. Conclusions: This study indicates that a higher energy diet obtained by supplementing molasses may reduce energy loss as methane and enhance the growth of Sahiwal and Karan Fries heifers.

  2. Wheat kernel dimensions: how do they contribute to kernel weight at ...

    Indian Academy of Sciences (India)

    2011-12-02

Keywords: wheat; kernel dimensions; thousand-kernel weight; conditional QTL mapping; genetic relationship. Journal of Genetics. From a table caption: E2, E3 and E4 represent the environments of 2008-2009 in Taian, 2009-2010 in Taian, 2009-2010 in Zaozhuang and 2009-2010 in Jining, respectively.

  3. Estimation of the dust production rate from the tungsten armour after repetitive ELM-like heat loads

    Science.gov (United States)

    Pestchanyi, S.; Garkusha, I.; Makhlaj, V.; Landman, I.

    2011-12-01

Experimental simulations of the erosion rate of tungsten targets under ITER edge-localized mode (ELM)-like surface heat loads of 0.75 MJ m^-2, causing surface melting, and of 0.45 MJ m^-2, without melting, have been performed in the QSPA-Kh50 plasma accelerator. Analytical considerations allow us to conclude that for both energy deposition values the erosion mechanism is solid dust ejection during surface cracking under the action of thermal stress. A tungsten influx into the ITER containment of N_W ~ 5×10^18 W atoms per medium-size ELM of 0.75 MJ m^-2 and 0.25 ms duration has been estimated. The radiation cooling power of P_rad = 150-300 MW due to such an influx of tungsten is intolerable: it would cool the ITER core to 1 keV within a few seconds.

  4. Fatty acids composition as a means to estimate the high heating value (HHV) of vegetable oils and biodiesel fuels

    Energy Technology Data Exchange (ETDEWEB)

    Fassinou, Wanignon Ferdinand; Koua, Kamenan Blaise; Toure, Siaka [Laboratoire d' Energie Solaire, UFR-SSMT, Universite de Cocody (Cote d' Ivoire), 22BP582 Abidjan 22 (Ivory Coast); Sako, Aboubakar; Fofana, Alhassane [Laboratoire de Physique de l' Atmosphere et de Mecanique des Fluides, UFR-SSMT, Universite de Cocody (Cote d' Ivoire), 22BP582 Abidjan 22 (Ivory Coast)

    2010-12-15

High heating value (HHV) is an important property characterising the energy content of solid, liquid and gaseous fuels. This is particularly important for vegetable oils and biodiesel fuels, which are expected to replace fossil oils. Estimation of the HHV of vegetable oils and biodiesels from their fatty acid composition is the aim of this paper. The comparison between the HHVs predicted by the method and those obtained experimentally gives an average bias error of -0.84% and an average absolute error of 1.71%. These values show the utility, the validity and the applicability of the method to vegetable oils and their derivatives. (author)

  5. Estimating sensible heat exchange between screen-covered canopies and the atmosphere using the surface renewal technique

    Science.gov (United States)

    Mekhmandarov, Yonatan; Achiman, Ori; Pirkner, Moran; Tanny, Josef

    2014-05-01

Screenhouses and screen-covers are widely used in arid and semi-arid agriculture to protect crops from direct solar radiation and high wind speed, and to increase water use efficiency. However, accurate estimation of crop water use under screens is still a challenge. The most reliable method that directly measures evapotranspiration, Eddy Covariance (EC), is both expensive and complex in data collection and processing. This renders it unfeasible for day-to-day use by farmers. A simpler alternative is the Surface Renewal (SR) technique, which utilizes high-frequency temperature readings from low-cost fine-wire thermocouples to estimate the sensible heat flux. Assuming energy conservation and employing relatively cheap complementary measurements, the evapotranspiration can then be estimated. The SR technique uses a structure-function analysis that filters out noise and involves a time-lag parameter to provide the amplitude and time period of a ramp-like temperature signal. This behavior arises from the detachment of air parcels that have been heated or cooled near the surface and are sequentially renewed by air parcels from above. While the SR technique is relatively simple to employ, it requires calibration against direct measurements. The aim of this research is to investigate the applicability of the SR technique in two different types of commonly used screenhouses in Israel. Two field campaigns were carried out: in the first campaign we studied a banana plantation grown in a shading screenhouse located in the coastal plain of northern Israel. The second campaign was located in the Jordan Valley region of eastern Israel, where a pepper plantation cultivated in an insect-proof screenhouse, with a much denser screen, was examined. In the two campaigns, SR sensible heat flux estimates were calibrated against simultaneous eddy covariance measurements. To optimize the SR operation, in each campaign fine-wire (50-76 μm) exposed T-type thermocouples were placed at
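
    A minimal sketch of the structure-function (Van Atta-style) analysis that SR rests on, with the calibration factor alpha and the synthetic ramp signal as assumed placeholders.

    ```python
    # Sketch: Van Atta-style structure-function analysis for surface renewal.
    # The ramp amplitude a solves a^3 + p*a + q = 0, with p and q built from
    # 2nd/3rd/5th-order structure functions of high-frequency temperature.
    import numpy as np

    def sr_sensible_heat(T, hz, lag_s, z, alpha=0.5, rho=1.2, cp=1005.0):
        r = int(lag_s * hz)                        # lag in samples
        dT = T[r:] - T[:-r]
        S2, S3, S5 = (np.mean(dT**n) for n in (2, 3, 5))
        p, q = 10.0 * S2 - S5 / S3, 10.0 * S3
        roots = np.roots([1.0, 0.0, p, q])
        a = np.real(roots[np.isreal(roots)]).max() # real root = ramp amplitude
        tau = -(a**3) * lag_s / S3                 # total ramp period l + s
        return alpha * rho * cp * z * a / tau      # H in W m^-2

    hz, rng = 10.0, np.random.default_rng(2)
    t = np.arange(0, 1800) / hz                    # 3 min of 10 Hz data
    T = 25 + 0.5 * np.mod(t, 40) / 40 + 0.05 * rng.standard_normal(t.size)
    print(round(sr_sensible_heat(T, hz, lag_s=0.5, z=1.0), 1))
    ```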

  6. Evaluation of Sensible Heat Flux and Evapotranspiration Estimates Using a Surface Layer Scintillometer and a Large Weighing Lysimeter

    Directory of Open Access Journals (Sweden)

    Jerry E. Moorhead

    2017-10-01

Full Text Available Accurate estimates of actual crop evapotranspiration (ET) are important for optimal irrigation water management, especially in arid and semi-arid regions. Common ET sensing methods include the Bowen Ratio, Eddy Covariance (EC), and scintillometers. Large weighing lysimeters are considered the ultimate standard for measurement of ET; however, they are expensive to install and maintain. Although EC and scintillometers are less costly and relatively portable, EC has known energy balance closure discrepancies. Previous scintillometer studies used EC for ground-truthing, but no studies considered weighing lysimeters. In this study, a Surface Layer Scintillometer (SLS) was evaluated for accuracy in determining ET as well as sensible and latent heat fluxes, as compared to a large weighing lysimeter in Bushland, TX. The SLS was installed over irrigated grain sorghum (Sorghum bicolor (L.) Moench) for the period 29 July-17 August 2015 and over grain corn (Zea mays L.) for the period 23 June-2 October 2016. Results showed poor correlation for sensible heat flux, but much better correlation with ET, with r2 values of 0.83 and 0.87 for hourly and daily ET, respectively. The accuracy of the SLS was comparable to other ET sensing instruments, with an RMSE of 0.13 mm·h^-1 (31%) for hourly ET; however, summing hourly values to a daily time step reduced the ET error to 14% (0.75 mm·d^-1). This level of accuracy indicates that potential exists for the SLS to be used in some water management applications. As few studies have been conducted to evaluate the SLS for ET estimation, or in combination with lysimetric data, further evaluations would be beneficial to investigate the applicability of the SLS in water resources management.

  7. Estimating sap flux densities in date palm trees using the heat dissipation method and weighing lysimeters.

    Science.gov (United States)

    Sperling, Or; Shapira, Or; Cohen, Shabtai; Tripler, Effi; Schwartz, Amnon; Lazarovitch, Naftali

    2012-09-01

In a world of diminishing water reservoirs and rising demand for food, the development and practice of water stress indicators and sensors are progressing rapidly. The heat dissipation method, originally established by Granier, is herein applied and modified to enable sap flow measurements in date palm trees in the southern Arava desert of Israel. A long and tough sensor was constructed to withstand insertion into the date palm's hard exterior stem. This stem is wide and fibrous, surrounded by an even tougher external non-conducting layer of dead leaf bases. Furthermore, since the date palm is a monocot species, water flow does not necessarily occur through the outer part of the stem, as it does in most trees. It is therefore highly important to investigate the variation of the sap flux densities and determine the preferable location for sap flow sensing within the stem. Once the modified sensors were installed into fully grown date palm trees stationed on weighing lysimeters, the sap flow they measured was compared with the actual transpiration. Sap flow was found to be well correlated with transpiration, especially when using a recent calibration equation rather than the original Granier equation. Furthermore, including the axial variability of the sap flux densities was found to be highly important for accurate assessments of transpiration from sap flow measurements. The sensors indicated no transpiration at night, a sharp increase of transpiration from 06:00 to 09:00, maximum transpiration at 12:00, followed by a moderate reduction until 08:00, when transpiration ceased. These results were reinforced by the lysimeters' output. Reduced sap flux densities were detected at the stem's mantle when compared with its center. These results were reinforced by mechanistic measurements of the stem's specific hydraulic conductivity. Variance on the vertical axis was also observed, indicating accelerated flow towards the upper parts of the tree and raising a hypothesis concerning dehydrating
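
    For reference, the heat-dissipation arithmetic itself is short. The sketch below uses the classic Granier coefficients; the study argues that a recalibrated pair (a, b) should replace them for date palms.

    ```python
    # Sketch: heat-dissipation (Granier) calculation. The dimensionless index
    # k = (dTmax - dT)/dT is converted to sap flux density; (a, b) below are the
    # classic Granier coefficients, to be swapped for a species-specific pair.
    def sap_flux_density(dT, dT_max, a=118.99e-6, b=1.231):
        """Return sap flux density [m^3 m^-2 s^-1] from probe temperature differences."""
        k = (dT_max - dT) / dT
        return a * k**b

    # Midday reading: dT = 7.2 K against a nighttime (zero-flow) maximum of 10.0 K.
    print(f"{sap_flux_density(7.2, 10.0):.2e} m^3 m^-2 s^-1")
    ```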

  8. Normalizing kernels in the Billera-Holmes-Vogtmann treespace.

    Science.gov (United States)

    Weyenberg, Grady; Yoshida, Ruriko; Howe, Daniel

    2016-05-10

As costs of genome sequencing have dropped precipitously, development of efficient bioinformatic methods to analyze genome structure and evolution has become ever more urgent. For example, most published phylogenomic studies involve either massive concatenation of sequences, or informal comparisons of phylogenies inferred on a small subset of orthologous genes, neither of which provides a comprehensive overview of evolution or systematic identification of genes with unusual and interesting evolution (e.g. horizontal gene transfers, gene duplication and subsequent neofunctionalization). We are interested in identifying such "outlying" gene trees from the set of gene trees and estimating the distribution of trees over the "tree space". This paper describes an improvement to the KDETREES algorithm, an adaptation of classical kernel density estimation to the metric space of phylogenetic trees (the Billera-Holmes-Vogtmann treespace), whereby the kernel normalizing constants are estimated through the use of novel holonomic gradient methods. As in the original kdetrees paper, we have applied kdetrees to a set of Apicomplexa genes. The analysis identified several unreliable sequence alignments that had escaped previous detection, as well as a gene independently reported as a possible case of horizontal gene transfer. The updated version of the KDETREES software package is available from CRAN (the official R package system) as well as from the official development repository on GitHub (github.com/grady/kdetrees).

  9. Improved kernel correlation filter tracking with Gaussian scale space

    Science.gov (United States)

    Tan, Shukun; Liu, Yunpeng; Li, Yicui

    2016-10-01

Recently, the Kernel Correlation Filter (KCF) has attracted great attention in the visual tracking field, as it provides excellent tracking performance at high processing speed. However, how to handle scale variation is still an open problem. In this paper, we propose a method based on Gaussian scale space to address this issue. First, KCF is used to estimate the location of the target; the context region, which includes the target and its surrounding background, becomes the image to be matched. A Gaussian scale space of this image is then obtained by convolving it with Gaussian kernels. From this scale space, the target image is estimated at different scales. Combined with the scale parameter of the scale space, each scale image undergoes bilinear interpolation to change its size, simulating target imaging at different scales. Finally, the template is matched against the images at the different scales, using the Mean Absolute Difference (MAD) as the matching criterion. The optimal match between image and template yields the best zoom ratio s, from which the target size is estimated. In the experiments, comparisons with CSK, KCF, etc., demonstrate that the proposed method achieves a substantial improvement in accuracy and is an efficient algorithm.
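
    A sketch of the scale-search step under stated assumptions: the KCF translation stage has already produced a candidate patch, and only the bilinear resizing and MAD scoring are shown.

    ```python
    # Sketch: pick the zoom ratio s that minimizes the Mean Absolute Difference
    # between the template and a bilinearly resized candidate patch.
    import numpy as np
    from scipy.ndimage import zoom

    def best_scale(template, patch, scales=(0.90, 0.95, 1.0, 1.05, 1.10)):
        th, tw = template.shape
        best_s, best_mad = 1.0, np.inf
        for s in scales:
            resized = zoom(patch, s, order=1)            # bilinear interpolation
            h, w = resized.shape
            if h < th or w < tw:
                continue
            crop = resized[(h - th) // 2:(h - th) // 2 + th,
                           (w - tw) // 2:(w - tw) // 2 + tw]
            mad = np.mean(np.abs(crop - template))       # match criterion
            if mad < best_mad:
                best_s, best_mad = s, mad
        return best_s

    rng = np.random.default_rng(3)
    template = rng.random((32, 32))
    patch = zoom(template, 1.05, order=1)                # target imaged 5% larger
    print(best_scale(template, patch))
    ```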

  10. The Heat Resistance of Microbial Cells Represented by D Values Can be Estimated by the Transition Temperature and the Coefficient of Linear Expansion.

    Science.gov (United States)

    Nakanishi, Koichi; Kogure, Akinori; Deuchi, Keiji; Kuwana, Ritsuko; Takamatsu, Hiromu; Ito, Kiyoshi

    2015-01-01

    We previously developed a method for evaluating the heat resistance of microorganisms by measuring the transition temperature at which the coefficient of linear expansion of a cell changes. Here, we performed heat resistance measurements using a scanning probe microscope with a nano thermal analysis system. The microorganisms studied included six strains of the genus Bacillus or related genera, one strain each of the thermophilic obligate anaerobic bacterial genera Thermoanaerobacter and Moorella, two strains of heat-resistant mold, two strains of non-sporulating bacteria, and one strain of yeast. Both vegetative cells and spores were evaluated. The transition temperature at which the coefficient of linear expansion due to heating changed from a positive value to a negative value correlated strongly with the heat resistance of the microorganism as estimated from the D value. The microorganisms with greater heat resistance exhibited higher transition temperatures. There was also a strong negative correlation between the coefficient of linear expansion and heat resistance in bacteria and yeast, such that microorganisms with greater heat resistance showed lower coefficients of linear expansion. These findings suggest that our method could be useful for evaluating the heat resistance of microorganisms.

  11. Online Sequential Extreme Learning Machine With Kernels.

    Science.gov (United States)

    Scardapane, Simone; Comminiello, Danilo; Scarpiniti, Michele; Uncini, Aurelio

    2015-09-01

    The extreme learning machine (ELM) was recently proposed as a unifying framework for different families of learning algorithms. The classical ELM model consists of a linear combination of a fixed number of nonlinear expansions of the input vector. Learning in ELM is hence equivalent to finding the optimal weights that minimize the error on a dataset. The update works in batch mode, either with explicit feature mappings or with implicit mappings defined by kernels. Although an online version has been proposed for the former, no work has been done up to this point for the latter, and whether an efficient learning algorithm for online kernel-based ELM exists remains an open problem. By explicating some connections between nonlinear adaptive filtering and ELM theory, in this brief, we present an algorithm for this task. In particular, we propose a straightforward extension of the well-known kernel recursive least-squares, belonging to the kernel adaptive filtering (KAF) family, to the ELM framework. We call the resulting algorithm the kernel online sequential ELM (KOS-ELM). Moreover, we consider two different criteria used in the KAF field to obtain sparse filters and extend them to our context. We show that KOS-ELM, with their integration, can result in a highly efficient algorithm, both in terms of obtained generalization error and training time. Empirical evaluations demonstrate interesting results on some benchmarking datasets.
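
    A plain kernel recursive least-squares updater, the KAF building block the brief extends, might look as follows; no sparsification criterion is included, so the dictionary grows with every sample (a sketch, not the KOS-ELM algorithm itself).

    ```python
    # Sketch: online kernel RLS keeping (K + lam*I)^-1 updated one sample at a
    # time via a block-matrix inverse; the RBF kernel gives k(x, x) = 1.
    import numpy as np

    class SimpleKRLS:
        def __init__(self, gamma=1.0, lam=1e-2):
            self.gamma, self.lam = gamma, lam
            self.X, self.y, self.Kinv = None, None, None

        def _k(self, A, b):
            return np.exp(-self.gamma * np.sum((A - b)**2, axis=1))

        def update(self, x, y):
            x = np.atleast_2d(np.asarray(x, float))
            if self.X is None:
                self.X, self.y = x, np.array([y], float)
                self.Kinv = np.array([[1.0 / (1.0 + self.lam)]])
                return
            k = self._k(self.X, x[0])
            a = self.Kinv @ k
            g = 1.0 / (1.0 + self.lam - k @ a)      # Schur complement
            self.Kinv = np.block([[self.Kinv + g * np.outer(a, a), -g * a[:, None]],
                                  [-g * a[None, :], np.array([[g]])]])
            self.X, self.y = np.vstack([self.X, x]), np.append(self.y, y)

        def predict(self, x):
            return self._k(self.X, np.asarray(x, float)) @ (self.Kinv @ self.y)

    model, rng = SimpleKRLS(gamma=2.0), np.random.default_rng(4)
    for _ in range(200):
        xi = rng.uniform(-2, 2)
        model.update([xi], np.sin(2 * xi) + 0.05 * rng.standard_normal())
    print(model.predict([1.0]), np.sin(2.0))
    ```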

  12. Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

    KAUST Repository

    Liu, Bo

    2015-11-11

    We consider the Bayesian filtering problem for data assimilation following the kernel-based ensemble Gaussian-mixture filtering (EnGMF) approach introduced by Anderson and Anderson (1999). In this approach, the posterior distribution of the system state is propagated with the model using the ensemble Monte Carlo method, providing a forecast ensemble that is then used to construct a prior Gaussian-mixture (GM) based on the kernel density estimator. This results in two update steps: a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, we analyze the influence of the bandwidth parameter of the kernel function on the covariance of the posterior distribution. We then focus on two aspects: i) the efficient implementation of EnGMF with (relatively) small ensembles, where we propose a new deterministic resampling strategy preserving the first two moments of the posterior GM to limit the sampling error; and ii) the analysis of the effect of the bandwidth parameter on contributions of KF and PF updates and on the weights variance. Numerical results using the Lorenz-96 model are presented to assess the behavior of EnGMF with deterministic resampling, study its sensitivity to different parameters and settings, and evaluate its performance against ensemble KFs. The proposed EnGMF approach with deterministic resampling suggests improved estimates in all tested scenarios, and is shown to require less localization and to be less sensitive to the choice of filtering parameters.
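
    One EnGMF analysis step for a linear observation operator can be sketched as below; the bandwidth factor and the omission of the resampling step are simplifying assumptions, not the paper's exact implementation.

    ```python
    # Sketch: one EnGMF update. A bandwidth-scaled ensemble covariance defines
    # the Gaussian-mixture kernels; members get a KF-like shift and weights a
    # PF-like update (resampling omitted).
    import numpy as np

    def engmf_update(ens, H, y, R, beta2=0.3):     # beta2: bandwidth factor (assumed)
        B = beta2 * np.cov(ens.T)                  # kernel covariance of the GM prior
        S = H @ B @ H.T + R                        # innovation covariance per kernel
        K = B @ H.T @ np.linalg.inv(S)             # gain shared by all kernels
        innov = y - ens @ H.T                      # (n_members, n_obs) innovations
        ens_a = ens + innov @ K.T                  # KF-like update of the members
        logw = -0.5 * np.einsum('ij,jk,ik->i', innov, np.linalg.inv(S), innov)
        w = np.exp(logw - logw.max())
        return ens_a, w / w.sum()                  # PF-like weight update

    rng = np.random.default_rng(5)
    ens = rng.normal(0.0, 1.0, (50, 2))            # 50-member, 2-variable ensemble
    H, R = np.array([[1.0, 0.0]]), np.array([[0.1]])
    ens_a, w = engmf_update(ens, H, np.array([0.8]), R)
    print(ens_a.mean(axis=0), w.max())
    ```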

  13. The Classification of Diabetes Mellitus Using Kernel k-means

    Science.gov (United States)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm: it uses kernel learning and can handle data that are not linearly separable, which is where it differs from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means performs well, considerably better than SOM.
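
    A minimal kernel k-means, computing centroid distances purely from the Gram matrix, is sketched below; the RBF width, cluster count and toy data are illustrative, not tuned to any diabetes dataset.

    ```python
    # Sketch: kernel k-means. Distances to cluster centroids in feature space:
    # ||phi(x)-mu_c||^2 = K_ii - (2/|c|) sum_j K_ij + (1/|c|^2) sum_jl K_jl.
    import numpy as np

    def kernel_kmeans(K, n_clusters, n_iter=50, seed=0):
        n = K.shape[0]
        labels = np.random.default_rng(seed).integers(n_clusters, size=n)
        for _ in range(n_iter):
            dist = np.zeros((n, n_clusters))
            for c in range(n_clusters):
                mask = labels == c
                nc = mask.sum()
                if nc == 0:
                    dist[:, c] = np.inf
                    continue
                dist[:, c] = (np.diag(K) - 2.0 * K[:, mask].sum(1) / nc
                              + K[np.ix_(mask, mask)].sum() / nc**2)
            new = dist.argmin(1)
            if np.array_equal(new, labels):
                break
            labels = new
        return labels

    rng = np.random.default_rng(6)
    X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
    K = np.exp(-((X[:, None] - X[None, :])**2).sum(-1) / 2.0)   # RBF Gram matrix
    print(np.bincount(kernel_kmeans(K, 2)))
    ```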

  14. OS X and iOS Kernel Programming

    CERN Document Server

    Halvorsen, Ole Henry

    2011-01-01

    OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i

  15. Local Image Descriptors Using Supervised Kernel ICA

    Science.gov (United States)

    Yamazaki, Masaki; Fels, Sidney

PCA-SIFT is an extension to SIFT which aims to reduce SIFT's high dimensionality (128 dimensions) by applying PCA to the gradient image patches. However, PCA is not a discriminative representation for recognition, owing to its global feature nature and unsupervised formulation. In addition, linear methods such as PCA and ICA can fail in the case of non-linearity. In this paper, we propose a new discriminative method called Supervised Kernel ICA (SKICA) that uses a non-linear kernel approach combined with supervised ICA-based local image descriptors. Our approach blends the advantages of supervised learning with the nonlinear properties of kernels. Using five different test data sets we show that the SKICA descriptors produce better object recognition performance than other related approaches of the same dimensionality. The SKICA-based representation has local sensitivity, non-linear independence and high class separability, providing an effective method for local image descriptors.

  16. A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature.

    Science.gov (United States)

    Tikk, Domonkos; Thomas, Philippe; Palaga, Peter; Hakenberg, Jörg; Leser, Ulf

    2010-07-01

    The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein-protein interactions (PPIs) reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed - convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study shows that three

  17. A Kernel-Based Approach for Biomedical Named Entity Recognition

    Directory of Open Access Journals (Sweden)

    Rakesh Patra

    2013-01-01

Full Text Available The support vector machine (SVM) is one of the popular machine learning techniques used in various text processing tasks, including named entity recognition (NER). The performance of the SVM classifier largely depends on the appropriateness of the kernel function. In the last few years a number of task-specific kernel functions have been proposed and used in various text processing tasks, for example, the string kernel, graph kernel, tree kernel and so on. So far very few efforts have been devoted to the development of an NER-task-specific kernel. In the literature we found that the tree kernel has been used in the NER task only for entity boundary detection or reannotation. The conventional tree kernel is unable to execute the complete NER task on its own. In this paper we propose a kernel function, motivated by the tree kernel, which is able to perform the complete NER task. To examine the effectiveness of the proposed kernel, we have applied the kernel function to the openly available JNLPBA 2004 data. Our kernel executes the complete NER task and achieves reasonable accuracy.

  18. Non-destructive detection of flawed hazelnut kernels and lipid oxidation assessment using NIR spectroscopy

    NARCIS (Netherlands)

    Pannico, A.; Schouten, R.E.; Basile, B.; Woltering, E.J.; Cirillo, C.

    2015-01-01

    Microbial contamination, seed browning, bad taste and lipid oxidation are primary causes of quality deterioration in stored hazelnuts, affecting their marketability. The feasibility of NIR spectroscopy to detect flawed kernels and estimate lipid oxidation in in-shell and shelled hazelnuts was

  19. Cost Sensitive Online Multiple Kernel Classification

    Science.gov (United States)

    2016-11-22

The available text consists of fragments describing the evaluation setup: a dataset built from the Android Malware Genome Project (classifying apps as malware or not); a page-blocks dataset (classifying page blocks as text or not); and the anomaly detection datasets KDD08 (from the KDD Cup 2008 dataset on breast cancer) and Malware. Table 1 of the report lists, among other details, D7 KDD08 (102294, 117) and D8 Malware (208243, 122). A section on kernels notes that different kernels are suitable for different types of data.

  1. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    Based on work by Pearson in 1901, Hotelling in 1933 introduced principal component analysis (PCA). PCA is often used for general feature generation and linear orthogonalization or compression by dimensionality reduction of correlated multivariate data, see Jolliffe for a comprehensive description...... version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...

  2. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    Principal component analysis (PCA) [1] is often used for general feature generation and linear orthogonalization or compression by dimensionality reduction of correlated multivariate data, see Jolliffe [2] for a comprehensive description of PCA and related techniques. Schölkopf et al. [3] introduce...... feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence...

  3. Alumina Concentration Detection Based on the Kernel Extreme Learning Machine.

    Science.gov (United States)

    Zhang, Sen; Zhang, Tao; Yin, Yixin; Xiao, Wendong

    2017-09-01

The concentration of alumina in the electrolyte is of great significance during the production of aluminum. An inappropriate alumina concentration may lead to unbalanced material distribution and low production efficiency, and affect the stability of the aluminum reduction cell and the current efficiency. The existing methods cannot meet the needs of online measurement because industrial aluminum electrolysis is characterized by high temperature, a strong magnetic field, coupled parameters, and high nonlinearity. Currently, there are no sensors or equipment that can detect the alumina concentration online. Most companies obtain the alumina concentration from electrolyte samples that are analyzed with an X-ray fluorescence spectrometer. To solve this problem, the paper proposes a soft sensing model based on a kernel extreme learning machine algorithm that incorporates a kernel function into the extreme learning machine. K-fold cross validation is used to estimate the generalization error. The proposed soft sensing algorithm can detect the alumina concentration from electrical signals such as the voltages and currents of the anode rods. The predicted results show that the proposed approach gives more accurate estimations of alumina concentration with faster learning speed compared with other methods such as the basic ELM, BP, and SVM.
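
    The model-selection recipe can be sketched with kernel ridge regression standing in for KELM (with an RBF kernel the two share essentially the same closed-form solution, up to the regularization convention); the synthetic features below are stand-ins for the anode-rod voltage and current signals.

    ```python
    # Sketch: K-fold cross validation to pick kernel parameters for an RBF
    # kernel ridge regressor (a close relative of KELM). Data are synthetic.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(7)
    X = rng.normal(size=(300, 4))                 # mock electrical-signal features
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(300)

    search = GridSearchCV(
        KernelRidge(kernel="rbf"),
        {"alpha": [1e-3, 1e-2, 1e-1], "gamma": [0.1, 0.5, 1.0]},
        cv=5,                                     # 5-fold CV estimates generalization error
    )
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))
    ```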

  4. Ensemble-based forecasting at Horns Rev: Ensemble conversion and kernel dressing

    DEFF Research Database (Denmark)

    Pinson, Pierre; Madsen, Henrik

For management and trading purposes, information on short-term wind generation (from a few hours to a few days ahead) is even more crucial at large offshore wind farms, since they concentrate a large capacity at a single location. The most complete information that can be provided today consists....... The obtained ensemble forecasts of wind power are then converted into predictive distributions with an original adaptive kernel dressing method. The shape of the kernels is driven by a mean-variance model, the parameters of which are recursively estimated in order to maximize the overall skill of obtained...

  5. Solving a Volterra integral equation with weakly singular kernel in the reproducing kernel space

    Directory of Open Access Journals (Sweden)

    Fazhan Geng

    2010-06-01

Full Text Available In this paper, we will present a new method for a Volterra integral equation with a weakly singular kernel in the reproducing kernel space. Firstly, the equation is transformed into a new equivalent equation. Its exact solution is represented in the form of a series in the reproducing kernel space. In the meantime, the n-term approximation $u_{n}(t)$ to the exact solution $u(t)$ is obtained. Some numerical examples are studied to demonstrate the accuracy of the present method. Results obtained by the method are compared with the exact solution of each example and are found to be in good agreement with each other.

  6. A statistical method for estimating wood thermal diffusivity and probe geometry using in situ heat response curves from sap flow measurements.

    Science.gov (United States)

    Chen, Xingyuan; Miller, Gretchen R; Rubin, Yoram; Baldocchi, Dennis D

    2012-12-01

    The heat pulse method is widely used to measure water flux through plants; it works by using the speed at which a heat pulse is propagated through the system to infer the velocity of water through a porous medium. No systematic, non-destructive calibration procedure exists to determine the site-specific parameters necessary for calculating sap velocity, e.g., wood thermal diffusivity and probe spacing. Such parameter calibration is crucial to obtain the correct transpiration flux density from the sap flow measurements at the plant scale and subsequently to upscale tree-level water fluxes to canopy and landscape scales. The purpose of this study is to present a statistical framework for sampling and simultaneously estimating the tree's thermal diffusivity and probe spacing from in situ heat response curves collected by the implanted probes of a heat ratio measurement device. Conditioned on the time traces of wood temperature following a heat pulse, the parameters are inferred using a Bayesian inversion technique, based on the Markov chain Monte Carlo sampling method. The primary advantage of the proposed methodology is that it does not require knowledge of probe spacing or any further intrusive sampling of sapwood. The Bayesian framework also enables direct quantification of uncertainty in estimated sap flow velocity. Experiments using synthetic data show that repeated tests using the same apparatus are essential for obtaining reliable and accurate solutions. When applied to field conditions, these tests can be obtained in different seasons and can be automated using the existing data logging system. Empirical factors are introduced to account for the influence of non-ideal probe geometry on the estimation of heat pulse velocity, and are estimated in this study as well. The proposed methodology may be tested for its applicability to realistic field conditions, with an ultimate goal of calibrating heat ratio sap flow systems in practical applications.
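
    A toy version of the Bayesian inversion idea, assuming zero sap flow, an ideal instantaneous line-source forward model, flat positive priors, and made-up noise and pulse-strength values; the paper's forward model and sampling scheme are richer than this.

    ```python
    # Sketch: Metropolis sampler jointly inferring thermal diffusivity kappa and
    # probe spacing x from a heat-pulse temperature trace (zero-flow case).
    import numpy as np

    def forward(t, kappa, x, q=2e-4):              # q: pulse strength (assumed)
        return q / (4 * np.pi * kappa * t) * np.exp(-x**2 / (4 * kappa * t))

    rng = np.random.default_rng(8)
    t = np.linspace(5, 120, 60)                      # seconds after the pulse
    true_kappa, true_x, sigma = 2.5e-7, 6e-3, 0.002  # m^2 s^-1, m, K (made up)
    obs = forward(t, true_kappa, true_x) + sigma * rng.standard_normal(t.size)

    def log_post(theta):
        kappa, x = theta
        if kappa <= 0 or x <= 0:                     # flat positive priors (assumed)
            return -np.inf
        resid = obs - forward(t, kappa, x)
        return -0.5 * np.sum(resid**2) / sigma**2

    theta, lp, chain = np.array([2e-7, 5e-3]), None, []
    lp = log_post(theta)
    for _ in range(20000):
        prop = theta + rng.normal(0, [1e-8, 1e-4])   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    kappa_hat, x_hat = np.mean(chain[5000:], axis=0)
    print(f"kappa ~ {kappa_hat:.2e} m^2/s, spacing ~ {x_hat * 1e3:.2f} mm")
    ```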

  7. Evaluation of Multiple Kernel Learning Algorithms for Crop Mapping Using Satellite Image Time-Series Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2017-09-01

Crop mapping through classification of Satellite Image Time-Series (SITS) data can provide very valuable information for several agricultural applications, such as crop monitoring, yield estimation, and crop inventory. However, SITS data classification is not straightforward, because different images of a SITS dataset carry different levels of information relevant to the classification problem, and because SITS data are four-dimensional and cannot be classified using conventional classification algorithms. To address these issues, in this paper we present a classification strategy based on Multiple Kernel Learning (MKL) algorithms for SITS data classification. In this strategy, different kernels are first constructed from the different images of the SITS data and then combined into a composite kernel using an MKL algorithm. The composite kernel, once constructed, can be used for classification of the data with kernel-based classification algorithms. We compared the computational time and the classification performance of the proposed strategy using different MKL algorithms for the purpose of crop mapping. The considered MKL algorithms are the MKL-Sum, SimpleMKL, LPMKL and Group-Lasso MKL algorithms. Experimental tests of the proposed strategy on two SITS data sets, acquired by SPOT satellite sensors, showed that the strategy provides better performance than a standard classification algorithm. The results also showed that the optimization method of the MKL algorithm used affects both the computational time and the classification accuracy of the strategy.
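
    The simplest composite kernel (an unweighted MKL-Sum) can be sketched as follows; learned weights from SimpleMKL, LPMKL or Group-Lasso MKL would replace the uniform ones, and the time series here is synthetic.

    ```python
    # Sketch: composite-kernel classification of an image time series. One RBF
    # Gram matrix per date is averaged into a composite kernel consumed by a
    # precomputed-kernel SVM.
    import numpy as np
    from sklearn.svm import SVC

    def rbf_gram(A, B, gamma=0.5):
        sq = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-gamma * sq)

    rng = np.random.default_rng(9)
    n, n_dates, n_bands = 120, 6, 4
    sits = rng.normal(size=(n_dates, n, n_bands))        # mock image time series
    y = (sits[2, :, 0] + sits[4, :, 1] > 0).astype(int)  # labels tied to two dates

    weights = np.full(n_dates, 1.0 / n_dates)            # uniform kernel weights
    K = sum(w * rbf_gram(sits[d], sits[d]) for d, w in enumerate(weights))

    clf = SVC(kernel="precomputed").fit(K, y)
    print("train accuracy:", clf.score(K, y))
    ```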

  9. KERNEL-BASED UNSUPERVISED CHANGE DETECTION OF AGRICULTURAL LANDS USING MULTI-TEMPORAL POLARIMETRIC SAR DATA

    Directory of Open Access Journals (Sweden)

    M. A. Fazel

    2013-09-01

    Unsupervised change detection of agricultural lands over seasonal and annual periods is necessary for farming activities and yield estimation. Polarimetric Synthetic Aperture Radar (PolSAR) data, due to their special characteristics, are a powerful source for studying the temporal behaviour of land cover types. PolSAR data provide observations sensitive to the shape, orientation and dielectric properties of scatterers, and allow the development of physical models for identifying and separating the scattering mechanisms occurring inside the same region of observed lands. In this paper an unsupervised kernel-based method is introduced for agricultural change detection with PolSAR data. The method works by transforming the data into a higher dimensional space via kernel functions and clustering them in that space. A kernel-based c-means clustering algorithm is employed to separate the change classes from the no-change class. The method is a non-linear algorithm that considers the contextual information of observations; using kernel functions helps make non-linearly separable features separable in a linear space. In addition, using eigenvector parameters as a polarimetric target decomposition technique lets us exploit the physical properties of targets in PolSAR change detection. With proper initialization, kernel-based c-means clustering leads to strong results in the change detection paradigm.
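
    A hard (non-fuzzy) kernel c-means sketch illustrates the clustering step: feature-space distances to cluster centroids are computed from the Gram matrix alone, so the nonlinear mapping never has to be evaluated explicitly. This is a generic stand-in, not the authors' exact algorithm.

        import numpy as np

        def kernel_cmeans(K, n_clusters=2, n_iter=50, seed=0):
            # Kernel c-means (hard assignments) on a precomputed Gram matrix K.
            rng = np.random.default_rng(seed)
            n = K.shape[0]
            labels = rng.integers(n_clusters, size=n)
            for _ in range(n_iter):
                dist = np.empty((n, n_clusters))
                for c in range(n_clusters):
                    mask = labels == c
                    nc = mask.sum()
                    if nc == 0:
                        dist[:, c] = np.inf  # empty cluster attracts nothing
                        continue
                    # ||phi(x) - m_c||^2 expressed via kernel entries only
                    dist[:, c] = (np.diag(K)
                                  - 2.0 * K[:, mask].sum(axis=1) / nc
                                  + K[np.ix_(mask, mask)].sum() / nc ** 2)
                new_labels = dist.argmin(axis=1)
                if np.array_equal(new_labels, labels):
                    break
                labels = new_labels
            return labels  # e.g. change vs. no-change clusters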

  10. Physicochemical characteristics of kernel during fruit maturation of ...

    African Journals Online (AJOL)

    USER

    2010-04-05

    At full maturity, coconuts consist of an average of 33% husk, 16% shell, 33% kernel and 18% coconut water (Konan, 1997). Dried mature coconut kernel, known as copra, contains 6% moisture and is one of the main coco- ...

  11. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    National Research Council Canada - National Science Library

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

    In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...

  12. An Analysis of Three Kernel-based Multilevel Security Architectures

    National Research Council Canada - National Science Library

    Levin, Timothy E; Irvine, Cynthia E; Nguyen, Thuy D

    2006-01-01

    This paper provides an analysis of the relative merits of three architectural types: one based on a traditional separation kernel, another based on a security kernel, and a third based on a high...

  13. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    This paper introduces kernel versions of maximum autocorrelation factor (MAF) analysis and minimum noise fraction (MNF) analysis. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. All quantities needed in the analysis are expressed in terms of the kernel function, which means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA), kernel MAF and kernel MNF analyses handle nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. Three examples show the very successful application of kernel MAF/MNF analysis to 1) change detection in DLR 3K camera data recorded 0.7 seconds apart over a busy motorway, 2) change detection...

  14. Estimation and monitoring heat discharge rates using Landsat ETM+ thermal infrared data: a case study in Unzen geothermal field, Kyushu, Japan

    Science.gov (United States)

    Mia, Md. B.; Fujimitsu, Yasuhiro; Bromely, Chris J.

    2012-10-01

    The Unzen geothermal field, our study area, is an active fumarole field situated on the Shimabara Peninsula of Kyushu Island, Japan. Our prime objectives were (1) to estimate radiative heat flux (RHF), (2) to approximate the heat discharge rate (HDR) using the relationship between radiative heat flux and total heat loss derived from two geothermal field studies, and (3) to monitor RHF as well as HDR in our study area using seven sets of Landsat 7 ETM+ images from 2000 to 2009. We used the NDVI (Normalized Difference Vegetation Index) method for spectral emissivity estimation, the mono-window algorithm for land surface temperature (LST), and the Stefan-Boltzmann equation applied to the satellite TIR images for RHF. Using random samples, we obtained the expected strong correlation between LST above ambient and RHF. We estimated that the maximum RHF was about 251 W/m2 in 2005 and the minimum about 27 W/m2 in 2001. The highest total RHF was about 39.1 MW in 2005 and the lowest about 12 MW in 2001 in our study region. From our studies, the estimated RHF was about 15.7% of HDR, and we applied this percentage to estimate the heat discharge rate in the Unzen geothermal area. The monitoring results showed a single-peaked trend in HDR from 2000 to 2009, with a maximum of about 252 MW in 2005 and a minimum of about 78 MW in 2001. In conclusion, TIR remote sensing is considered the best option for monitoring heat losses from fumaroles with high efficiency and low cost.
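
    The per-pixel flux step follows directly from the Stefan-Boltzmann law; the sketch below assumes the emissivity (from the NDVI method) and ambient temperature are already available, and uses the 60 m resampled pixel size of the ETM+ thermal band.

        SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        def radiative_heat_flux(lst_kelvin, t_ambient_kelvin, emissivity):
            # Radiative heat flux above ambient for one pixel (W/m^2).
            return emissivity * SIGMA * (lst_kelvin ** 4 - t_ambient_kelvin ** 4)

        def total_rhf_megawatts(flux_w_m2, pixel_area_m2=60.0 * 60.0):
            # Sum per-pixel fluxes (a NumPy array) over the anomaly area.
            return float(flux_w_m2.sum()) * pixel_area_m2 / 1e6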

  15. The heat kernel and Hardy's theorem on symmetric spaces of ...

    Indian Academy of Sciences (India)

    Author Affiliations. E K Narayanan1 2 S K Ray3. Stat-Math Unit, Indian Statistical Institute, 8th Mile Mysore Road, Bangalore 560 059, India; Math. and Comp. Sci. Dept., Bar-Ilan University, 52900 Ramat-Gan, Israel. Department of Mathematics, Indian Institute of Technology, Kanpur, Kanpur 208 016, India ...

  16. Heat kernel for Newton-Cartan trace anomalies

    Energy Technology Data Exchange (ETDEWEB)

    Auzzi, Roberto [Dipartimento di Matematica e Fisica, Università Cattolica del Sacro Cuore, Via Musei 41, Brescia, 25121 (Italy); INFN Sezione di Perugia, Via A. Pascoli, Perugia, 06123 (Italy); Nardelli, Giuseppe [Dipartimento di Matematica e Fisica, Università Cattolica del Sacro Cuore, Via Musei 41, Brescia, 25121 (Italy); TIFPA - INFN, Università di Trento,c/o Dipartimento di Fisica, Povo, TN, 38123 (Italy)

    2016-07-11

    We compute the leading part of the trace anomaly for a free non-relativistic scalar in 2+1 dimensions coupled to a background Newton-Cartan metric. The anomaly is proportional to 1/m, where m is the mass of the scalar. We comment on the implications of a conjectured a-theorem for non-relativistic theories with boost invariance.

  17. Cowling–Price theorem and characterization of heat kernel on ...

    Indian Academy of Sciences (India)

    [6] Bonami A, Demange B and Jaming P, Hermite functions and uncertainty principles for the Fourier and windowed Fourier transforms, à paraître dans Revista Ibero Americana. [7] Cowling M G and Price J F, Generalizations of Heisenberg's inequality, in: Harmonic analysis (eds) G Mauceri, F Ricci, G Weiss (1983) LNM, No. ...

  18. The heat kernel and Hardy's theorem on symmetric spaces of ...

    Indian Academy of Sciences (India)

    ... that is, the Plancherel measure is of at most polynomial growth. Let G/K be the Riemannian symmetric space equipped with a G-invariant Riemannian metric and the Laplace-Beltrami operator on G/K. Then there exists a unique family of smooth functions h_t, t > 0, with the following properties: (i) h_t is K-biinvariant, for each t ...

  19. One Point Isometric Matching with the Heat Kernel

    KAUST Repository

    Ovsjanikov, Maks

    2010-09-21

    A common operation in many geometry processing algorithms consists of finding correspondences between pairs of shapes by finding structure-preserving maps between them. A particularly useful case of such maps is isometries, which preserve geodesic distances between points on each shape. Although several algorithms have been proposed to find approximately isometric maps between a pair of shapes, the structure of the space of isometries is not well understood. In this paper, we show that under mild genericity conditions, a single correspondence can be used to recover an isometry defined on entire shapes, and thus the space of all isometries can be parameterized by one correspondence between a pair of points. Perhaps surprisingly, this result is general, and does not depend on the dimensionality or the genus, and is valid for compact manifolds in any dimension. Moreover, we show that both the initial correspondence and the isometry can be recovered efficiently in practice. This allows us to devise an algorithm to find intrinsic symmetries of shapes, match shapes undergoing isometric deformations, as well as match partial and incomplete models efficiently. Journal compilation © 2010 The Eurographics Association and Blackwell Publishing Ltd.
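
    For intuition, the heat kernel such methods rely on can be evaluated on a discretised shape from a truncated eigendecomposition of the Laplace-Beltrami operator; a minimal sketch, assuming the eigenpairs are precomputed:

        import numpy as np

        def heat_kernel(eigvals, eigvecs, t):
            # k_t(x, y) = sum_i exp(-lambda_i t) phi_i(x) phi_i(y), truncated to
            # the supplied eigenpairs (eigvecs: n_vertices x n_eigenpairs).
            weights = np.exp(-eigvals * t)
            return (eigvecs * weights) @ eigvecs.T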

  20. Review and Comparison of Kernel Based Fuzzy Image Segmentation Techniques

    OpenAIRE

    Prabhjot Kaur; Pallavi Gupta; Poonam Sharma

    2012-01-01

    This paper presents a detailed study and comparison of some kernelized Fuzzy C-Means clustering based image segmentation algorithms. Four algorithms are used: Fuzzy C-Means (FCM), Kernel Fuzzy C-Means (KFCM), Intuitionistic Kernelized Fuzzy C-Means (KIFCM), and Kernelized Type-II Fuzzy C-Means (KT2FCM). The four algorithms are studied and analyzed both quantitatively and qualitatively. These algorithms are implemented on synthetic images, in the noise-free case as well as ...

  1. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation, distinct from the others. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and therefore presents us with opportunities for improving the quality of RTM images.

  2. Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws

    OpenAIRE

    Mohammed D. ABDULMALIK; Shafi’i M. ABDULHAMID

    2008-01-01

    Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements for the 64-bit edition of Windows Vista. We also point out some weakness areas (flaws) that ...

  3. Directed acyclic graph kernels for structural RNA analysis

    OpenAIRE

    Mituyama Toutai; Sato Kengo; Asai Kiyoshi; Sakakibara Yasubumi

    2008-01-01

    Background: Recent discoveries of a large variety of important roles for non-coding RNAs (ncRNAs) have been reported by numerous researchers. In order to analyze ncRNAs by kernel methods including support vector machines, we propose stem kernels as an extension of string kernels for measuring the similarities between two RNA sequences from the viewpoint of secondary structures. However, applying stem kernels directly to large data sets of ncRNAs is impractical due to their computation...

  4. A class of kernel based real-time elastography algorithms.

    Science.gov (United States)

    Kibria, Md Golam; Hasan, Md Kamrul

    2015-08-01

    In this paper, a novel real-time kernel-based and gradient-based Phase Root Seeking (PRS) algorithm for ultrasound elastography is proposed. The signal-to-noise ratio of the strain image resulting from this method is improved by minimizing the cross-correlation discrepancy between the pre- and post-compression radio frequency signals with an adaptive temporal stretching method and by employing built-in smoothing through an exponentially weighted neighborhood kernel in the displacement calculation. Unlike conventional PRS algorithms, displacement due to tissue compression is estimated from the root of the weighted average of the zero-lag cross-correlation phases of the pair of corresponding analytic pre- and post-compression windows in the neighborhood kernel. In addition to the proposed one, the other time- and frequency-domain elastography algorithms (Ara et al., 2013; Hussain et al., 2012; Hasan et al., 2012) proposed by our group are also implemented in real-time in Java, where the computations are executed serially or in parallel on multiple processors with efficient memory management. Simulation results using a finite element modeling simulation phantom show that the proposed method significantly improves the strain image quality in terms of elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe) and mean structural similarity (MSSIM) for strains as high as 4%, as compared to other techniques reported in the literature. Strain images obtained for the experimental phantom as well as in vivo breast data of malignant or benign masses also show the efficacy of our proposed method over the other techniques reported in the literature. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Spatio-temporal reconstruction of air temperature maps and their application to estimate rice growing season heat accumulation using multi-temporal MODIS data*

    Science.gov (United States)

    Zhang, Li-wen; Huang, Jing-feng; Guo, Rui-fang; Li, Xin-xing; Sun, Wen-bo; Wang, Xiu-zhen

    2013-01-01

    The accumulation of thermal time usually represents the local heat resources that drive crop growth. Maps of temperature-based agro-meteorological indices are commonly generated by spatial interpolation of data collected from meteorological stations with coarse geographic continuity. To solve the critical problems of estimating air temperature (Ta) and filling in pixels missing due to cloudy and low-quality images when calculating growing degree days (GDDs) from remotely sensed data, a novel spatio-temporal algorithm for Ta estimation from Terra and Aqua moderate resolution imaging spectroradiometer (MODIS) data was proposed. This is a preliminary study to calculate heat accumulation, expressed in accumulative growing degree days (AGDDs) above 10 °C, from reconstructed Ta based on MODIS land surface temperature (LST) data. Verification of MODIS-derived maximum Ta, minimum Ta, GDD, and AGDD against meteorological calculations showed high correlations, significant at the 0.01 level. Overall, MODIS-derived AGDD was slightly underestimated, with a relative error of almost 10%. However, the feasibility of employing AGDD anomaly maps to characterize the 2001–2010 spatio-temporal variability of heat accumulation, and of estimating the 2011 heat accumulation distribution using only MODIS data, was demonstrated in the current paper. Our study may supply a novel way to calculate AGDD in heat-related studies concerning crop growth monitoring, agricultural climatic regionalization, and agro-meteorological disaster detection at the regional scale. PMID:23365013
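
    The degree-day bookkeeping itself is simple; a sketch using the standard averaging formula with a 10 °C base (the paper's exact variant may differ):

        import numpy as np

        def daily_gdd(t_max, t_min, base=10.0):
            # Growing degree days for one day above the base temperature.
            return np.maximum((t_max + t_min) / 2.0 - base, 0.0)

        def agdd(t_max_series, t_min_series, base=10.0):
            # Accumulative growing degree days over a season.
            return float(np.sum(daily_gdd(np.asarray(t_max_series),
                                          np.asarray(t_min_series), base)))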

  6. Localized Multiple Kernel Learning: A Convex Approach

    Science.gov (United States)

    2016-11-22

  7. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely...

  8. A synthesis of empirical plant dispersal kernels

    Czech Academy of Sciences Publication Activity Database

    Bullock, J. M.; González, L. M.; Tamme, R.; Götzenberger, Lars; White, S. M.; Pärtel, M.; Hooftman, D. A. P.

    2017-01-01

    Vol. 105, No. 1 (2017), pp. 6-19. ISSN 0022-0477. Institutional support: RVO:67985939. Keywords: dispersal kernel; dispersal mode; probability density function. Subject RIV: EH - Ecology, Behaviour. OECD field: Ecology. Impact factor: 5.813 (2016).

  9. Flexible Scheduling in Multimedia Kernels: An Overview

    NARCIS (Netherlands)

    Jansen, P.G.; Scholten, Johan; Laan, Rene; Chow, W.S.

    1999-01-01

    Current Hard Real-Time (HRT) kernels have their timely behaviour guaranteed at the cost of a rather restrictive use of the available resources. This makes current HRT scheduling techniques inadequate for use in a multimedia environment, where we can make a considerable profit by a better and more ...

  10. 42 Variability Bugs in the Linux Kernel

    DEFF Research Database (Denmark)

    Abal, Iago; Brabrand, Claus; Wasowski, Andrzej

    2014-01-01

    We present a qualitative study of 42 variability bugs collected from bug-fixing commits to the Linux kernel repository. We analyze each of the bugs and record the results in a database, serving to evaluate tool implementations of feature-sensitive analyses by testing them on real bugs. In addition, we ...

  11. ACUTE AND SUBCHRONIC TOXICITY STUDIES OF KERNEL ...

    African Journals Online (AJOL)

    Administrator

    1Department of Pure and Applied Chemistry, 2Department of ... Therefore, this paper reports the evaluation of the safety of seed kernel extract of the ... signs of renal failure (Hassan et al., 2005). ... Medical laboratory manual for tropical countries. ... February, 2011 from www.oecd.org/dataoecd/17/51/1948378.pdf. Ojewole ...

  12. Analytic properties of the Virasoro modular kernel

    Energy Technology Data Exchange (ETDEWEB)

    Nemkov, Nikita [Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Institute for Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); National University of Science and Technology MISIS, The Laboratory of Superconducting metamaterials, Moscow (Russian Federation)

    2017-06-15

    On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block. (orig.)

  13. Structural operational semantics for Kernel Andorra Prolog

    NARCIS (Netherlands)

    S. Haridi (Seif); C. Palamidessi (Catuscia)

    1991-01-01

    Kernel Andorra Prolog is a framework for nondeterministic concurrent constraint logic programming languages. Many languages, such as Prolog, GHC, Parlog, and Atomic Herbrand, can be seen as instances of this framework, by adding specific constraint systems and constraint operations, and ...

  14. Convolution kernels for multi-wavelength imaging

    Science.gov (United States)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains of up to two orders of magnitude are obtained with respect to kernels computed assuming Gaussian or circularised PSFs. A software package to compute these kernels is available at https://github.com/aboucaud/pypher
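
    In outline, such a kernel can be obtained by regularised deconvolution in the Fourier domain. The sketch below uses a single scalar regularisation weight and assumes equally sized, centred PSF arrays; the authors' pypher tool exposes a tunable regularisation parameter and treats anisotropic PSFs more carefully.

        import numpy as np

        def psf_matching_kernel(psf_source, psf_target, mu=1e-4):
            # Kernel k such that psf_source convolved with k approximates
            # psf_target, via Wiener filtering; mu sets the regularisation strength.
            S = np.fft.fft2(np.fft.ifftshift(psf_source))
            T = np.fft.fft2(np.fft.ifftshift(psf_target))
            wiener = np.conj(S) / (np.abs(S) ** 2 + mu * np.abs(S).max() ** 2)
            return np.real(np.fft.fftshift(np.fft.ifft2(wiener * T)))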

  15. The scalar field kernel in cosmological spaces

    NARCIS (Netherlands)

    Koksma, J.F.; Prokopec, T.|info:eu-repo/dai/nl/326113398; Rigopoulos, G.I.

    2008-01-01

    We construct the quantum mechanical evolution operator in the Functional Schrodinger picture - the kernel - for a scalar field in spatially homogeneous FLRW spacetimes when the field is a) free and b) coupled to a spacetime dependent source term. The essential element in the construction is the

  16. A Fast and Simple Graph Kernel for RDF

    NARCIS (Netherlands)

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster
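
    The path-counting idea can be sketched as follows: enumerate labelled downward paths up to a fixed depth in each instance tree and take the inner product of the resulting count vectors. This is a generic illustration of the construction, not the paper's exact kernel.

        from collections import Counter

        def path_counts(children, node, depth, prefix=()):
            # Count labelled downward paths of length <= depth starting at `node`;
            # `children` maps a node to a list of (edge_label, child) pairs.
            counts = Counter()
            if depth == 0:
                return counts
            for label, child in children.get(node, []):
                path = prefix + (label,)
                counts[path] += 1
                counts.update(path_counts(children, child, depth - 1, path))
            return counts

        def path_kernel(counts_a, counts_b):
            # Inner product of the two path-count feature vectors.
            return sum(c * counts_b[p] for p, c in counts_a.items())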

  17. Enhanced gluten properties in soft kernel durum wheat

    Science.gov (United States)

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  18. A compact kernel for the calculus of inductive constructions

    Indian Academy of Sciences (India)

    ... distributed XML repository of objects respecting the format of the new kernel. The second one is a wrapper around the library of the old kernel implementation. Every time an old object is requested, we type-check it using the old kernel and we translate it to the new format. We also exploit memoization to avoid translating the ...

  19. Object classification and detection with context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

    Context information is important in object representation. By embedding context cue of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial...

  20. Structured functional additive regression in reproducing kernel Hilbert spaces.

    Science.gov (United States)

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2014-06-01

    Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for the structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application.

  1. Application of the Cercignani-Lampis scattering kernel to channel gas flows

    Science.gov (United States)

    Sharipov, Felix

    2001-08-01

    The Poiseuille flow, the thermal creep and the heat flux between two parallel plates are calculated by applying the S model of the Boltzmann equation and the Cercignani-Lampis scattering kernel. The calculations have been carried out over wide ranges of the rarefaction parameter and of the accommodation coefficients of momentum and energy. By comparing the present results with experimental data, the values of the accommodation coefficients can be determined.

  2. A comparison of daily water use estimates derived from constant-heat sap-flow probe values and gravimetric measurements in pot-grown saplings.

    Science.gov (United States)

    K.A. McCulloh; K. Winter; F.C. Meinzer; M. Garcia; J. Aranda; Lachenbruch B.

    2007-01-01

    The use of Granier-style heat dissipation sensors to measure sap flow is common in plant physiology, ecology, and hydrology. There has been concern that any change to the original Granier design invalidates the empirical relationship between sap flux density and the temperature difference between the probes. We compared daily water use estimates from gravimetric...

  3. Evaluation of three energy balance-based evaporation models for estimating monthly evaporation for five lakes using derived heat storage changes from a hysteresis model

    NARCIS (Netherlands)

    Duan, Z.; Bastiaanssen, W.G.M.

    2017-01-01

    The heat storage changes (Qt) can be a significant component of the energy balance in lakes, and it is important to account for Qt for reasonable estimation of evaporation at monthly and finer timescales if energy balance-based evaporation models are used. However, Qt has often been neglected in ...

  4. Kolkhoung (Pistacia khinjuk) Hull Oil and Kernel Oil as Antioxidative Vegetable Oils with High Oxidative Stability and Nutritional Value

    Directory of Open Access Journals (Sweden)

    Maryam Asnaashari

    2015-01-01

    In this study, in order to introduce a natural antioxidative vegetable oil to the food industry, kolkhoung hull oil and kernel oil were extracted. To evaluate their antioxidant efficiency, gas chromatography analysis of the fatty acid composition of kolkhoung hull and kernel oil and high-performance liquid chromatography analysis of tocopherols were carried out. The oxidative stability of the oils was also assessed based on the peroxide value and anisidine value during heating at 100, 110 and 120 °C. Gas chromatography analysis showed that oleic acid was the major fatty acid of both types of oil (hull and kernel), and based on a low content of saturated fatty acids, a high content of monounsaturated fatty acids, and the ratio of ω-6 to ω-3 polyunsaturated fatty acids, they were nutritionally well balanced. Moreover, both hull and kernel oil showed high oxidative stability during heating, which can be attributed to a high content of tocotrienols. Based on the results, kolkhoung hull oil performed slightly better than its kernel oil. However, both of them can be added to oxidation-sensitive oils to improve their shelf life.

  5. L-Kuramoto-Sivashinsky SPDEs in one-to-three dimensions: L-KS kernel, sharp Hölder regularity, and Swift-Hohenberg law equivalence

    Science.gov (United States)

    Allouba, Hassan

    2015-12-01

    Generalizing the L-Kuramoto-Sivashinsky (L-KS) kernel from our earlier work, we give a novel explicit-kernel formulation useful for a large class of fourth order deterministic, stochastic, linear, and nonlinear PDEs in multispatial dimensions. These include pattern formation equations like the Swift-Hohenberg and many other prominent and new PDEs. We first establish existence, uniqueness, and sharp dimension-dependent spatio-temporal Hölder regularity for the canonical (zero drift) L-KS SPDE, driven by white noise on R+ × R^d for d = 1, 2, 3. The spatio-temporal Hölder exponents are exactly the same as the striking ones we proved for our recently introduced Brownian-time Brownian motion (BTBM) stochastic integral equation, associated with time-fractional PDEs. The challenge here is that, unlike the positive BTBM density, the L-KS kernel is the Gaussian average of a modified, highly oscillatory, and complex Schrödinger propagator. We use a combination of harmonic and delicate analysis to get the necessary estimates. Second, attaching order parameters ε1 to the L-KS spatial operator and ε2 to the noise term, we show that the dimension-dependent critical ratio ε2/ε1^(d/8) controls the limiting behavior of the L-KS SPDE as ε1, ε2 ↘ 0; and we compare this behavior to that of the less regular second order heat SPDEs. Finally, we give a change-of-measure equivalence between the canonical L-KS SPDE and nonlinear L-KS SPDEs. In particular, we prove uniqueness in law for the Swift-Hohenberg SPDE and the law equivalence (and hence the same Hölder regularity) of the Swift-Hohenberg SPDE and the canonical L-KS SPDE on compacts in one-to-three dimensions.

  6. Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws

    Directory of Open Access Journals (Sweden)

    Mohammed D. ABDULMALIK

    2008-06-01

    Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements for the 64-bit edition of Windows Vista. We also point out some weakness areas (flaws) that can be exploited by malicious code, leading to compromise of the kernel.

  7. Kernel based orthogonalization for change detection in hyperspectral images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of principal component analysis (PCA) and minimum noise fraction (MNF) analysis are applied to change detection in hyperspectral image (HyMap) data. The kernel versions are based on so-called Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. Nonlinearities are handled by implicitly transforming the data into a higher dimensional feature space via the kernel function and then performing a linear analysis in that space. An example shows the successful application of (kernel PCA and) kernel MNF analysis to change detection in HyMap data covering a small agricultural area near Lake Waging-Taching, Bavaria, in Southern Germany. In the change detection ...

  8. Selection and properties of alternative forming fluids for TRISO fuel kernel production

    Science.gov (United States)

    Baker, M. P.; King, J. C.; Gorman, B. P.; Marshall, D. W.

    2013-01-01

    Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.
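
    For small droplets at low Reynolds number, the settling-velocity screening mentioned above can be approximated with Stokes' law; a sketch under that assumption, with illustrative argument names:

        def stokes_settling_velocity(d, rho_drop, rho_fluid, mu, g=9.81):
            # Stokes-regime terminal velocity (m/s) of a droplet of diameter d (m)
            # falling through a forming fluid of viscosity mu (Pa s).
            return g * d ** 2 * (rho_drop - rho_fluid) / (18.0 * mu)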

  9. Estimation of non-linear continuous time models for the heat exchange dynamics of building integrated photovoltaic modules

    DEFF Research Database (Denmark)

    Jimenez, M.J.; Madsen, Henrik; Bloem, J.J.

    2008-01-01

    ... heat interchanges are non-linear effects and represent significant contributions in a variety of components, such as photovoltaic integrated facades or roofs and those using these effects as passive cooling strategies. Since models are approximations of the physical system and data are encumbered with noise, ... it is found that a description of the non-linear heat transfer is essential. The resulting model is a non-linear first order stochastic differential equation for the heat transfer of the PV component.

  10. Estimation of Joule heating and its role in nonlinear electrical response of Tb0.5Sr0.5MnO3 single crystal

    Science.gov (United States)

    Nhalil, Hariharan; Elizabeth, Suja

    2016-12-01

    Highly non-linear I-V characteristics and apparent colossal electro-resistance were observed in the non-charge-ordered manganite Tb0.5Sr0.5MnO3 single crystal in low temperature transport measurements. Significant changes were noticed in the top surface temperature of the sample as compared to its base while passing current at low temperature. By analyzing these variations, we realize that the change in surface temperature (ΔTsur) is too small to have been caused by the strong negative differential resistance. A more accurate estimate of the change in the sample temperature was made by back-calculating the sample temperature from the temperature variation of resistance (R-T) data (ΔTcal), which was found to be higher than ΔTsur. This result indicates that there are large thermal gradients across the sample. The experimentally derived ΔTcal is validated with the help of a simple theoretical model and an estimation of Joule heating. Pulsed measurements achieve a substantial reduction in Joule heating, and the Joule heating effect is found to diminish with decreasing sample thickness. Our studies reveal that Joule heating plays a major role in the non-linear electrical response of Tb0.5Sr0.5MnO3. By careful management of the duty cycle and pulsed-current I-V measurements, Joule heating can be mitigated to a large extent.

  11. Kernel methods in orthogonalization of multi- and hypervariate data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings of the original data into a higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are expressed in terms of the kernel function, implicitly transforming the data into the feature space and then performing a linear analysis in that space. An example shows the successful application of kernel MAF analysis to change detection in HyMap data covering a small agricultural area near Lake Waging-Taching, Bavaria, Germany.

  12. Geodesic exponential kernels: When Curvature and Linearity Conflict

    DEFF Research Database (Denmark)

    Feragen, Aase; Lauze, François; Hauberg, Søren

    2015-01-01

    We consider kernel methods on general geodesic metric spaces and provide both negative and positive results. First we show that the common Gaussian kernel can only be generalized to a positive definite kernel on a geodesic metric space if the space is flat. As a result, for data on a Riemannian manifold, the geodesic Gaussian kernel is only positive definite if the Riemannian manifold is Euclidean. This implies that any attempt to design geodesic Gaussian kernels on curved Riemannian manifolds is futile. However, we show that for spaces with conditionally negative definite distances the geodesic Laplacian kernel can be generalized while retaining positive definiteness. This implies that geodesic Laplacian kernels can be generalized to some curved spaces, including spheres and hyperbolic spaces. Our theoretical results are verified empirically.

  13. Efficient χ2 Kernel Linearization via Random Feature Maps.

    Science.gov (United States)

    Yuan, Xiao-Tong; Wang, Zhenzhen; Deng, Jiankang; Liu, Qingshan

    2016-11-01

    Explicit feature mapping is an appealing way to linearize additive kernels, such as χ2 kernel for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping could pose computational challenges in high-dimensional settings as it expands the original features to a higher dimensional space. To handle this issue in the context of χ2 kernel SVMs learning, we introduce a simple yet efficient method to approximately linearize χ2 kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of feature maps while preserving their approximation capability to the original kernel. We provide approximation error bound for the proposed method. Furthermore, we extend our method to χ2 multiple kernel SVMs learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach is able to significantly speed up the training process of the χ2 kernel SVMs at almost no cost of testing accuracy.
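
    A rough sketch of the idea using scikit-learn building blocks (not the authors' code): expand the data with an explicit additive χ2 feature map, then compress the expanded features with a sparse random projection before training a linear SVM.

        import numpy as np
        from sklearn.kernel_approximation import AdditiveChi2Sampler
        from sklearn.random_projection import SparseRandomProjection
        from sklearn.svm import LinearSVC

        X = np.random.rand(200, 300)          # chi^2 maps expect non-negative input
        y = np.random.randint(0, 2, size=200)

        expanded = AdditiveChi2Sampler(sample_steps=3).fit_transform(X)
        compressed = SparseRandomProjection(n_components=128).fit_transform(expanded)
        clf = LinearSVC().fit(compressed, y)  # linear SVM approximating a chi^2 SVM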

  14. Kernel Methods for Machine Learning with Life Science Applications

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie

    Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick, straightforward extensions of classical linear algorithms are enabled as long as the data only appear as inner products in the model formulation. This dissertation presents research on improving the performance of standard kernel methods like kernel Principal Component Analysis and the Support Vector Machine. Moreover, the goal of the thesis has been two-fold. The first part focuses on the use of kernel Principal ... models to kernel learning, and means for restoring the generalizability in both kernel Principal Component Analysis and the Support Vector Machine are proposed. Viability is proved on a wide range of benchmark machine learning data sets.

  15. Metabolic power of European starlings Sturnus vulgaris during flight in a wind tunnel, estimated from heat transfer modelling, doubly labelled water and mask respirometry.

    Science.gov (United States)

    Ward, S; Möller, U; Rayner, J M V; Jackson, D M; Nachtigall, W; Speakman, J R

    2004-11-01

    It is technically demanding to measure the energetic cost of animal flight, and each of the previously available techniques has disadvantages as well as advantages. We compared measurements of the energetic cost of flight in a wind tunnel by four European starlings Sturnus vulgaris made using three independent techniques: heat transfer modelling, doubly labelled water (DLW) and mask respirometry. We based our heat transfer model on thermal images of the surface temperature of the birds and on the air flow past the body and wings calculated from wing beat kinematics. Metabolic power was not sensitive to uncertainty in the value of efficiency when estimated from heat transfer modelling. A change in the assumed value of whole-animal efficiency from 0.19 to 0.07 (the range of estimates in previous studies) altered the metabolic power predicted from heat transfer modelling by only 13%. The same change in the assumed value of efficiency would cause a 2.7-fold change in metabolic power if it were predicted from mechanical power. Metabolic power did not differ significantly between measurements made using the three techniques when we assumed an efficiency in the range 0.11-0.19, although the DLW results appeared to form a U-shaped power-speed curve while the heat transfer model and respirometry results increased linearly with speed. This is the first time that techniques for determining metabolic power have been compared using data from the same birds flying under the same conditions. Our data provide reassurance that all the techniques produce similar results and suggest that heat transfer modelling may be a useful method for estimating metabolic rate.

  16. Nonparametric evaluation of dynamic disease risk: a spatio-temporal kernel approach.

    Directory of Open Access Journals (Sweden)

    Zhijie Zhang

    Quantifying the distributions of disease risk in space and time jointly is a key element for understanding spatio-temporal phenomena, and it has the potential to enhance our understanding of epidemiologic trajectories. However, most studies to date have neglected the time dimension and focus instead on the "average" spatial pattern of disease risk, thereby masking the time trajectories of disease risk. In this study we propose a new approach, termed spatio-temporal kernel density estimation (stKDE), that employs hybrid kernel (i.e., weight) functions to evaluate spatio-temporal disease risks. This approach not only makes full use of the sample data but also "borrows" information in a particular manner from neighboring points both in space and time via an appropriate choice of kernel functions. Monte Carlo simulations show that the proposed method performs substantially better than the traditional (i.e., frequency-based) kernel density estimation (trKDE), which has been used in applied settings, and two illustrative examples demonstrate that the proposed approach can yield superior results compared to the popular trKDE approach. In addition, there exist various possibilities for improving and extending this method.
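
    A simple product-kernel version conveys the mechanics (the paper's stKDE uses hybrid weight functions rather than a plain product): each observed event contributes a Gaussian bump in space multiplied by a Gaussian bump in time.

        import numpy as np

        def st_kde(x_query, t_query, xs, ts, h_s=1.0, h_t=1.0):
            # xs: (n, 2) event coordinates; ts: (n,) event times.
            d2 = np.sum((xs - np.asarray(x_query)) ** 2, axis=1)
            k_space = np.exp(-0.5 * d2 / h_s ** 2) / (2.0 * np.pi * h_s ** 2)
            k_time = (np.exp(-0.5 * ((ts - t_query) / h_t) ** 2)
                      / (np.sqrt(2.0 * np.pi) * h_t))
            return float(np.mean(k_space * k_time))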

  17. Modelling of the control of heart rate by breathing using a kernel method.

    Science.gov (United States)

    Ahmed, A K; Fakhouri, S Y; Harness, J B; Mearns, A J

    1986-03-07

    The process relating breathing (input) to heart rate (output) in man is considered for system identification via the input-output relationship, using a mathematical model expressed as integral equations. The integral equation is fixed so that the identification method reduces to determining the values within the integral, called kernels, resulting in an integral equation whose input-output behaviour is nearly identical to that of the system. This paper uses an algorithm for kernel identification of the Volterra series which greatly reduces the computational burden and eliminates the restriction of using white Gaussian input as a test signal. A second-order model is the most appropriate for a good estimate of the system dynamics. The model contains the linear part (first-order kernel) and the quadratic part (second-order kernel) in parallel, and so allows for the possibility of separation between the linear and non-linear elements of the process. The response of the linear term exhibits the oscillatory input and underdamped nature of the system. Applying breathing as input to the system produces an oscillatory term, which may be attributed to the sinus node of the heart being sensitive to the modulating signal, the breathing wave. The negative on-diagonal seems to cause the dynamic asymmetry of the total response of the system, which opposes the oscillatory nature of the first kernel and is related to the restraining force present in the respiratory heart rate system. The presence of the positive off-diagonal in the second-order kernel of respiratory control of heart rate is an indication of an escape-like phenomenon in the system.
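
    The structure of such a model is easy to state in code: the output is the linear convolution with the first-order kernel plus a quadratic form in the input driven by the second-order kernel. A minimal sketch, assuming a discrete-time input signal and finite kernel memory:

        import numpy as np

        def volterra2_response(u, h1, h2):
            # y(t) = sum_i h1[i] u[t-i] + sum_{i,j} h2[i,j] u[t-i] u[t-j];
            # u: NumPy array (breathing signal), h1: (m,), h2: (m, m).
            n, m = len(u), len(h1)
            y = np.zeros(n)
            for t in range(n):
                lags = min(m, t + 1)
                window = u[t - lags + 1:t + 1][::-1]  # u[t], u[t-1], ...
                y[t] = h1[:lags] @ window
                y[t] += window @ h2[:lags, :lags] @ window
            return y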

  18. Combining in situ measurements and altimetry to estimate volume, heat and salt transport variability through the Faroe–Shetland Channel

    Directory of Open Access Journals (Sweden)

    B. Berx

    2013-07-01

    From 1994 to 2011, instruments measuring ocean currents (Acoustic Doppler Current Profilers; ADCPs) have been moored on a section crossing the Faroe–Shetland Channel. Together with CTD (Conductivity Temperature Depth) measurements from regular research vessel occupations, they describe the flow field and water mass structure in the channel. Here, we use these data to calculate the average volume transport and properties of the flow of warm water through the channel from the Atlantic towards the Arctic, termed the Atlantic inflow. We find the average volume transport of this flow to be 2.7 ± 0.5 Sv (1 Sv = 10^6 m^3 s^-1) between the shelf edge on the Faroe side and the 150 m isobath on the Shetland side. The average heat transport (relative to 0 °C) was estimated to be 107 ± 21 TW (1 TW = 10^12 W) and the average salt import to be 98 ± 20 × 10^6 kg s^-1. Transport values for individual months, based on the ADCP data, include a large level of variability, but can be used to calibrate sea level height data from satellite altimetry. In this way, a time series of volume transport has been generated back to the beginning of satellite altimetry in December 1992. The Atlantic inflow has a seasonal variation in volume transport that peaks around the turn of the year and has an amplitude of 0.7 Sv. The Atlantic inflow has become warmer and more saline since 1994, but no equivalent trend in volume transport was observed.

  19. A statistical method for estimating wood thermal diffusivity and probe geometry using in situ heat response curves from sap flow measurements

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xingyuan; Miller, Gretchen R.; Rubin, Yoram; Baldocchi, Dennis

    2012-09-13

    The heat pulse method is widely used to measure water flux through plants; it works by inferring the velocity of water through a porous medium from the speed at which a heat pulse is propagated through the system. No systematic, non-destructive calibration procedure exists to determine the site-specific parameters necessary for calculating sap velocity, e.g., wood thermal diffusivity and probe spacing. Such parameter calibration is crucial to obtain the correct transpiration flux density from the sap flow measurements at the plant scale, and consequently, to up-scale tree-level water fluxes to canopy and landscape scales. The purpose of this study is to present a statistical framework for estimating the wood thermal diffusivity and probe spacing simultaneously from in-situ heat response curves collected by the implanted probes of a heat ratio apparatus. Conditioned on the time traces of wood temperature following a heat pulse, the parameters are inferred using a Bayesian inversion technique, based on the Markov chain Monte Carlo sampling method. The primary advantage of the proposed methodology is that it does not require known probe spacing or any further intrusive sampling of sapwood. The Bayesian framework also enables direct quantification of uncertainty in estimated sap flow velocity. Experiments using synthetic data show that repeated tests using the same apparatus are essential to obtain reliable and accurate solutions. When applied to field conditions, these tests are conducted during different seasons and automated using the existing data logging system. The seasonality of wood thermal diffusivity is obtained as a by-product of the parameter estimation process, and it is shown to be affected by both moisture content and temperature. Empirical factors are often introduced to account for the influence of non-ideal probe geometry on the estimation of heat pulse velocity, and they are estimated in this study as well. The proposed methodology can be applied for ...

  20. Wilson Dslash Kernel From Lattice QCD Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Balint [Jefferson Lab, Newport News, VA; Smelyanskiy, Mikhail [Parallel Computing Lab, Intel Corporation, California, USA; Kalamkar, Dhiraj D. [Parallel Computing Lab, Intel Corporation, India; Vaidyanathan, Karthikeyan [Parallel Computing Lab, Intel Corporation, India

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in theoretical nuclear and high energy physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal for illustrating several optimization techniques. In this chapter we detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the techniques give excellent performance on regular Xeon architectures as well.

  1. Estimation of the effective heating systems radius as a method of the reliability improving and energy efficiency

    Science.gov (United States)

    Akhmetova, I. G.; Chichirova, N. D.

    2017-11-01

    When conducting an energy survey of a heat supply enterprise operating several boilers located close to one another, it is advisable to assess the efficiency of heat supply from each individual boiler and the possibility of reducing energy consumption across the enterprise by switching consumers to a more efficient source and closing inefficient boilers. It is necessary to consider the temporal dynamics of connecting prospective loads and changes in market conditions. To solve this problem, the radius of effective heat supply from each thermal energy source can be calculated. The disadvantage of existing methods is their high complexity and the need to collect large amounts of source data and carry out a significant amount of computation. When conducting an energy survey of a heat supply enterprise operating a large number of thermal energy sources, a rapid assessment of the effective heating radius is required. Taking into account the specifics and objectives of an energy survey, a method for calculating the effective heating radius for use during an energy audit should be based on data the heat supply organization has openly available and should minimize effort, while its results should match those obtained by other methods. To determine the efficiency radius of the Kazan heat supply system, the shares of costs for generation and transmission of thermal energy and the capital investment needed to connect new consumers were determined. The results were compared with the values obtained with previously known methods. The suggested express method allows the effective radius of centralized heat supply from heat sources to be determined in energy audits with minimal effort and the required accuracy.

  2. Towards More Comprehensive Projections of Urban Heat-Related Mortality: Estimates for New York City Under Multiple Population, Adaptation, and Climate Scenarios

    Science.gov (United States)

    Petkova, Elisaveta P.; Vink, Jan K.; Horton, Radley M.; Gasparrini, Antonio; Bader, Daniel A.; Francis, Joe D.; Kinney, Patrick L.

    2016-01-01

    High temperatures have substantial impacts on mortality and, with growing concerns about climate change, numerous studies have developed projections of future heat-related deaths around the world. Projections of temperature-related mortality are often limited by insufficient information necessary to formulate hypotheses about population sensitivity to high temperatures and future demographics. This study has derived projections of temperature-related mortality in New York City by taking into account future patterns of adaptation or demographic change, both of which can have profound influences on future health burdens. We adopt a novel approach to modeling heat adaptation by incorporating an analysis of the observed population response to heat in New York City over the course of eight decades. This approach projects heat-related mortality until the end of the 21st century based on observed trends in adaptation over a substantial portion of the 20th century. In addition, we incorporate a range of new scenarios for population change until the end of the 21st century. We then estimate future heat-related deaths in New York City by combining the changing temperature-mortality relationship and population scenarios with downscaled temperature projections from the 33 global climate models (GCMs) and two Representative Concentration Pathways (RCPs). The median number of projected annual heat-related deaths across the 33 GCMs varied greatly by RCP and by adaptation and population change scenario, ranging from 167 to 3331 in the 2080s, compared to 638 heat-related deaths annually between 2000 and 2006. These findings provide a more complete picture of the range of potential future heat-related mortality risks across the 21st century in New York, and highlight the importance of both demographic change and adaptation responses in modifying future risks.

  3. Searching and Indexing Genomic Databases via Kernelization

    Directory of Open Access Journals (Sweden)

    Travis eGagie

    2015-02-01

    The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper we survey the twenty-year history of this idea and discuss its relation to kernelization in parameterized complexity.

  4. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF). The MAF projection exploits the fact that interesting phenomena in images typically exhibit spatial autocorrelation. The analysis is based on near-infrared hyperspectral images of maize grains, demonstrating the superiority of the kernel-based MAF method.

  5. A Meshsize Boosting Algorithm In Kernel Density Estimation ...

    African Journals Online (AJOL)

    ... kernel density estimation (KDE). This algorithm enjoys the property of a bias reduction technique like other existing boosting algorithms, and also requires fewer function evaluations when compared with other boosting schemes. Numerical examples are used ...

  6. Asymptotic Estimates for Resolvents of Some Nonintegrable Volterra Kernels.

    Science.gov (United States)

    1984-03-01

    ... where the use of Fubini's theorem is allowed, and by the fact that e^{-zt} b(t) ∈ L^1(R+) for Re z > 0. ... In addition a few comments are given. In Sections 2, 3 and 4 we prove Theorems 1, 2 and 3, respectively. Finally, in Section 5 we prove an auxiliary result needed in the proof of Theorem 1. A preliminary version of Theorem 1 was given in [5]. THEOREM 1. Let φ(x) be a nondecreasing function defined for x ≥ 0 ...

  7. A Fast Reduced Kernel Extreme Learning Machine.

    Science.gov (United States)

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Squares SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big data sets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of sufficient support vectors. Experimental results on a wide variety of real-world small- and large-instance-size applications in the context of binary classification, multi-class problems and regression show that RKELM can perform at a competitive level of generalization performance compared with the SVM/LS-SVM at only a fraction of the computational effort. Copyright © 2015 Elsevier Ltd. All rights reserved.
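
    The core of the method fits in a few lines: pick a random subset of training points as the kernel mapping samples and solve for the output weights in closed form. A minimal sketch with an RBF kernel (hyperparameters are illustrative):

        import numpy as np

        def rbf(X, Z, gamma=0.1):
            d2 = ((X ** 2).sum(1)[:, None] + (Z ** 2).sum(1)[None, :]
                  - 2.0 * X @ Z.T)
            return np.exp(-gamma * d2)

        def rkelm_fit(X, y, n_support=200, C=1.0, seed=0):
            # Random subset as support (mapping) samples, then a ridge solution.
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(X), size=min(n_support, len(X)), replace=False)
            S = X[idx]
            K = rbf(X, S)
            beta = np.linalg.solve(K.T @ K + np.eye(len(S)) / C, K.T @ y)
            return S, beta

        def rkelm_predict(X_new, S, beta):
            return rbf(X_new, S) @ beta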

  8. Exploiting graph kernels for high performance biomedical relation extraction.

    Science.gov (United States)

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM

  9. Attempts of Thermal Imaging Camera Usage in Estimations of the Convective Heat Loss From a Vertical Plate

    Directory of Open Access Journals (Sweden)

    Denda Hubert

    2014-01-01

    Full Text Available In this paper a new method for determining heat transfer coefficients using a gradient method has been developed. To verify the accuracy of the proposed method, a vertical isothermal heated plate under natural convection was examined. This configuration was deliberately chosen because this case is historically the earliest and most thoroughly studied, and its rich scientific documentation is the most reliable. The new method is based on visualization of the temperature field in a plane perpendicular to the heating surface of the plate using an infrared camera. Because the camera records surface temperatures rather than the temperature of the air itself, a plastic mesh with low thermal conductivity was used as a detector. The temperature of each mesh cell, placed perpendicular to the vertical heating surface and washed by the convective stream of heated air, could then be recorded by the infrared camera. At the same time, the surface of the heating plate was measured with the IR camera. By numerical processing of the resulting temperature matrix, the temperature gradient at the surface ∂T/∂x│x=0, local heat transfer coefficients αy, and local values of the Nusselt number Nuy can be calculated. After integration, the average Nusselt number for the entire plate can be obtained. The obtained relation between the characteristic numbers, Nu = 0.647 Ra^0.236 (R² = 0.943), correlates well with literature reports and demonstrates the usefulness of the method.
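
    For concreteness, the final step above can be computed as follows. This is a hedged sketch using the standard definitions of the local heat transfer coefficient and the Nusselt number; the conductivity, temperatures, and wall gradient are made-up illustrative values, not numbers from the paper.

```python
# Local convective coefficient from the wall-normal temperature gradient,
# following the gradient-method relations outlined in the abstract:
#   alpha_y = -k_air * (dT/dx)|_{x=0} / (T_wall - T_inf)
#   Nu_y    = alpha_y * y / k_air
k_air = 0.026                # W/(m K), thermal conductivity of air (assumed)
T_wall, T_inf = 60.0, 20.0   # deg C, hypothetical plate and ambient temperatures
dTdx_wall = -7700.0          # K/m, hypothetical measured gradient at the surface
y = 0.10                     # m, height along the plate

alpha_y = -k_air * dTdx_wall / (T_wall - T_inf)   # ~5 W/(m^2 K)
Nu_y = alpha_y * y / k_air                        # ~19
print(f"alpha_y = {alpha_y:.2f} W/(m^2 K), Nu_y = {Nu_y:.1f}")
```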

  10. GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS

    Directory of Open Access Journals (Sweden)

    S. Sridevi

    2013-02-01

    Full Text Available Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and obscures pixel intensities. In fetal ultrasound images, edges and local fine details are particularly important for obstetricians and gynaecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter therefore has to be devised to proficiently suppress speckle noise while preserving the features. The proposed filter generalizes the Rayleigh maximum likelihood filter by exploiting statistical tools as tuning parameters and using different shapes of quadrilateral kernels to estimate the noise-free pixel from the neighbourhood. The performance of various filters, namely the median, Kuwahara, Frost, homogeneous mask and Rayleigh maximum likelihood filters, is compared with that of the proposed filter in terms of PSNR and image profile. The proposed filter surpasses the conventional filters.

  11. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, due to its subjective approach, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85%. The shape descriptors themselves were not specific enough to distinguish individual kernels.

  12. Application of kernel method in fluorescence molecular tomography

    Science.gov (United States)

    Zhao, Yue; Baikejiang, Reheman; Li, Changqing

    2017-02-01

    Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can improve the efficiency of FMT reconstruction. We have developed a kernel method to introduce anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For the finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. Then the fluorophore concentration at each node is represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we have a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process. We convert the FMT reconstruction problem into the kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance can be obtained directly from the anatomical image and is included in the forward modeling. One of the advantages is that we do not need to segment the anatomical image for the targets and background.
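
    The mechanics of this construction can be sketched in a few lines. The following Python fragment is a toy illustration of the general recipe the abstract describes (a radial kernel over per-node anatomical features, a sparsified kernel matrix, and a new system matrix A·K); all names, sizes, and the neighbour-based sparsification are our own assumptions, not the authors' exact implementation.

```python
import numpy as np

def anatomical_kernel(feats, sigma=1.0, n_neighbors=10):
    # Radial kernel between per-node anatomical feature vectors
    # (e.g., micro-CT intensities sampled around each FEM node).
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    for i in range(len(K)):               # keep strongest neighbours only
        K[i, np.argsort(K[i])[:-n_neighbors]] = 0.0
    return K

# Forward model y = A x with x = K alpha: reconstruct alpha, then x = K alpha.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 200))            # hypothetical sensitivity matrix
feats = rng.normal(size=(200, 5))         # hypothetical anatomical features
K = anatomical_kernel(feats)
y = rng.normal(size=40)                   # hypothetical measurements
alpha, *_ = np.linalg.lstsq(A @ K, y, rcond=None)
x = K @ alpha                             # fluorophore concentration per node
```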

  13. [Study of genetic models of maize kernel traits].

    Science.gov (United States)

    Zhang, H W; Kong, F L

    2000-01-01

    Two sets of NCII mating design including 21 different maize inbreds were used to study the genetic models of five maize kernel traits--kernel length, width, ratio of kernel length and width, kernel thickness and weight per 100 kernels. Ten generations including P1, P2, F1, F2, B1, B2 and their reciprocal crosses RF1, RF2, RB1, RB2 were obtained. Three years' data were obtained and analyzed using mainly two methods: (1) precision identification for single cross and (2) the mixed linear model MINQUE approach for diallel design. Method 1 showed that kernel traits were primarily controlled by maternal dominance, endosperm additive and dominance effects (maternal dominance > endosperm additive > endosperm dominance). Cytoplasmic effect was detected in one of the two crosses studied. Method 2 revealed that in the total variance of kernel traits, maternal genotypic effect contributed more than 60%, endosperm genotypic effect contributed less than 40%. Cytoplasmic effect only existed in kernel length and 100 kernel weight, with the range of 10% to 30%. The results indicated that kernel genetic performance was quite largely controlled by maternal genotypic effect.

  14. Directed acyclic graph kernels for structural RNA analysis

    Directory of Open Access Journals (Sweden)

    Mituyama Toutai

    2008-07-01

    Full Text Available Abstract Background Recent discoveries of a large variety of important roles for non-coding RNAs (ncRNAs) have been reported by numerous researchers. In order to analyze ncRNAs by kernel methods including support vector machines, we propose stem kernels as an extension of string kernels for measuring the similarities between two RNA sequences from the viewpoint of secondary structures. However, applying stem kernels directly to large data sets of ncRNAs is impractical due to their computational complexity. Results We have developed a new technique based on directed acyclic graphs (DAGs) derived from base-pairing probability matrices of RNA sequences that significantly increases the computation speed of stem kernels. Furthermore, we propose profile-profile stem kernels for multiple alignments of RNA sequences which utilize base-pairing probability matrices for multiple alignments instead of those for individual sequences. Our kernels outperformed the existing methods with respect to the detection of known ncRNAs and kernel hierarchical clustering. Conclusion Stem kernels can be utilized as a reliable similarity measure of structural RNAs, and can be used in various kernel-based applications.

  15. Directed acyclic graph kernels for structural RNA analysis.

    Science.gov (United States)

    Sato, Kengo; Mituyama, Toutai; Asai, Kiyoshi; Sakakibara, Yasubumi

    2008-07-22

    Recent discoveries of a large variety of important roles for non-coding RNAs (ncRNAs) have been reported by numerous researchers. In order to analyze ncRNAs by kernel methods including support vector machines, we propose stem kernels as an extension of string kernels for measuring the similarities between two RNA sequences from the viewpoint of secondary structures. However, applying stem kernels directly to large data sets of ncRNAs is impractical due to their computational complexity. We have developed a new technique based on directed acyclic graphs (DAGs) derived from base-pairing probability matrices of RNA sequences that significantly increases the computation speed of stem kernels. Furthermore, we propose profile-profile stem kernels for multiple alignments of RNA sequences which utilize base-pairing probability matrices for multiple alignments instead of those for individual sequences. Our kernels outperformed the existing methods with respect to the detection of known ncRNAs and kernel hierarchical clustering. Stem kernels can be utilized as a reliable similarity measure of structural RNAs, and can be used in various kernel-based applications.

  16. Towards More Comprehensive Projections of Urban Heat-Related Mortality: Estimates for New York City under Multiple Population, Adaptation, and Climate Scenarios.

    Science.gov (United States)

    Petkova, Elisaveta P; Vink, Jan K; Horton, Radley M; Gasparrini, Antonio; Bader, Daniel A; Francis, Joe D; Kinney, Patrick L

    2017-01-01

    High temperatures have substantial impacts on mortality and, with growing concerns about climate change, numerous studies have developed projections of future heat-related deaths around the world. Projections of temperature-related mortality are often limited by insufficient information to formulate hypotheses about population sensitivity to high temperatures and future demographics. The present study derived projections of temperature-related mortality in New York City by taking into account future patterns of adaptation or demographic change, both of which can have profound influences on future health burdens. We adopted a novel approach to modeling heat adaptation by incorporating an analysis of the observed population response to heat in New York City over the course of eight decades. This approach projected heat-related mortality until the end of the 21st century based on observed trends in adaptation over a substantial portion of the 20th century. In addition, we incorporated a range of new scenarios for population change until the end of the 21st century. We then estimated future heat-related deaths in New York City by combining the changing temperature-mortality relationship and population scenarios with downscaled temperature projections from the 33 global climate models (GCMs) and two Representative Concentration Pathways (RCPs). The median number of projected annual heat-related deaths across the 33 GCMs varied greatly by RCP and adaptation and population change scenario, ranging from 167 to 3,331 in the 2080s compared with 638 heat-related deaths annually between 2000 and 2006. These findings provide a more complete picture of the range of potential future heat-related mortality risks across the 21st century in New York City, and they highlight the importance of both demographic change and adaptation responses in modifying future risks. Citation: Petkova EP, Vink JK, Horton RM, Gasparrini A, Bader DA, Francis JD, Kinney PL. 2017. Towards more

  17. CHARACTERIZATION OF BIO-OIL FROM PALM KERNEL SHELL PYROLYSIS

    Directory of Open Access Journals (Sweden)

    R. Ahmad

    2014-12-01

    Full Text Available Pyrolysis of palm kernel shell in a fixed-bed reactor was studied in this paper. The objectives were to investigate the effect of pyrolysis temperature and particle size on the product yields and to characterize the bio-oil product. In order to find the optimum pyrolysis parameters for bio-oil yield, temperatures of 350, 400, 450, 500 and 550 °C and particle sizes of 212–300 µm, 300–600 µm, 600 µm–1.18 mm and 1.18–2.36 mm under a heating rate of 50 °C min-1 were investigated. The maximum bio-oil yield was 38.40% at 450 °C with a heating rate of 50 °C min-1 and a nitrogen sweep gas flow rate of 50 ml min-1. The bio-oil products were analysed by Fourier transform infra-red spectroscopy (FTIR) and gas chromatography–mass spectrometry (GC-MS). The FTIR analysis showed that the bio-oil was dominated by oxygenated species. Phenol, 2-methoxyphenol and furfural, identified by GC-MS analysis, are highly suitable for extraction from the bio-oil as value-added chemicals. The highly oxygenated oils need to be upgraded in order to be used in other applications such as transportation fuels.

  18. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    Science.gov (United States)

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…

  19. Experimental verification of a method for estimating energy for domestic hot water production in a 2-stage district heating substation

    Energy Technology Data Exchange (ETDEWEB)

    Yliniemi, Kimmo; Delsing, Jerker; van Deventer, Jan [Luleaa University of Technology, Department of Computer Science and Electrical Engineering, Division EISLAB, 971 87 Luleaa (Sweden)

    2009-02-15

    In this paper we compare our estimate of energy consumption for domestic hot water production in a building with the measured value. The energy consumption for hot water production is estimated from the measured total power consumption. The estimation method was developed using computer simulations, and it is based on the assumption that hot water production causes rapid and detectable changes in power consumption. A comparison of our estimates with measurements indicates that the uncertainty in estimation of hot water energy consumption is ±10%. Thus, the estimate is comparable to class 3 energy meter measurements, which have an uncertainty of ±2-10%. (author)
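
    The detection idea lends itself to a short sketch. The following Python function is a hypothetical rendering of the assumption stated above, that hot water tapping shows up as rapid, detectable jumps in total power; the threshold rule and all parameter names are our assumptions, not the authors' published algorithm.

```python
import numpy as np

def hot_water_energy(power, dt=60.0, jump=5.0):
    # power: total substation power samples (kW), dt: sample spacing (s).
    # A rapid rise (> jump kW per step) marks the start of a tapping event;
    # the excess over the pre-jump level is attributed to hot water.
    power = np.asarray(power, float)
    dp = np.diff(power, prepend=power[0])
    energy, base, active = 0.0, power[0], False
    for p, d in zip(power, dp):
        if d > jump:                  # rapid rise -> tapping starts
            active, base = True, p - d
        elif active and p <= base:    # back at baseline -> tapping ends
            active = False
        if active:
            energy += (p - base) * dt # kJ, since kW * s
    return energy
```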

  20. Estimating tail probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Carr, D.B.; Tolley, H.D.

    1982-12-01

    This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.

  1. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint.

    Science.gov (United States)

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2016-04-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance for certain situations, and have comparable performance in other cases compared to that of the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint.
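
    To make the setup concrete, here is a toy Python sketch of kernel quantile regression with an L1 (data-sparsity) penalty on the kernel coefficients, fitted by plain subgradient descent. It is a stand-in for the paper's constrained formulation, not their algorithm; the RBF kernel, learning rate, and penalty weight are all assumptions.

```python
import numpy as np

def kernel_quantile_fit(X, y, tau=0.5, gamma=1.0, lam=0.1,
                        lr=0.01, n_iter=2000):
    # Fit f(x) = sum_i c_i k(x_i, x) under the pinball (quantile) loss
    # with an L1 penalty on c that drives many coefficients to zero.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    c = np.zeros(len(X))
    for _ in range(n_iter):
        u = y - K @ c
        # Subgradient of the pinball loss plus the L1 penalty.
        grad = -K.T @ np.where(u >= 0, tau, tau - 1.0) / len(X)
        grad += lam * np.sign(c)
        c -= lr * grad
    return c, K
```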

  2. Estimation of annual heat flux balance at the sea surface from sst (NOAA-satellite and ships drift data off southeast Brazil

    Directory of Open Access Journals (Sweden)

    Yoshimine Ikeda

    1985-01-01

    Full Text Available The objective of this work is to study the possibility of estimating the heat flux balance at the sea surface from GOSSTCOMP (Global Ocean Sea Surface Temperature Computation) data, developed by NOAA/NESS, USA, and sea surface current data based on ships' drift information obtained from Pilot Charts, published by the Diretoria de Hidrografia e Navegação (DHN), Brazilian Navy. The annual mean value of the heat flux balance at the sea surface off southeast Brazil for 1977 is estimated from data on the balance between the heat transported by the currents and that transported by eddy diffusion, for each volume defined as a 2º x 2º (Lat. x Long.) square with a constant depth equivalent to an oceanic mixed layer 100 m thick. Results show several oceanic areas where there are net flows of heat from the atmosphere towards the sea surface. In front of Rio de Janeiro the heat flow was downward and up to 70 ly day-1, and is probably related to the upwelling phenomenon normally occurring in that area. Another coastal area between Lat. 25ºS and 28ºS indicated a downward flow of up to 50 ly day-1; and an area south of Lat. 27ºS, Long. 040ºW - 048ºW a downward flow of up to 200 ly day-1, where the transfer was probably due to the cold water of a northward flux from the Falkland (Malvinas) Current. Results also show several oceanic areas where net flows of heat (of about -100 ly day-1) were toward the atmosphere. In the oceanic areas Lat. 19ºS - 23ºS and Lat. 24ºS - 30ºS, the flows were probably due to the warm water of a southward flux of the Brazil Current. The resulting fluxes from the warm waters of the Brazil Current, when compared with those from the warm waters of the Gulf Stream and Kuroshio, indicate that the Gulf Stream carries about 3.3 times and the Kuroshio 1.7 times more heat than the Brazil Current. These values agree with the data available on the heat fluxes of the above-mentioned Currents calculated by different methods (Budyko, 1974).

  3. Occupational heat stress and associated productivity loss estimation using the PHS model (ISO 7933): a case study from workplaces in Chennai, India

    Science.gov (United States)

    Lundgren, Karin; Kuklane, Kalev; Venugopal, Vidhya

    2014-01-01

    Background Heat stress is a major occupational problem in India that can cause adverse health effects and reduce work productivity. This paper explores this problem and its impacts in selected workplaces, including industrial, service, and agricultural sectors in Chennai, India. Design Quantitative measurements of heat stress, workload estimations, and clothing testing, and qualitative information on health impacts, productivity loss, etc., were collected. Heat strain and associated impacts on labour productivity between the seasons were assessed using the International Standard ISO 7933:2004, which applies the Predicted Heat Strain (PHS) model. Results and conclusions All workplaces surveyed had very high heat exposure in the hot season (mean Wet Bulb Globe Temperature of 29.7°C), often reaching the international standard safe work values (ISO 7243:1989). Most workers had moderate to high workloads (170–220 W/m2), with some exposed to direct sun. Clothing was found to be problematic, with high insulation values in relation to the heat exposure. Females were found to be more vulnerable because of the extra insulation added from wearing a protective shirt on top of traditional clothing (0.96 clo) while working. When analysing heat strain – in terms of core temperature and dehydration – and associated productivity loss in the PHS model, the parameters showed significant impacts that affected productivity in all workplaces, apart from the laundry facility, especially during the hot season. For example, in the canteen, the core temperature limit of 38°C predicted by the model was reached in only 64 min for women. With the expected increases in temperature due to climate change, additional preventive actions have to be implemented to prevent further productivity losses and adverse health impacts. Overall, this study presented insight into using a thermo-physiological model to estimate productivity loss due to heat exposure in workplaces. This is the first time the PHS
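
    For readers unfamiliar with the exposure index used above, the outdoor Wet Bulb Globe Temperature is a fixed weighting of three thermometer readings (the ISO 7243 convention). A one-line Python sketch, with illustrative readings chosen to land near the hot-season mean reported above:

```python
def wbgt_outdoor(t_nwb, t_globe, t_air):
    # ISO 7243 outdoor weighting: 0.7 * natural wet bulb +
    # 0.2 * globe + 0.1 * dry bulb, all in deg C.
    return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_air

print(wbgt_outdoor(25.5, 44.0, 31.0))  # ~29.8, illustrative readings only
```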

  4. Occupational heat stress and associated productivity loss estimation using the PHS model (ISO 7933: a case study from workplaces in Chennai, India

    Directory of Open Access Journals (Sweden)

    Karin Lundgren

    2014-11-01

    Full Text Available Background: Heat stress is a major occupational problem in India that can cause adverse health effects and reduce work productivity. This paper explores this problem and its impacts in selected workplaces, including industrial, service, and agricultural sectors in Chennai, India. Design: Quantitative measurements of heat stress, workload estimations, and clothing testing, and qualitative information on health impacts, productivity loss, etc., were collected. Heat strain and associated impacts on labour productivity between the seasons were assessed using the International Standard ISO 7933:2004, which applies the Predicted Heat Strain (PHS) model. Results and conclusions: All workplaces surveyed had very high heat exposure in the hot season (mean Wet Bulb Globe Temperature of 29.7°C), often reaching the international standard safe work values (ISO 7243:1989). Most workers had moderate to high workloads (170–220 W/m2), with some exposed to direct sun. Clothing was found to be problematic, with high insulation values in relation to the heat exposure. Females were found to be more vulnerable because of the extra insulation added from wearing a protective shirt on top of traditional clothing (0.96 clo) while working. When analysing heat strain – in terms of core temperature and dehydration – and associated productivity loss in the PHS model, the parameters showed significant impacts that affected productivity in all workplaces, apart from the laundry facility, especially during the hot season. For example, in the canteen, the core temperature limit of 38°C predicted by the model was reached in only 64 min for women. With the expected increases in temperature due to climate change, additional preventive actions have to be implemented to prevent further productivity losses and adverse health impacts. Overall, this study presented insight into using a thermo-physiological model to estimate productivity loss due to heat exposure in workplaces. This is the

  5. Classification of maize kernels using NIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Williams, Paul; Kucheryavskiy, Sergey V.

    2016-01-01

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individua...... and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale....

  6. Chemical and nutrition evaluation of the seed kernel of Balanites ...

    African Journals Online (AJOL)

    ... a chemical score of 34.10%. The protein efficiency ratio (PER) and net protein ratio (NPR) for the boiled seed kernel (0.18 and 1.09 respectively) were significantly higher (P<0.05) than the values for unboiled seed kernel (-0.61 and 0.40 respectively). The seed kernel may be a good source of protein in livestock feeds.

  7. Generation of Debugging Interfaces for Linux Kernel Services

    OpenAIRE

    Bissyandé, Tegawendé; Réveillère, Laurent; Lawall, Julia L.; Muller, Gilles

    2011-01-01

    The Linux kernel does not export a stable, well-defined kernel interface, complicating the development of kernel-level services, such as device drivers and file systems. While there does exist a set of functions that are exported to external modules, these are continually changing, and have implicit, ill-documented preconditions, which, if not satisfied, can cause the entire system to crash or hang. However, no specific debugging support is provided. In this paper, we present Diagnosys, an app...

  8. Open Problem: Kernel methods on manifolds and metric spaces

    DEFF Research Database (Denmark)

    Feragen, Aasa; Hauberg, Søren

    2016-01-01

    linear properties. This negative result hints that radial kernels are perhaps not suitable over geodesic metric spaces after all. Here, however, we present evidence that large intervals of bandwidths exist where geodesic exponential kernels have high probability of being positive definite over finite...... datasets, while still having significant predictive power. From this we formulate conjectures on the probability of a positive definite kernel matrix for a finite random sample, depending on the geometry of the data space and the spread of the sample....

  9. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    …pre-image problem where denoised feature space points are mapped back into input space. This problem is inherently ill-posed due to the non-bijective feature space mapping. We present a semi-supervised denoising scheme based on kernel PCA and the pre-image problem, where class labels on a subset of the data points are used to improve the denoising. Moreover, by warping the Reproducing Kernel Hilbert Space (RKHS) we also account for the intrinsic manifold structure, yielding a Kernel PCA basis that also benefits from unlabeled data points. Our two main contributions are: (1) a generalization of Kernel PCA......

  10. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We...... show that the performance of linear models is reduced for certain scan labelings/categorizations in this data set, while the nonlinear models provide more flexibility. We show that the sensitivity map can be used to visualize nonlinear versions of kernel logistic regression, the kernel Fisher...

  11. General construction of reproducing kernels on a quaternionic Hilbert space

    Science.gov (United States)

    Thirulogasanthar, K.; Ali, S. Twareque

    A general theory of reproducing kernels and reproducing kernel Hilbert spaces on a right quaternionic Hilbert space is presented. Positive operator-valued measures and their connection to a class of generalized quaternionic coherent states are examined. A Naimark type extension theorem associated with the positive operator-valued measures is proved in a right quaternionic Hilbert space. As illustrative examples, real, complex and quaternionic reproducing kernels and reproducing kernel Hilbert spaces arising from Hermite and Laguerre polynomials are presented. In particular, in the Laguerre case, the Naimark type extension theorem on the associated quaternionic Hilbert space is indicated.

  12. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by we here give a simple method for finding optimal parameters in a regularized version of kernel MNF...... analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given....
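
    The coarse-to-fine search described above is straightforward to emulate. The sketch below assumes a callable model_snr(scale, reg) that returns the model SNR for a kernel scale and regularization value; the 5-point grids and shrink factor are our own illustrative choices, not the paper's settings.

```python
import numpy as np
from itertools import product

def refine_grid_search(model_snr, scales, regs, n_refine=3, shrink=0.3):
    # Pick the SNR-maximizing (scale, reg) pair, then re-grid around it.
    best = max(product(scales, regs), key=lambda p: model_snr(*p))
    for _ in range(n_refine):
        s0, r0 = best
        scales = np.linspace(s0 * (1 - shrink), s0 * (1 + shrink), 5)
        regs = np.linspace(max(r0 * (1 - shrink), 1e-12),
                           r0 * (1 + shrink), 5)
        best = max(product(scales, regs), key=lambda p: model_snr(*p))
    return best
```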

  13. Occurrence of aflatoxin contamination in maize kernels and ...

    African Journals Online (AJOL)

    Occurrence of aflatoxin contamination in maize kernels and molecular characterization of the producing organism, Aspergillus. Muthusamy Karthikeyan, Arumugam Karthikeyan, Rethinasamy Velazhahan, Srinivasan Madhavan, Thangamuthu Jayaraj ...

  14. Robust kernel collaborative representation for face recognition

    Science.gov (United States)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance will be sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We think that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noise-corrupted face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work provides a simple and feasible way to obtain virtual face samples: imposing Gaussian noise (and other types of noise) on the original training samples to obtain possible variations of the original samples. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.

  15. Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.

    Science.gov (United States)

    Echinaka, Yuki; Ozeki, Yukiyasu

    2016-10-01

    The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, the bootstrap method is introduced and a numerical discrimination for the transition type is proposed.

  16. Kernel-based tests for joint independence

    DEFF Research Database (Denmark)

    Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard

    2018-01-01

    We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only......
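
    The empirical statistic has a compact V-statistic form that is easy to implement. Below is a minimal Python sketch of the empirical dHSIC for a list of d sample matrices, using Gaussian kernels; this follows the standard shape of the estimator, with the kernel choice and bandwidth as our assumptions.

```python
import numpy as np

def rbf(X, gamma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def dhsic(samples, gamma=1.0):
    # samples: list of d arrays, each with the same number n of rows.
    Ks = np.stack([rbf(S, gamma) for S in samples])   # (d, n, n)
    term1 = Ks.prod(axis=0).mean()                    # joint term
    term2 = np.prod([K.mean() for K in Ks])           # product of marginals
    term3 = np.prod([K.mean(axis=1) for K in Ks], axis=0).mean()
    return term1 + term2 - 2.0 * term3                # ~0 under independence
```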

  17. Learning Rotation for Kernel Correlation Filter

    KAUST Repository

    Hamdi, Abdullah

    2017-08-11

    Kernel Correlation Filters have shown a very promising scheme for visual tracking in terms of speed and accuracy on several benchmarks. However, KCF suffers from problems that affect its performance, such as occlusion, rotation and scale change. This paper tries to tackle the problem of rotation by reformulating the optimization problem for learning the correlation filter. This modification (RKCF) includes learning a rotation filter that utilizes the circulant structure of the HOG feature to guesstimate rotation from one frame to another and enhance the detection of KCF. Hence it gains a boost in overall accuracy on many of the OTB50 dataset videos with minimal additional computation.

  18. 40 Variability Bugs in the Linux Kernel

    DEFF Research Database (Denmark)

    Abal Rivas, Iago; Brabrand, Claus; Wasowski, Andrzej

    2014-01-01

    Feature-sensitive verification is a recent field that pursues the effective analysis of the exponential number of variants of a program family. Today researchers lack examples of concrete bugs induced by variability and occurring in real large-scale software. Such a collection of bugs is a requirement for goal-oriented research, serving to evaluate tool implementations of feature-sensitive analyses by testing them on real bugs. We present a qualitative study of 40 variability bugs collected from bug-fixing commits to the Linux kernel repository. We investigate each of the 40 bugs, recording...... how variability affects and increases the complexity of software bugs.

  19. Characterization of Flour from Avocado Seed Kernel

    OpenAIRE

    Macey A. Mahawan; Ma. Francia N. Tenorio; Jaycel A. Gomez; Rosenda A. Bronce

    2015-01-01

    The study focused on the Characterization of Flour from Avocado Seed Kernel. Based on the findings of the study the percentages of crude protein, crude fiber, crude fat, total carbohydrates, ash and moisture were 7.75, 4.91, 0.71, 74.65, 2.83 and 14.05 respectively. On the other hand the falling number was 495 seconds while gluten was below the detection limit of the method used. Moreover, the sensory evaluation in terms of color, texture and aroma in 0% proportion of Avocado seed flour was m...

  20. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyperspectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods...

  1. Robust Nonlinear Regression: A Greedy Approach Employing Kernels With Application to Image Denoising

    Science.gov (United States)

    Papageorgiou, George; Bouboulis, Pantelis; Theodoridis, Sergios

    2017-08-01

    We consider the task of robust non-linear regression in the presence of both inlier noise and outliers. Assuming that the unknown non-linear function belongs to a Reproducing Kernel Hilbert Space (RKHS), our goal is to estimate the set of the associated unknown parameters. Due to the presence of outliers, common techniques such as the Kernel Ridge Regression (KRR) or the Support Vector Regression (SVR) turn out to be inadequate. Instead, we employ sparse modeling arguments to explicitly model and estimate the outliers, adopting a greedy approach. The proposed robust scheme, i.e., Kernel Greedy Algorithm for Robust Denoising (KGARD), is inspired by the classical Orthogonal Matching Pursuit (OMP) algorithm. Specifically, the proposed method alternates between a KRR task and an OMP-like selection step. Theoretical results concerning the identification of the outliers are provided. Moreover, KGARD is compared against other cutting edge methods, where its performance is evaluated via a set of experiments with various types of noise. Finally, the proposed robust estimation framework is applied to the task of image denoising, and its enhanced performance in the presence of outliers is demonstrated.
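
    The alternation at the heart of the method can be sketched compactly. The Python fragment below is a simplified, hypothetical rendering of the KRR-plus-OMP-style loop described above (solve a kernel ridge problem, flag the sample with the largest residual as an outlier, repeat); it is not the paper's exact update equations.

```python
import numpy as np

def robust_krr(K, y, lam=1.0, max_outliers=5, tol=1e-3):
    # Model y = K a + u with u sparse (the outliers), in the spirit of KGARD.
    n = len(y)
    u = np.zeros(n)
    support = []
    for _ in range(max_outliers):
        a = np.linalg.solve(K + lam * np.eye(n), y - u)   # KRR step
        r = y - K @ a - u                                 # residuals
        i = int(np.argmax(np.abs(r)))
        if abs(r[i]) < tol:
            break
        support.append(i)          # OMP-like selection of one outlier
        u[i] += r[i]               # absorb its residual into u
    a = np.linalg.solve(K + lam * np.eye(n), y - u)
    return a, u, support
```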

  2. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  3. Generalized Langevin equation with tempered memory kernel

    Science.gov (United States)

    Liemert, André; Sandev, Trifce; Kantz, Holger

    2017-01-01

    We study a generalized Langevin equation for a free particle in the presence of a truncated power-law and Mittag-Leffler memory kernel. It is shown that, in the presence of truncation, the particle turns from subdiffusive behavior in the short time limit to normal diffusion in the long time limit. The case of the harmonic oscillator is considered as well, and the relaxation functions and the normalized displacement correlation function are represented in an exact form. By considering an external time-dependent periodic force we obtain resonant behavior even in the case of a free particle, due to the influence of the environment on the particle movement. Additionally, the double-peak phenomenon in the imaginary part of the complex susceptibility is observed. The truncation parameter has a strong influence on the behavior of these quantities, and it is shown how the truncation parameter changes the critical frequencies. The normalized displacement correlation function for a fractional generalized Langevin equation is investigated as well. All the results are exact and given in terms of the three parameter Mittag-Leffler function and the Prabhakar generalized integral operator, which contains a three parameter Mittag-Leffler function in its kernel. Such truncated Langevin equation motion can be of high relevance for the description of lateral diffusion of lipids and proteins in cell membranes.
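
    For orientation, a commonly used form of such a model (our notation, not necessarily the authors') is the generalized Langevin equation with an exponentially tempered power-law kernel:

```latex
% Free-particle GLE with a tempered (truncated) power-law memory kernel;
% \gamma_0, \tau and 0 < \alpha < 1 are illustrative parameter names.
\begin{align}
  m\,\dot{v}(t) &= -\int_0^{t}\gamma(t-t')\,v(t')\,\mathrm{d}t' + \xi(t),\\
  \gamma(t) &= \gamma_0\,\frac{t^{-\alpha}}{\Gamma(1-\alpha)}\,e^{-t/\tau}.
\end{align}
```

    The exponential factor e^{-t/τ} cuts the memory off beyond t ≈ τ, which is what converts the short-time subdiffusion into normal diffusion at long times, consistent with the behavior summarized above.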

  4. Pareto-path multitask multiple kernel learning.

    Science.gov (United States)

    Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2015-01-01

    A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.

  5. On the Equality Assumption of Latent and Sensible Heat Energy Transfer Coefficients of the Bowen Ratio Theory for Evapotranspiration Estimations: Another Look at the Potential Causes of Inequalities

    Directory of Open Access Journals (Sweden)

    Suat Irmak

    2014-08-01

    Full Text Available Evapotranspiration (ET) and sensible heat (H) flux play a critical role in climate change; micrometeorology; atmospheric investigations; and related studies. They are two of the driving variables in climate impact(s) and hydrologic balance dynamics. Therefore, their accurate estimate is important for more robust modeling of the aforementioned relationships. The Bowen ratio energy balance method of estimating ET and H diffusions depends on the assumption that the diffusivities of latent heat (KV) and sensible heat (KH) are always equal. This assumption is re-visited and analyzed for a subsurface drip-irrigated field in south central Nebraska. The inequality dynamics for subsurface drip-irrigated conditions have not been studied. Potential causes that lead KV to differ from KH and a rectification procedure for the errors introduced by the inequalities were investigated. Actual ET; H; and other surface energy flux parameters were measured continuously on an hourly basis, using an eddy covariance system and a Bowen Ratio Energy Balance System (located side by side), for two consecutive years for a non-stressed and subsurface drip-irrigated maize canopy. Most of the differences between KV and KH appeared towards the higher values of KV and KH. Although it was observed that KV was predominantly higher than KH; there were considerable data points showing the opposite. In general; daily KV ranges from about 0.1 m2∙s−1 to 1.6 m2∙s−1; and KH ranges from about 0.05 m2∙s−1 to 1.1 m2∙s−1. The higher values for KV and KH appear around March and April; and around September and October. The lower values appear around mid to late December and around late June to early July. Hourly estimates of KV range between approximately 0 m2∙s−1 to 1.8 m2∙s−1 and that of KH ranges approximately between 0 m2∙s−1 to 1.7 m2∙s−1. The inequalities between KV and KH varied diurnally as well as seasonally. The inequalities were greater during the non
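
    The place where the KV = KH assumption enters is the classic Bowen-ratio partitioning of available energy. A minimal Python sketch with the standard formulas; the psychrometric constant and the sample gradients are illustrative assumptions, not values from the study.

```python
def bowen_fluxes(Rn, G, dT, de, gamma=0.066):
    # beta = gamma * dT/de, LE = (Rn - G)/(1 + beta), H = beta * LE.
    # dT (K) and de (kPa) are vertical differences between two heights;
    # gamma is the psychrometric constant in kPa/K. Valid only if K_V = K_H.
    beta = gamma * dT / de
    LE = (Rn - G) / (1.0 + beta)
    return beta, LE, beta * LE

print(bowen_fluxes(Rn=450.0, G=50.0, dT=-0.8, de=-0.35))  # W/m^2 outputs
```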

  6. Calibration of sap flow estimated by the compensation heat pulse method in olive, plum and orange trees: relationships with xylem anatomy.

    Science.gov (United States)

    Fernández, J E; Durán, P J; Palomo, M J; Diaz-Espejo, A; Chamorro, V; Girón, I F

    2006-06-01

    The compensation heat pulse method is widely used to estimate sap flow in conducting organs of woody plants. Being an invasive technique, calibration is crucial to derive correction factors for accurately estimating the sap flow value from the measured heat pulse velocity. We compared the results of excision and perfusion calibration experiments made with mature olive (Olea europaea L. 'Manzanilla de Sevilla'), plum (Prunus domestica L. 'Songal') and orange (Citrus sinensis (L.) Osbeck. 'Cadenero') trees. The calibration experiments were designed according to current knowledge on the application of the technique and the analysis of measured heat pulse velocities. Data on xylem characteristics were obtained from the experimental trees and related to the results of the calibration experiments. The most accurate sap flow values were obtained by assuming a wound width of 2.0 mm for olive and 2.4 mm for plum and orange. Although the three possible methods of integrating the sap velocity profiles produced similar results for all three species, the best results were obtained by calculating sap flow as the weighted sum of the product of sap velocity and the associated sapwood area across the four sensors of the heat-pulse-velocity probes. Anatomical observations showed that the xylem of the studied species can be considered thermally homogeneous. Vessel lumen diameter in orange trees was about twice that in the olive and plum, but vessel density was less than half. Total vessel lumen area per transverse section of xylem tissue was greater in plum than in the other species. These and other anatomical and hydraulic differences may account for the different calibration results obtained for each species.

  7. Homotopy deform method for reproducing kernel space for ...

    Indian Academy of Sciences (India)

    In this paper, the combination of homotopy deform method (HDM) and simplified reproducing kernel method (SRKM) is introduced for solving the boundary value problems (BVPs) of nonlinear differential equations. The solution methodology is based on Adomian decomposition and reproducing kernel method (RKM).

  8. Linear and kernel methods for multi- and hypervariate change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    code exists which allows for fast data exploration and experimentation with smaller datasets. Computationally demanding kernelization of test data with training data and kernel image projections have been programmed to run on massively parallel CUDA-enabled graphics processors, when available, giving...

  9. Evidence-Based Kernels: Fundamental Units of Behavioral Influence

    Science.gov (United States)

    Embry, Dennis D.; Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…

  10. Screening of the kernels of Pentadesma butyracea from various ...

    African Journals Online (AJOL)

    butyracea was used to retard the ageing of skin in patented cosmetic preparation (Courtin, 1986). So far, the processing of the P. butyracea kernels into butter is artisanal and rather a tedious activity done by rural women (Sinsin and Sinadouwirou, 2003). Basically, the butter extraction from the P. butyracea kernel involves ...

  11. Oven-drying reduces ruminal starch degradation in maize kernels

    NARCIS (Netherlands)

    Ali, M.; Cone, J.W.; Hendriks, W.H.; Struik, P.C.

    2014-01-01

    The degradation of starch largely determines the feeding value of maize (Zea mays L.) for dairy cows. Normally, maize kernels are dried and ground before chemical analysis and determining degradation characteristics, whereas cows eat and digest fresh material. Drying the moist maize kernels

  12. Nutritional status of palm kernel meal inoculated with Trichoderma ...

    African Journals Online (AJOL)

    The ability of Trichoderma harzianum to improve the nutritional status of palm kernel meal (PKM) was assessed over forty days of fermentation. Fermentation within this time period induced various changes in the proximate and mineral analysis of the palm kernel meal. Comparatively, the highest crude protein and ether ...

  13. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Lund, Torben Ellegaard

    2011-01-01

    that the performance of linear models is reduced for certain scan labelings/categorizations in this data set, while the nonlinear models provide more flexibility. We show that the sensitivity map can be used to visualize nonlinear versions of kernel logistic regression, the kernel Fisher discriminant, and the SVM...

  14. Ambered kernels in stenospermocarpic fruit of eastern black walnut

    Science.gov (United States)

    Michele R. Warmund; J.W. Van Sambeek

    2014-01-01

    "Ambers" is a term used to describe poorly filled, shriveled eastern black walnut (Juglans nigra L.) kernels with a dark brown or black-colored pellicle that are unmarketable. Studies were conducted to determine the incidence of ambered black walnut kernels and to ascertain when symptoms were apparent in specific tissues. The occurrence of...

  15. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  16. Efficient Kernel-based 2DPCA for Smile Stages Recognition

    Directory of Open Access Journals (Sweden)

    Fitri Damayanti

    2012-03-01

    Full Text Available Recently, an approach called two-dimensional principal component analysis (2DPCA) has been proposed for smile stages representation and recognition. The essence of 2DPCA is that it computes the eigenvectors of the so-called image covariance matrix without matrix-to-vector conversion; the image covariance matrix is therefore much smaller and easier to evaluate, the computational cost is reduced, and the performance is improved over traditional PCA. In an effort to improve the performance of smile stages recognition, in this paper we propose efficient kernel-based 2DPCA concepts. Kernelizing 2DPCA makes it possible to capture nonlinear structure in the input data. This paper discusses a comparison of standard kernel-based 2DPCA and efficient kernel-based 2DPCA for smile stages recognition. The results of experiments show that kernel-based 2DPCA achieves better performance in comparison with the other approaches, while the use of efficient kernel-based 2DPCA can speed up the training procedure of standard kernel-based 2DPCA; the algorithm thus achieves much greater computational efficiency and remarkably lower memory consumption than standard kernel-based 2DPCA.
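
    The "no matrix-to-vector conversion" point is the whole trick, and it fits in a few lines. Below is a minimal Python sketch of plain (linear) 2DPCA under our own naming; the kernelized variants discussed above build on the same eigen-decomposition.

```python
import numpy as np

def twod_pca(images, n_components=5):
    # images: (n, h, w). Eigen-decompose the w x w image covariance
    # G = mean_i (A_i - Abar)^T (A_i - Abar); no image flattening needed.
    A = np.asarray(images, float)
    D = A - A.mean(axis=0)
    G = np.einsum('nhw,nhv->wv', D, D) / len(A)
    vals, vecs = np.linalg.eigh(G)              # ascending eigenvalues
    W = vecs[:, ::-1][:, :n_components]         # top eigenvectors
    return np.matmul(A, W)                      # (n, h, k) feature matrices
```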

  17. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically

  18. Knowledge-Based Green's Kernel for Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Tahir Farooq

    2010-01-01

    Full Text Available This paper presents a novel prior knowledge-based Green's kernel for support vector regression (SVR). After reviewing the correspondence between support vector kernels used in support vector machines (SVMs) and regularization operators used in regularization networks, and the use of the Green's function of their corresponding regularization operators to construct support vector kernels, a mathematical framework is presented to obtain the domain knowledge about the magnitude of the Fourier transform of the function to be predicted and to design a prior knowledge-based Green's kernel that exhibits optimal regularization properties by using the concept of matched filters. The matched filter behavior of the proposed kernel function makes it suitable for signals corrupted with noise, which includes many real world systems. We conduct several experiments, mostly using benchmark datasets, to compare the performance of our proposed technique with the results already published in the literature for other existing support vector kernels over a variety of settings including different noise levels, noise models, loss functions, and SVM variations. Experimental results indicate that the knowledge-based Green's kernel could be seen as a good choice among the other candidate kernel functions.

  19. Palm kernel agar: An alternative culture medium for rapid detection ...

    African Journals Online (AJOL)

    Palm kernel agar: An alternative culture medium for rapid detection of aflatoxins in agricultural commodities. ... a pink background and blue or blue-green fluorescence on palm kernel agar under long-wave UV light (366 nm), as against the white background of DCA, which often interferes with fluorescence, with corresponding ...

  20. Palm kernel shell as aggregate for light weight concrete | Idah ...

    African Journals Online (AJOL)

    In this study, the effect of replacing conventional gravel with palm kernel shell as aggregate in making concrete was investigated. Several volumes of palm kernel shells were used in four (4) different proportions with the other constituents, and the strength of the concretes produced was tested to ascertain the effect of ...

  1. Real time kernel performance monitoring with SystemTap

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    SystemTap is a dynamic method of monitoring and tracing the operation of a running Linux kernel. In this talk I will present a few practical use cases where SystemTap allowed me to turn otherwise complex userland monitoring tasks into simple kernel probes.

  2. A multi-scale kernel bundle for LDDMM

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard

    2011-01-01

    The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations...

  3. Base catalyzed transesterification of wild apricot kernel oil for ...

    African Journals Online (AJOL)

    Prunus armeniaca L. grows wild and is also cultivated at higher altitudes in temperate regions of Pakistan. Its kernel is a rich source of oil, but its biodiesel production properties have not yet been exploited. During the present investigation, some quality parameters of the kernel oil, like acid value and free fatty acid content (as oleic ...

  4. The direction of turbulent heat flux by a direct measurement was opposite to an indirect estimation over calm oceans in summer

    Science.gov (United States)

    Ando, Y.; Tachibana, Y.; Konda, M.; Maekawa, Y.; Nakamura, T.; Okada, K.

    2016-12-01

    In order to understand air-sea interaction in the climate system, direct measurement of turbulent flux (the eddy covariance method) is necessary alongside indirect estimates such as the bulk method. However, direct measurement over long periods is rarely achieved because of many difficulties. Most of these difficulties come from the moving platform: unlike on land, the anemometer is not fixed but moves with the ship, and correcting the wind velocity for ship motion is a very difficult issue. An on-board eddy covariance system developed for this purpose was installed in 2009 on top of the foremast of the training ship SEISUIMARU (Mie University, Japan). The SEISUIMARU cruises the northwest Pacific Ocean, especially along the coast of Japan, every year, and the system operates routinely on all of her cruises. When these kinds of continuous measurements are integrated over these areas, a reliable database of directly measured turbulent fluxes can be established. Using this database, we compared turbulent heat fluxes from the bulk method with those from the eddy covariance method. The results showed some differences under certain conditions; in some cases, the direction of the turbulent heat fluxes was opposite. This might be because the sea surface temperatures (SSTs) used in the bulk method were temperatures at a depth of 3 m rather than true surface temperatures. SSTs are warmer than bulk 3 m temperatures under high solar radiation and low wind, as expected from sea surface warming effects over calm oceans in summer. The difference between turbulent heat fluxes from the eddy covariance method and those from the bulk method was positively correlated with solar radiation and negatively correlated with surface wind speed. These results were clearer in the sensible heat fluxes than in the latent heat fluxes, and suggest one cause of the difference between the turbulent heat fluxes
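
    In idealized form, the two flux estimates being compared reduce to the formulas sketched below. The constant bulk transfer coefficient c_h is an assumption made for illustration; operational bulk algorithms (e.g. COARE) compute it from atmospheric stability.

      import numpy as np

      RHO_AIR, CP_AIR = 1.2, 1004.0   # air density (kg/m^3), specific heat (J/kg/K)

      def sensible_heat_bulk(wind_speed, sst, t_air, c_h=1.2e-3):
          # Bulk estimate: H = rho * cp * C_H * U * (SST - Tair).
          # c_h here is a fixed illustrative transfer coefficient.
          return RHO_AIR * CP_AIR * c_h * wind_speed * (sst - t_air)

      def sensible_heat_eddy(w, t_air):
          # Eddy covariance estimate: H = rho * cp * mean(w' T'), from
          # motion-corrected vertical wind and air temperature time series.
          return RHO_AIR * CP_AIR * np.mean((w - w.mean()) * (t_air - t_air.mean()))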

  5. A simple mathematical procedure to estimate heat flux in machining using measured surface temperature with infrared laser

    Directory of Open Access Journals (Sweden)

    Hocine Mzad

    2015-09-01

    Full Text Available Several techniques have been developed over time for measuring the heat and temperatures generated in various manufacturing processes and tribological applications. Each technique has its own advantages and disadvantages, and the appropriate technique for temperature measurement depends on the application under consideration as well as on the available measurement tools. This paper presents a procedure for a simple and accurate determination of the time-varying heat flux at the workpiece–tool interface of three different metals under known cutting conditions. A portable infrared thermometer is used for surface temperature measurements. A spline smoothing interpolation of the surface temperature history makes it possible to determine the local heat flux produced during stock removal. The measured temperature is represented by a third-order spline approximation. Nonetheless, the accuracy of polynomial interpolation depends on how closely spaced the interpolated points are; an increase in degree cannot be used to increase the accuracy. Although the data analysis is relatively complicated, the computing time is very small.
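
    A minimal sketch of the spline-smoothing step with scipy follows. The synthetic temperature trace and the lumped-capacitance conversion from heating rate to heat flux are illustrative assumptions; the paper's actual inverse procedure is more elaborate.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      # Synthetic surface temperature history (s, K) with measurement noise
      t = np.linspace(0.0, 60.0, 61)
      T = 300.0 + 40.0 * (1.0 - np.exp(-t / 20.0)) \
          + np.random.default_rng(1).normal(0.0, 0.3, t.size)

      # Cubic smoothing spline (k=3 matches the third-order approximation)
      spl = UnivariateSpline(t, T, k=3, s=len(t) * 0.3 ** 2)
      dT_dt = spl.derivative()(t)               # smoothed heating rate, K/s

      # Illustrative lumped-capacitance conversion to a heat flux, W/m^2,
      # with assumed steel properties and an assumed active layer depth.
      rho, c, delta = 7800.0, 480.0, 1.0e-3
      q = rho * c * delta * dT_dt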

  6. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    Science.gov (United States)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to a linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.

  7. A COMPARISON STUDY OF DIFFERENT KERNEL FUNCTIONS FOR SVM-BASED CLASSIFICATION OF MULTI-TEMPORAL POLARIMETRY SAR DATA

    Directory of Open Access Journals (Sweden)

    B. Yekkehkhany

    2014-10-01

    Full Text Available In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to a linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.
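
    The kernel comparison itself is easy to reproduce in outline. The sketch below uses random stand-in features (so the reported accuracies are near chance) purely to show the mechanics; the SVC hyperparameters are arbitrary assumptions.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      # Stand-in for stacked multi-date polarimetric features and crop labels
      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 9))     # e.g. 3 dates x 3 H/A/alpha features
      y = rng.integers(0, 4, 500)           # 4 crop classes

      for kernel, params in [("linear", {}),
                             ("poly", {"degree": 3}),
                             ("rbf", {"gamma": "scale"})]:
          clf = SVC(kernel=kernel, C=10.0, **params)
          acc = cross_val_score(clf, X, y, cv=5).mean()
          print(f"{kernel:>6}: overall accuracy ~ {acc:.3f}")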

  8. Skin temperature and heart rate can be used to estimate physiological strain during exercise in the heat in a cohort of fit and unfit males.

    Science.gov (United States)

    Cuddy, John S; Buller, Mark; Hailes, Walter S; Ruby, Brent C

    2013-07-01

    To evaluate the previously developed physiological strain index (PSI) model using heart rate and skin temperature, to provide further insight into the detection and estimation of thermal and physiological heat strain. A secondary aim was to characterize individuals who excel in their performance in the heat. 56 male participants completed 2 walking trials (3.5 miles per hour, 5% grade) in controlled environments of 43.3 °C and 15.5 °C (40% humidity). Core and skin temperature, along with heart rate and PSI, were continually monitored during exercise, and participants completed a physical fitness test. The logistic regression model exhibited 4 false positives and 1 false negative at the 40% decision boundary. The "Not at Risk" group (N = 33) had higher body weight (84 ± 13 vs. 77 ± 10 kg) than the "At Risk" group (N = 23). During the Heat Trial, the "At Risk" group had a higher rating of perceived exertion at 60 and 90 minutes than the "Not at Risk" group (13.5 ± 2.8 vs. 11.5 ± 1.8 and 14.8 ± 3.2 vs. 12.2 ± 2.0, respectively). A model relating heart rate and skin temperature to PSI is highly accurate at assessing heat risk status. Participants classified as "At Risk" had lower physical performance scores and different body weights compared with the "Not at Risk" group, and perceived themselves as working harder during exercise in the heat. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
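
    For reference, the PSI referred to here is the standard 0-10 index of Moran et al. (1998); the study's contribution is estimating it from heart rate and skin temperature without a core temperature sensor. A direct implementation of the index itself:

      def physiological_strain_index(t_core, t_core0, hr, hr0):
          # PSI (Moran et al., 1998): equal 0-5 thermal and cardiovascular
          # components, referenced to 39.5 deg C and 180 bpm ceilings.
          thermal = 5.0 * (t_core - t_core0) / (39.5 - t_core0)
          cardio = 5.0 * (hr - hr0) / (180.0 - hr0)
          return thermal + cardio

      # 38.4 deg C and 150 bpm during exercise from a 37.0 / 70 baseline
      print(physiological_strain_index(38.4, 37.0, 150, 70))   # ~6.4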

  9. Evaluation of palm kernel fibers (PKFs) for production of asbestos-free automotive brake pads

    Directory of Open Access Journals (Sweden)

    K.K. Ikpambese

    2016-01-01

    Full Text Available In this study, asbestos-free automotive brake pads produced from palm kernel fibers with an epoxy-resin binder were evaluated. Formulations were varied, and properties such as friction coefficient, wear rate, hardness, porosity, noise level, temperature, specific gravity, stopping time, moisture effects, surface roughness, and oil and water absorption rates were investigated, together with a microstructure examination. Other basic engineering properties (mechanical overload, thermal deformation, fading behaviour, shear strength, cracking resistance, over-heat recovery, effect on the rotor disc, caliper pressure, pad grip and pad dusting) were also investigated. The results obtained indicated that the wear rate, coefficient of friction, noise level, temperature, and stopping time of the produced brake pads increased as the speed increased. The results also show that porosity, hardness, moisture content, specific gravity, surface roughness, and oil and water absorption rates remained constant with increasing speed. The microstructure examination revealed that worn surfaces were characterized by abrasion wear in which the asperities were ploughed, exposing the white region of palm kernel fibers and thus increasing the smoothness of the friction materials. Sample S6, with a composition of 40% epoxy-resin, 10% palm wastes, 6% Al2O3, 29% graphite, and 15% calcium carbonate, gave the best properties. The results indicate that palm kernel fibers can be effectively used as a replacement for asbestos in brake pad production.

  10. Syngas production from olive tree cuttings and olive kernels in a downdraft fixed-bed gasifier

    Energy Technology Data Exchange (ETDEWEB)

    Skoulou, V.; Zabaniotou, A. [Laboratory of Plant Design, Department of Chemical Engineering, Aristotle University of Thessaloniki, University Box 455, University Campus, Thessaloniki 54124 (Greece); Stavropoulos, G.; Sakelaropoulos, G. [Chemical Process Engineering Laboratory (CPEL), Department of Chemical Engineering, Aristotle University of Thessaloniki, University Box 455, University Campus, Thessalonki 54124 (Greece)

    2008-02-15

    This study presents a laboratory fixed-bed gasification of olive kernels and olive tree cuttings. Gasification took place with air, in a temperature range of 750-950 °C, for various air equivalence ratios (0.14-0.42) and under atmospheric pressure. In each run, the main components of the gas phase were CO, CO₂, H₂ and CH₄. Experimental results showed that gasification with air at high temperatures (950 °C) favoured gas yields. Syngas production increased with reactor temperature, while CO₂, CH₄, light hydrocarbons and tar followed the opposite trend. An increase of the air equivalence ratio decreased syngas production and lowered the product gas heating value, while favouring tar destruction. It was found that gas from olive tree cuttings at 950 °C and with an air equivalence ratio of 0.42 had a higher LHV (9.41 MJ/Nm³) in comparison to olive kernels (8.60 MJ/Nm³). Olive kernels produced more char with a higher content of fixed carbon (16.39 w/w%) than olive tree cuttings; thus, they might be considered an attractive source for carbonaceous material production. (author)
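
    The reported heating values can be sanity-checked from the gas composition. The sketch below uses commonly tabulated approximate per-component LHVs and an invented composition, not the paper's measured values.

      # Approximate lower heating values of combustible components, MJ/Nm^3
      LHV = {"CO": 12.6, "H2": 10.8, "CH4": 35.8}

      def gas_lhv(vol_fractions):
          # LHV of the product gas from volume fractions (0-1 per component);
          # inert CO2 and N2 contribute nothing.
          return sum(LHV.get(g, 0.0) * x for g, x in vol_fractions.items())

      # Invented composition, roughly typical of air-blown gasification
      print(gas_lhv({"CO": 0.22, "H2": 0.16, "CH4": 0.03,
                     "CO2": 0.12, "N2": 0.47}))   # ~5.6 MJ/Nm^3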

  11. 3-D waveform tomography sensitivity kernels for anisotropic media

    KAUST Repository

    Djebbi, Ramzi

    2014-01-01

    The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to efficiently handle the expensive computational cost for 3-D anisotropic models. Ray tracing also provides the ray direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed a maximum sensitivity for diving waves, which makes those parameters a relevant choice for wave-equation tomography. The δ parameter kernel showed zero sensitivity; therefore it can serve as a secondary parameter to fit the amplitude in acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration-velocity-analysis-based kernels are introduced to fix the depth ambiguity with reflections and to compute sensitivity maps in the deeper parts of the model.

  12. CRKSPH - A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme

    CERN Document Server

    Frontiere, Nicholas; Owen, J Michael

    2016-01-01

    We present a formulation of smoothed particle hydrodynamics (SPH) that employs a first-order consistent reproducing kernel function, exactly interpolating linear fields with particle tracers. Previous formulations using reproducing kernel (RK) interpolation have had difficulties maintaining conservation of momentum due to the fact that RK kernels are not, in general, spatially symmetric. Here, we utilize a reformulation of the fluid equations such that mass, momentum, and energy are all manifestly conserved without any assumption about kernel symmetries. Additionally, by exploiting the increased accuracy of the RK method's gradient, we formulate a simple limiter for the artificial viscosity that reduces the excess diffusion normally incurred by the ordinary SPH artificial viscosity. Collectively, we call our suite of modifications to the traditional SPH scheme Conservative Reproducing Kernel SPH, or CRKSPH. CRKSPH retains the benefits of traditional SPH methods (such as preserving Galilean invariance and manif...
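
    The reproducing-kernel correction at the core of such schemes is easy to show in one dimension: correction coefficients are chosen pointwise so that constants and linear fields are interpolated exactly. This is a generic first-order RK sketch, not the CRKSPH discretization itself.

      import numpy as np

      def cubic_spline_kernel(r, h):
          # Standard 1-D cubic B-spline SPH kernel with support 2h
          q = np.abs(r) / h
          w = np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
                       np.where(q < 2, 0.25 * (2 - q)**3, 0.0))
          return w * (2.0 / (3.0 * h))

      def rk_interpolate(x_eval, x_p, vol_p, f_p, h):
          # First-order reproducing kernel: W_R = [A + B*(x_j - x)] * W,
          # with A, B solved pointwise so constants and linear fields are
          # reproduced exactly regardless of particle disorder.
          d = x_p - x_eval
          w = cubic_spline_kernel(d, h)
          m0, m1, m2 = (np.sum(vol_p * d**n * w) for n in (0, 1, 2))
          A, B = np.linalg.solve([[m0, m1], [m1, m2]], [1.0, 0.0])
          return np.sum(vol_p * (A + B * d) * w * f_p)

      x_p = np.sort(np.random.default_rng(0).uniform(0, 1, 50))
      vol = np.gradient(x_p)                    # crude per-particle volumes
      print(rk_interpolate(0.5, x_p, vol, 3.0 * x_p + 1.0, h=0.08))   # ~2.5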

  13. Open Problem: Kernel methods on manifolds and metric spaces

    DEFF Research Database (Denmark)

    Feragen, Aasa; Hauberg, Søren

    2016-01-01

    Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong linear properties. This negative result hints that radial kernels are perhaps not suitable over geodesic metric spaces after all. Here, however, we present evidence that large intervals of bandwidths exist where geodesic exponential kernels have a high probability of being positive definite over finite datasets, while still having significant predictive power. From this we formulate conjectures on the probability of a positive definite kernel matrix for a finite random sample, depending on the geometry of the data space and the spread of the sample.
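
    The finite-sample phenomenon described here can be probed numerically by building the kernel matrix from pairwise geodesic distances and checking its smallest eigenvalue. The sketch below uses the Gaussian member of the geodesic exponential family on a toy circle geometry; the example space and bandwidths are assumptions, not the paper's datasets.

      import numpy as np

      def geodesic_gauss_kernel(D, s):
          # k(x, y) = exp(-d(x, y)^2 / (2 s^2)) from pairwise geodesic distances
          return np.exp(-D**2 / (2.0 * s**2))

      # Toy metric space: points on the unit circle with arc-length distance
      theta = np.random.default_rng(0).uniform(0, 2 * np.pi, 40)
      diff = np.abs(theta[:, None] - theta[None, :])
      D = np.minimum(diff, 2 * np.pi - diff)

      for s in (0.1, 0.5, 1.0, 2.0):
          K = geodesic_gauss_kernel(D, s)
          print(s, np.linalg.eigvalsh(K).min() > -1e-10)   # PD on this sample?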

  14. Nonparametric estimation in trend-renewal processes

    OpenAIRE

    Haugedal, Per Erik

    2017-01-01

    This thesis gives an introduction to stochastic modeling of repairable systems with failure and maintenance data, in particular the nonhomogeneous Poisson process and the trend-renewal process. It studies kernel-based methods for nonparametric estimation of the trend function of trend-renewal processes and presents a method using weighted kernel estimation, where the weights are found by maximizing the likelihood function in which they appear. The method is then tested on both real ...
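
    An unweighted kernel estimate of a point-process trend function has the generic form sketched below; the likelihood-based weighting proposed in the thesis is omitted, and the event times are invented for illustration.

      import numpy as np
      from scipy.stats import norm

      def kernel_trend_estimate(t_grid, event_times, h, t_end):
          # Gaussian kernel estimate of the trend (intensity) function from
          # event times, with a simple boundary correction on [0, t_end].
          t = np.asarray(t_grid, dtype=float)
          s = np.asarray(event_times, dtype=float)
          raw = norm.pdf((t[:, None] - s[None, :]) / h).sum(axis=1) / h
          # fraction of each kernel's mass inside the observation window
          edge = norm.cdf((t_end - t) / h) - norm.cdf(-t / h)
          return raw / edge

      # Failures of a deteriorating system (illustrative times, in hours)
      events = [12.0, 20.5, 26.0, 29.1, 31.0, 32.4, 33.9]
      print(kernel_trend_estimate(np.linspace(0, 35, 8), events, h=4.0, t_end=35.0))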

  15. Coupled Estimation of Surface Heat fluxes and Vegetation Dynamics From Remotely Sensed Land Surface Temperature and Fraction of Photosynthetically Active Radiation

    Science.gov (United States)

    Castelli, F.; Bateni, S.; Entekhabi, D.

    2011-12-01

    Remotely sensed Land Surface Temperature (LST) and Fraction of Photosynthetically Active Radiation absorbed by vegetation (FPAR) are assimilated respectively into the Surface Energy Balance (SEB) equation and a Vegetation Dynamics Model (VDM) in order to estimate surface fluxes and vegetation dynamics. The problem is posed in terms of three unknown and dimensionless parameters: (1) neutral bulk heat transfer coefficient that scales the sum of turbulent fluxes, (2) evaporative fractions for soil and canopy, which represent partitioning among the turbulent fluxes over soil and vegetation, and (3) specific leaf area, which captures seasonal phenology and vegetation dynamics. The model is applied over the Gourma site in Mali, the northern edge of the West African Monsoon (WAM) domain. The application of model over the Gourma site shows that remotely sensed FPAR observations can constrain the VDM and retrieve its main unknown parameter (specific leaf area) over large-scale domains without costly in situ measurements. The results indicate that the estimated specific leaf area values vary reasonably with the influential environmental variables such as precipitation, air temperature, and solar radiation. Assimilating FPAR observations into the VDM can also provide Leaf Area Index (LAI) dynamics. The retrieved LAI values are comparable in magnitude, spatial pattern and temporal evolution with observations. Moreover, it is demonstrated that the spatial patterns of estimated neutral bulk heat transfer coefficient resemble those of observed vegetation index even though no explicit information on vegetation phenology is used in the model. Furthermore, the day-to-day variations in the retrieved evaporative fraction values are consistent with wetting and drydown events. Finally, it is found that evaporative fraction is strongly correlated to LAI when soil surface is dry because in this condition soil evaporation is an insignificant component of latent heat flux, and therefore

  16. Uniform Consistency for Nonparametric Estimators in Null Recurrent Time Series

    DEFF Research Database (Denmark)

    Gao, Jiti; Kanaya, Shin; Li, Degui

    2015-01-01

    This paper establishes uniform consistency results for nonparametric kernel density and regression estimators when the time series regressors concerned are nonstationary null recurrent Markov chains. Under suitable regularity conditions, we derive uniform convergence rates of the estimators. Our resu...
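
    The regression estimator in question is of Nadaraya-Watson type. A generic sketch follows, with a random walk standing in for a null recurrent regressor; the formula is the usual stationary-case one, and only its asymptotic behaviour differs in the null recurrent setting.

      import numpy as np

      def nadaraya_watson(x_eval, x, y, h):
          # Gaussian-kernel regression estimate of E[y | x]
          u = (np.asarray(x_eval)[:, None] - np.asarray(x)[None, :]) / h
          w = np.exp(-0.5 * u**2)
          return (w * y).sum(axis=1) / w.sum(axis=1)

      rng = np.random.default_rng(0)
      x = np.cumsum(rng.standard_normal(1000))   # random walk: null recurrent
      y = np.sin(x / 5.0) + 0.2 * rng.standard_normal(1000)
      grid = np.linspace(x.min(), x.max(), 5)
      print(nadaraya_watson(grid, x, y, h=1.0))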

  17. Image quality of mixed convolution kernel in thoracic computed tomography

    Science.gov (United States)

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-01-01

    The mixed convolution kernel alters its properties regionally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels and abdomen (P < 0.004) but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT. PMID:27858910

  18. 3-D sensitivity kernels of the Rayleigh wave ellipticity

    Science.gov (United States)

    Maupin, Valérie

    2017-10-01

    The ellipticity of the Rayleigh wave at the surface depends on the seismic structure beneath and in the vicinity of the seismological station where it is measured. Here we derive the expressions for, and compute, the 3-D kernels that describe this dependence with respect to S-wave velocity, P-wave velocity and density. Near-field terms as well as coupling to Love waves are included in the expressions. We show that the ellipticity kernels are the difference between the amplitude kernels of the radial and vertical components of motion. They show maximum values close to the station, but with a complex pattern, even when smoothing over a finite frequency range is used to remove the oscillatory pattern present in mono-frequency kernels. In order to follow the usual data-processing flow, we also compute and analyse the ellipticity kernels averaged over the backazimuth of the incoming wave. The kernel with respect to P-wave velocity has the simplest lateral variation and is in good agreement with commonly used 1-D kernels. The kernels with respect to S-wave velocity and density are more complex, and we have not been able to find a good correlation between the 3-D and 1-D kernels. Although it is clear that the ellipticity is mostly sensitive to the structure within half a wavelength of the station, the complexity of the kernels within this zone prevents simple approximations, such as a depth dependence times a lateral variation, from being useful in the inversion of the ellipticity.

  19. Image quality of mixed convolution kernel in thoracic computed tomography.

    Science.gov (United States)

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties regionally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels and abdomen (P < 0.004) but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  20. Kernel descriptors for chest x-ray analysis

    Science.gov (United States)

    Orbán, Gergely Gy.; Horváth, Gábor

    2017-03-01

    In this study, we address the problem of lesion classification in radiographic scans. We adapt image kernel functions to be applicable to high-resolution grayscale images, to improve the classification accuracy of a support vector machine. We take existing kernel functions inspired by the histogram of oriented gradients and derive an approximation that can be evaluated in time linear in the image size instead of the original quadratic complexity, enabling high-resolution input. Moreover, we propose a new variant inspired by the matched filter, to better utilize intensity space. The new kernels are made scale-invariant and combined with a Gaussian kernel built from handcrafted image features. We introduce a simple multiple kernel learning framework that is robust when one of the kernels, in the current case the image feature kernel, dominates the others. The combined kernel is input to a support vector classifier. We tested our method on lesion classification both in chest radiographs and digital tomosynthesis scans. The radiographs originated from a database including 364 patients with lung nodules and 150 healthy cases. The digital tomosynthesis scans were obtained by simulation using 91 CT scans from the LIDC-IDRI database as input. The new kernels showed good separation capability: ROC AuC was in [0.827, 0.853] for the radiograph database and 0.763 for the tomosynthesis scans. Adding the new kernels to the image-feature-based classifier significantly improved accuracy: AuC increased from 0.958 to 0.967 and from 0.788 to 0.801 for the two applications.
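
    Combining a fixed set of kernels for an SVM, as done here, can be sketched with precomputed Gram matrices. The RBF stand-ins below replace the paper's HOG- and matched-filter-inspired image kernels, and the data, mixing weight and hyperparameters are all invented for illustration.

      import numpy as np
      from sklearn.svm import SVC

      def rbf_gram(A, B, gamma):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      rng = np.random.default_rng(0)
      pixels = rng.standard_normal((120, 64))    # flattened lesion patches
      feats = rng.standard_normal((120, 10))     # handcrafted image features
      labels = rng.integers(0, 2, 120)

      # Fixed-weight combination of an image kernel and a feature kernel
      w = 0.5
      K = w * rbf_gram(pixels, pixels, 0.01) + (1 - w) * rbf_gram(feats, feats, 0.1)

      clf = SVC(kernel="precomputed", C=1.0).fit(K, labels)
      print(clf.score(K, labels))                # training accuracy, toy data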