Hamilton's gradient estimate for the heat kernel on complete manifolds
Kotschwar, Brett
2007-01-01
In this paper we extend a gradient estimate of R. Hamilton for positive solutions to the heat equation on closed manifolds to bounded positive solutions on complete, non-compact manifolds with $Rc \geq -Kg$. We accomplish this extension via a maximum principle of L. Karp and P. Li and a Bernstein-type estimate on the gradient of the solution. An application of our result, together with the bounds of P. Li and S.T. Yau, yields an estimate on the gradient of the heat kernel for complete manifol...
Variable Kernel Density Estimation
Terrell, George R.; Scott, David W.
1992-01-01
We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
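The two approaches contrasted here can be sketched in a few lines. The following is a minimal illustration, not the authors' construction: `balloon_knn_density` varies the window with the point of estimation (a nearest-neighbour rule), while `sample_point_density` gives each observation its own width via Abramson's square-root law. Function names and defaults are ours.

```python
import numpy as np

SQRT2PI = np.sqrt(2.0 * np.pi)

def balloon_knn_density(x, data, k=10):
    """Balloon estimator: the window width varies with the point of
    estimation x, here taken as the distance to the k-th nearest sample."""
    h = np.sort(np.abs(data - x))[k - 1]
    u = (data - x) / h
    return np.mean(np.exp(-0.5 * u * u)) / (h * SQRT2PI)

def sample_point_density(x, data, h0=0.3):
    """Sample-point estimator: each observation carries its own width,
    shrunk where a fixed-width pilot estimate says the density is high
    (Abramson's square-root law, h_i proportional to pilot^(-1/2))."""
    diffs = data[:, None] - data[None, :]
    pilot = np.exp(-0.5 * (diffs / h0) ** 2).mean(axis=1) / (h0 * SQRT2PI)
    g = np.exp(np.mean(np.log(pilot)))   # geometric mean, for scale invariance
    h = h0 * np.sqrt(g / pilot)
    u = (data - x) / h
    return np.mean(np.exp(-0.5 * u * u) / (h * SQRT2PI))
```

Both reduce to the ordinary fixed-width estimator when the local widths are constant.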
Lévy Matters VI: Lévy-type processes: moments, construction and heat kernel estimates
Kühn, Franziska
2017-01-01
Presenting some recent results on the construction and the moments of Lévy-type processes, the focus of this volume is on a new existence theorem, which is proved using a parametrix construction. Applications range from heat kernel estimates for a class of Lévy-type processes to existence and uniqueness theorems for Lévy-driven stochastic differential equations with Hölder continuous coefficients. Moreover, necessary and sufficient conditions for the existence of moments of Lévy-type processes are studied and some estimates on moments are derived. Lévy-type processes behave locally like Lévy processes but, in contrast to Lévy processes, they are not homogeneous in space. Typical examples are processes with varying index of stability and solutions of Lévy-driven stochastic differential equations. This is the sixth volume in a subseries of the Lecture Notes in Mathematics called Lévy Matters. Each volume describes a number of important topics in the theory or applications of Lévy processes and pays ...
DEFF Research Database (Denmark)
Gimperlein, Heiko; Grubb, Gerd
2014-01-01
The purpose of this article is to establish upper and lower estimates for the integral kernel of the semigroup exp(−tP) associated to a classical, strongly elliptic pseudodifferential operator P of positive order on a closed manifold. The Poissonian bounds generalize those obtained for perturbations of fractional powers of the Laplacian. In the selfadjoint case, extensions to t ∈ C₊ are studied. In particular, our results apply to the Dirichlet-to-Neumann semigroup.
Global Polynomial Kernel Hazard Estimation
DEFF Research Database (Denmark)
Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch
2015-01-01
This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...
Heat kernel analysis for Bessel operators on symmetric cones
DEFF Research Database (Denmark)
Möllers, Jan
2014-01-01
The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...
Heat kernels and zeta functions on fractals
International Nuclear Information System (INIS)
Dunne, Gerald V
2012-01-01
On fractals, spectral functions such as heat kernels and zeta functions exhibit novel features, very different from their behaviour on regular smooth manifolds, and these can have important physical consequences for both classical and quantum physics in systems having fractal properties. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’. (paper)
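The spectral functions named here are linked by a Mellin transform: the zeta function of the Laplacian is recovered from the heat trace (zero modes omitted), and on a fractal the small-time behaviour of the trace is governed by the spectral dimension d_s rather than the topological dimension:

```latex
\zeta(s) = \sum_{\lambda_n > 0} \lambda_n^{-s}
         = \frac{1}{\Gamma(s)} \int_0^\infty t^{s-1}\,
           \operatorname{Tr}'\!\left(e^{-t\Delta}\right) dt,
\qquad
\operatorname{Tr}\left(e^{-t\Delta}\right) \sim C\, t^{-d_s/2}
\quad (t \to 0^+),
```

where the primed trace omits the zero eigenvalue; on many fractals the leading power law is additionally modulated by log-periodic oscillations, one of the novel features the article discusses.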
Variable kernel density estimation in high-dimensional feature spaces
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2017-02-01
Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
Consistent Estimation of Pricing Kernels from Noisy Price Data
Vladislav Kargin
2003-01-01
If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.
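A minimal sketch of the constrained least squares step (not Kargin's estimator in detail, and the entropy-based method is not shown): with payoff matrix X and observed prices p, find a nonnegative state-price vector m minimizing ||Xm - p||^2. A projected-gradient loop suffices for illustration; variable names are ours.

```python
import numpy as np

def pricing_kernel_cls(X, p, iters=20000):
    """Constrained least squares: minimize ||X m - p||^2 subject to m >= 0.
    X[i, j] = payoff of asset i in state j; p[i] = observed price of asset i."""
    lr = 1.0 / np.linalg.norm(X, 2) ** 2       # step size below 1/L for this objective
    m = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ m - p)
        m = np.maximum(m - lr * grad, 0.0)     # project onto the nonnegative orthant
    return m
```

With full-column-rank payoffs and noise-free prices this recovers the unique nonnegative solution; with noisy prices it gives the constrained least squares fit the abstract refers to.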
The Kernel Estimation in Biosystems Engineering
Directory of Open Access Journals (Sweden)
Esperanza Ayuga Téllez
2008-04-01
In many fields of biosystems engineering, it is common to find works in which statistical information is analysed that violates the basic hypotheses necessary for the conventional forecasting methods. For those situations, it is necessary to find alternative methods that allow the statistical analysis considering those infringements. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are used rather than "a priori" assumptions about the global shape of the target function (e.g., linear or quadratic). In this paper a few basic decision rules are enunciated for the application of the non-parametric estimation method. These statistical rules are the first step towards building a user-method interface for the consistent application of kernel estimation by non-expert users. To reach this aim, univariate and multivariate density estimation methods were analysed, as well as regression estimators. In some cases the models to be applied in different situations, based on simulations, were defined. Different biosystems engineering applications of the kernel estimation are also analysed in this review.
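As a concrete instance of the local-fitting idea described above (a generic sketch, not the interface discussed in the paper): a kernel density estimate and a Nadaraya-Watson kernel regression estimator, both needing only smoothness of the target function.

```python
import numpy as np

SQRT2PI = np.sqrt(2.0 * np.pi)

def kde(x, xs, h):
    """Kernel density estimate at x from sample xs (Gaussian kernel, width h)."""
    u = (x - xs) / h
    return np.mean(np.exp(-0.5 * u * u)) / (h * SQRT2PI)

def nadaraya_watson(x, xs, ys, h):
    """Kernel regression: a locally weighted average of the responses,
    using data from a neighbourhood of x instead of a global parametric form."""
    w = np.exp(-0.5 * ((x - xs) / h) ** 2)
    return np.sum(w * ys) / np.sum(w)
```

The bandwidth h plays the role of the "small neighbourhood" in the text: larger h means smoother but more biased fits.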
Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM
Directory of Open Access Journals (Sweden)
Chenchao Zhao
2018-01-01
Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machines compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
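For intuition about the exact eigenmode series (a simplified sketch, not the paper's general hyperspherical kernel): on the circle S^1, the lowest-dimensional case, the heat kernel is an elementary Fourier series, and a Gram matrix built from it can be fed to any kernel SVM. The truncation level nmax is our choice.

```python
import numpy as np

def circle_heat_kernel(theta, t, nmax=200):
    """Heat kernel on the unit circle S^1 (truncated eigenmode series):
    k_t(theta) = (1/(2*pi)) * (1 + 2 * sum_{n>=1} exp(-n^2 t) cos(n theta))."""
    n = np.arange(1, nmax + 1)
    series = np.sum(np.exp(-(n ** 2) * t) * np.cos(n * theta))
    return (1.0 + 2.0 * series) / (2.0 * np.pi)

def heat_gram(angles, t):
    """Similarity (Gram) matrix between points given by their angles."""
    m = len(angles)
    G = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            G[i, j] = circle_heat_kernel(angles[i] - angles[j], t)
    return G
```

The diffusion time t acts like a bandwidth: small t concentrates similarity on near neighbours, large t flattens it toward the uniform value 1/(2*pi).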
Discrete non-parametric kernel estimation for global sensitivity analysis
International Nuclear Information System (INIS)
Senga Kiessé, Tristan; Ventura, Anne
2016-01-01
This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. The discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Some simulations on a test function analysis and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of the ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.
Heat Kernel Asymptotics of Zaremba Boundary Value Problem
Energy Technology Data Exchange (ETDEWEB)
Avramidi, Ivan G. [Department of Mathematics, New Mexico Institute of Mining and Technology (United States)], E-mail: iavramid@nmt.edu
2004-03-15
The Zaremba boundary value problem is a boundary value problem for Laplace-type second-order partial differential operators acting on smooth sections of a vector bundle over a smooth compact Riemannian manifold with smooth boundary but with discontinuous boundary conditions, which include Dirichlet boundary conditions on one part of the boundary and Neumann boundary conditions on another part of the boundary. We study the heat kernel asymptotics of the Zaremba boundary value problem. The construction of the asymptotic solution of the heat equation is described in detail and the heat kernel is computed explicitly in the leading approximation. Some of the first nontrivial coefficients of the heat kernel asymptotic expansion are computed explicitly.
On Improving Convergence Rates for Nonnegative Kernel Density Estimators
Terrell, George R.; Scott, David W.
1980-01-01
To improve the rate of decrease of integrated mean square error for nonparametric kernel density estimators beyond $O(n^{-4/5})$, we must relax the constraint that the density estimate be a bona fide density function, that is, be nonnegative and integrate to one. All current methods for kernel (and orthogonal series) estimators relax the nonnegativity constraint. In this paper we show how to achieve similar improvement by relaxing the integral constraint only. This is important in appl...
Modelling microwave heating of discrete samples of oil palm kernels
International Nuclear Information System (INIS)
Law, M.C.; Liew, E.L.; Chang, S.L.; Chan, Y.S.; Leo, C.P.
2016-01-01
Highlights: • Microwave (MW) drying of oil palm kernels is experimentally determined and modelled. • MW heating of discrete samples of oil palm kernels (OPKs) is simulated. • OPK heating is due to contact effects, MW interference and heat transfer mechanisms. • Electric field vectors circulate within the OPK sample. • A loosely-packed arrangement improves the temperature uniformity of OPKs. - Abstract: Recently, microwave (MW) pre-treatment of fresh palm fruits has been shown to be environmentally friendly compared to the existing oil palm milling process, as it eliminates the condensate production of palm oil mill effluent (POME) in the sterilization process. Moreover, MW-treated oil palm fruits (OPF) also possess better oil quality. In this work, the MW drying kinetics of oil palm kernels (OPKs) were determined experimentally. Microwave heating/drying of oil palm kernels was modelled and validated. The simulation results show that the temperature of an OPK is not the same over the entire surface due to constructive and destructive interference of MW irradiance. The volume-averaged temperature of an OPK is higher than its surface temperature by 3–7 °C, depending on the MW input power. This implies that a point measurement of temperature is inadequate to determine the temperature history of the OPK during the microwave heating process. The simulation results also show that the arrangement of OPKs in a MW cavity affects the kernel temperature profile. The heating of OPKs was identified to be affected by factors such as local electric field intensity due to MW absorption, refraction and interference, the contact effect between kernels, and heat transfer mechanisms. The thermal gradient patterns of OPKs change as heating continues. The cracking of OPKs is expected to occur first in the core of the kernel and then propagate to the kernel surface. The model indicates that drying of OPKs is a much slower process than its MW heating. The model is useful
Improved Variable Window Kernel Estimates of Probability Densities
Hall, Peter; Hu, Tien Chung; Marron, J. S.
1995-01-01
Variable window width kernel density estimators, with the width varying proportionally to the square root of the density, have been thought to have superior asymptotic properties. The rate of convergence has been claimed to be as good as those typical for higher-order kernels, which makes the variable width estimators more attractive because no adjustment is needed to handle the negativity usually entailed by the latter. However, in a recent paper, Terrell and Scott show that these results ca...
On convergence of kernel learning estimators
Norkin, V.I.; Keyzer, M.A.
2009-01-01
The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKHS). The objective (risk) functional depends on functions from this RKHS and takes the form of a mathematical expectation (integral) of a nonnegative integrand (loss function) over a probability
The heat kernel as the pagerank of a graph
Chung, Fan
2007-01-01
The concept of pagerank was first started as a way for determining the ranking of Web pages by Web search engines. Based on relations in interconnected networks, pagerank has become a major tool for addressing fundamental problems arising in general graphs, especially for large information networks with hundreds of thousands of nodes. A notable notion of pagerank, introduced by Brin and Page and denoted by PageRank, is based on random walks as a geometric sum. In this paper, we consider a notion of pagerank that is based on the (discrete) heat kernel and can be expressed as an exponential sum of random walks. The heat kernel satisfies the heat equation and can be used to analyze many useful properties of random walks in a graph. A local Cheeger inequality is established, which implies that, by focusing on cuts determined by linear orderings of vertices using the heat kernel pageranks, the resulting partition is within a quadratic factor of the optimum. This is true, even if we restrict the volume of the small part separated by the cut to be close to some specified target value. This leads to a graph partitioning algorithm for which the running time is proportional to the size of the targeted volume (instead of the size of the whole graph).
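The exponential-sum definition in this abstract can be written down directly (a dense-matrix sketch with our variable names; Chung's algorithmic versions approximate this with sampled random walks rather than matrix powers):

```python
import numpy as np

def heat_kernel_pagerank(A, seed, t, kmax=100):
    """Heat kernel pagerank rho_{t,s} = e^{-t} * sum_k (t^k / k!) * s W^k,
    where W = D^{-1} A is the random-walk transition matrix and s (the seed)
    is a probability distribution over the vertices, as a row vector."""
    W = A / A.sum(axis=1, keepdims=True)
    rho = np.zeros_like(seed, dtype=float)
    term = seed.astype(float)            # s W^0
    coef = np.exp(-t)                    # e^{-t} t^0 / 0!
    for k in range(kmax + 1):
        rho += coef * term
        term = term @ W                  # advance the random walk one step
        coef *= t / (k + 1)              # next Poisson weight
    return rho
```

The Poisson weights e^{-t} t^k / k! decay super-exponentially, so a modest kmax truncates the series accurately; this is the "exponential sum of random walks" contrasted with PageRank's geometric sum.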
Kernel bandwidth estimation for non-parametric density estimation: a comparative study
CSIR Research Space (South Africa)
Van der Walt, CM
2013-12-01
We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
Moderate deviations principles for the kernel estimator of ...
African Journals Online (AJOL)
Abstract. The aim of this paper is to provide pointwise and uniform moderate deviations principles for the kernel estimator of a nonrandom regression function. Moreover, we give an application of these moderate deviations principles to the construction of confidence regions for the regression function.
Corruption clubs: empirical evidence from kernel density estimates
Herzfeld, T.; Weiss, Ch.
2007-01-01
A common finding of many analytical models is the existence of multiple equilibria of corruption. Countries characterized by the same economic, social and cultural background do not necessarily experience the same levels of corruption. In this article, we use Kernel Density Estimation techniques to
Optimal Bandwidth Selection for Kernel Density Functionals Estimation
Directory of Open Access Journals (Sweden)
Su Chen
2015-01-01
The choice of bandwidth is crucial to kernel density estimation (KDE) and kernel based regression. Various bandwidth selection methods for KDE and local least square regression have been developed in the past decade. It has been known that scale and location parameters are proportional to density functionals ∫γ(x)f²(x)dx with appropriate choice of γ(x), and furthermore equality of scale and location tests can be transformed to comparisons of the density functionals among populations. ∫γ(x)f²(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, the optimal bandwidth selection for KDFE of ∫γ(x)f²(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE. The idea underlying this method is to search for the optimal bandwidth by minimizing the mean square error (MSE) of the KDFE. Two main practical bandwidth selection techniques for the KDFE of ∫γ(x)f²(x)dx are provided: normal scale bandwidth selection (namely, the “Rule of Thumb”) and direct plug-in bandwidth selection. Simulation studies show that our proposed bandwidth selection methods are superior to existing density estimation bandwidth selection methods in estimating density functionals.
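The functional with γ(x) = 1, R(f) = ∫f(x)²dx, has a closed-form plug-in estimate with Gaussian kernels, since the convolution of two Gaussians is Gaussian. The sketch below uses the generic Silverman normal-scale bandwidth as the "rule of thumb"; the paper derives bandwidths tuned specifically to the functional, which differ from this stand-in.

```python
import numpy as np

def normal_scale_bandwidth(x):
    """Generic normal-scale ('rule of thumb') bandwidth, Silverman form."""
    return 1.06 * np.std(x, ddof=1) * len(x) ** (-1.0 / 5.0)

def int_f_squared(x, h=None):
    """Plug-in estimate of R(f) = int f(x)^2 dx.  With a Gaussian kernel,
    int f_hat^2 = (1/n^2) * sum_{i,j} phi_{h*sqrt(2)}(x_i - x_j),
    because K_h convolved with itself is a normal density with scale h*sqrt(2)."""
    if h is None:
        h = normal_scale_bandwidth(x)
    d = x[:, None] - x[None, :]
    s = h * np.sqrt(2.0)
    vals = np.exp(-0.5 * (d / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return vals.mean()
```

For a standard normal sample the target value is 1/(2*sqrt(pi)) ≈ 0.282, which the estimate approaches as n grows.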
Observing integrals of heat kernels from a distance
DEFF Research Database (Denmark)
Heat kernels have integrals such as Brownian motion mean exit time, potential capacity, and torsional rigidity. We show how to obtain bounds on these values - essentially by observing their behaviour in terms of the distance function from a point and then comparing with corresponding values in tailor-made warped product spaces. The results will be illustrated by applications to the so-called 'type' problem: How to decide if a given manifold or surface is transient (hyperbolic) or recurrent (parabolic). Specific examples of minimal surfaces and constant pressure dry foams will be shown and discussed as test cases. The talk is based on joint work with Vicente Palmer.
A multi-resolution approach to heat kernels on discrete surfaces
Vaxman, Amir; Ben-Chen, Mirela; Gotsman, Craig
2010-07-26
Studying the behavior of the heat diffusion process on a manifold is emerging as an important tool for analyzing the geometry of the manifold. Unfortunately, the high complexity of the computation of the heat kernel - the key to the diffusion process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel approximation method for the heat kernel at short times results in an efficient and robust algorithm for computing the heat kernels of detailed models. We show experimentally that our method can achieve good approximations in a fraction of the time required by traditional algorithms. Finally, we demonstrate how these heat kernels can be used to improve a diffusion-based feature extraction algorithm. © 2010 ACM.
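The key object can be reproduced at toy scale (a dense-eigendecomposition sketch with our names; the paper's contribution is precisely avoiding this cost on detailed meshes via multi-resolution and a short-time approximation):

```python
import numpy as np

def path_laplacian(n):
    """Combinatorial Laplacian L = D - A of a path graph on n vertices,
    a 1-D stand-in for a discrete surface Laplacian."""
    A = np.diag(np.ones(n - 1), 1)
    A = A + A.T
    return np.diag(A.sum(axis=1)) - A

def graph_heat_kernel(L, t):
    """Discrete heat kernel H_t = exp(-t L) via L = Phi Lambda Phi^T.
    H_t[i, j] is the amount of heat at vertex j after time t when a unit
    of heat starts at vertex i."""
    lam, phi = np.linalg.eigh(L)
    return (phi * np.exp(-t * lam)) @ phi.T
```

The full eigendecomposition is O(n^3), which is exactly why it "limits this type of analysis to 3D models of modest resolution" on real meshes.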
Heat kernel expansion in the background field formalism
Barvinsky, Andrei
2015-01-01
Heat kernel expansion and background field formalism represent the combination of two calculational methods within the functional approach to quantum field theory. This approach implies construction of generating functionals for matrix elements and expectation values of physical observables. These are functionals of arbitrary external sources or the mean field of a generic configuration -- the background field. Exact calculation of quantum effects on a generic background is impossible. However, a special integral (proper time) representation for the Green's function of the wave operator -- the propagator of the theory -- and its expansion in the ultraviolet and infrared limits of respectively short and late proper time parameter allow one to construct approximations which are valid on generic background fields. Current progress of quantum field theory, its renormalization properties, model building in unification of fundamental physical interactions and QFT applications in high energy physics, gravitation and...
The heating of UO_2 kernels in argon gas medium on the physical properties of sintered UO_2 kernels
International Nuclear Information System (INIS)
Damunir; Sri Rinanti Susilowati; Ariyani Kusuma Dewi
2015-01-01
The heating of UO_2 kernels in argon gas medium on the physical properties of sinter UO_2 kernels was conducted. The heated of the UO_2 kernels was conducted in a sinter reactor of a bed type. The sample used was the UO_2 kernels resulted from the reduction results at 800 °C temperature for 3 hours that had the density of 8.13 g/cm"3; porosity of 0.26; O/U ratio of 2.05; diameter of 1146 μm and sphericity of 1.05. The sample was put into a sinter reactor, then it was vacuumed by flowing the argon gas at 180 mmHg pressure to drain the air from the reactor. After that, the cooling water and argon gas were continuously flowed with the pressure of 5 mPa with 1.5 liter/minutes velocity. The reactor temperature was increased and variated at 1200-1500 °C temperature and for 1-4 hours. The sinters UO_2 kernels resulted from the study were analyzed in term of their physical properties including the density, porosity, diameter, sphericity, and specific surface area. The density was analyzed using pycnometer with CCl_4 solution. The porosity was determined using Haynes equation. The diameters and sphericity were showed using the Dino-lite microscope. The specific surface area was determined using surface area meter Nova-1000. The obtained products showed the the heating of UO_2 kernel in argon gas medium were influenced on the physical properties of sinters UO_2 kernel. The condition of best relatively at 1400 °C temperature and 2 hours time. The product resulted from the study was relatively at its best when heating was conducted at 1400 °C temperature and 2 hours time, produced sinters UO_2 kernel with density of 10.14 gr/ml; porosity of 7 %; diameters of 893 μm; sphericity of 1.07 and specific surface area of 4.68 m"2/g with solidify shrinkage of 22 %. (author)
Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K
2015-05-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template.
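At toy scale the weighted eigenfunction expansion reduces to expanding the data in Laplacian eigenvectors and damping each coefficient by exp(-t*lambda) (a graph-Laplacian sketch with our names, standing in for the Laplace-Beltrami operator on a surface mesh):

```python
import numpy as np

def cycle_laplacian(n):
    """Combinatorial Laplacian of a cycle graph (a closed 1-D 'surface')."""
    A = np.diag(np.ones(n - 1), 1)
    A[0, -1] = 1.0
    A = A + A.T
    return np.diag(A.sum(axis=1)) - A

def heat_kernel_smooth(L, y, t):
    """Weighted eigenfunction expansion: Phi exp(-t Lambda) Phi^T y.
    Equivalent to running isotropic heat diffusion on the signal y for time t."""
    lam, phi = np.linalg.eigh(L)
    return (phi * np.exp(-t * lam)) @ (phi.T @ y)
```

The diffusion time t is the smoothing bandwidth: the zero-eigenvalue (constant) mode passes unchanged, while rough high-frequency modes are damped exponentially.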
Modeling reactive transport with particle tracking and kernel estimators
Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-04-01
Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth Systems. Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate this system, an infinite number of particles is required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly-mixed system which is limited by diffusion. Recent works have used this effect to actually model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect in most cases should be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled based on alternative conceptual models and not on a limited number of particles.
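The KDE step described here is simple to state (a 1-D sketch with our names; the study itself concerns multidimensional reactive transport): each particle's mass is spread by a kernel instead of being binned into a single cell, so concentrations, and hence the reaction rates that depend on them, are evaluated over overlapping regions of influence.

```python
import numpy as np

def kde_concentration(xp, mass, xgrid, h):
    """Concentration field c(x) reconstructed from particle positions xp
    and masses with a Gaussian kernel of width h (instead of cell binning)."""
    u = (xgrid[:, None] - xp[None, :]) / h
    k = np.exp(-0.5 * u * u) / (h * np.sqrt(2.0 * np.pi))
    return k @ mass
```

Because each kernel integrates to one, total mass is conserved; widening h trades spatial resolution for a smoother, better-mixed concentration estimate from few particles.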
Directory of Open Access Journals (Sweden)
Yang Xiao-Jun
2016-01-01
In this article we propose a new fractional derivative without singular kernel. We consider the potential application for modeling the steady heat-conduction problem. The analytical solution of the fractional-order heat flow is also obtained by means of the Laplace transform.
A survey of kernel-type estimators for copula and their applications
Sumarjaya, I. W.
2017-10-01
Copulas have been widely used to model nonlinear dependence structure. Main applications of copulas include areas such as finance, insurance, hydrology, rainfall to name but a few. The flexibility of copula allows researchers to model dependence structure beyond Gaussian distribution. Basically, a copula is a function that couples multivariate distribution functions to their one-dimensional marginal distribution functions. In general, there are three methods to estimate copula. These are parametric, nonparametric, and semiparametric method. In this article we survey kernel-type estimators for copula such as mirror reflection kernel, beta kernel, transformation method and local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, albeit variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
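Of the kernel estimators surveyed, the mirror-reflection idea is the quickest to sketch (a minimal 1-D version with our names; for copulas it is applied per margin on the unit square): reflect the sample at both endpoints so that kernel mass leaking outside [0, 1] is folded back, removing the boundary bias of an ordinary KDE.

```python
import numpy as np

def mirror_kde(u, data, h):
    """Mirror-reflection KDE on [0, 1]: augment the sample with its
    reflections at 0 (x -> -x) and at 1 (x -> 2 - x), then apply an
    ordinary Gaussian KDE, normalised by the original sample size."""
    aug = np.concatenate([data, -data, 2.0 - data])
    z = (u - aug) / h
    return np.sum(np.exp(-0.5 * z * z)) / (len(data) * h * np.sqrt(2.0 * np.pi))
```

Without the reflections, a Gaussian KDE at u = 0 would report roughly half the true density, since half of each boundary kernel's mass falls outside [0, 1].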
On Convergence of Kernel Density Estimates in Particle Filtering
Czech Academy of Sciences Publication Activity Database
Coufal, David
2016-01-01
Roč. 52, č. 5 (2016), s. 735-756 ISSN 0023-5954 Grant - others:GA ČR(CZ) GA16-03708S; SVV(CZ) 260334/2016 Institutional support: RVO:67985807 Keywords : Fourier analysis * kernel methods * particle filter Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.379, year: 2016
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody's. However, it has a fatal defect: it cannot fit the bimodal or multimodal distributions, such as recovery rates of corporate loans and bonds, as Moody's new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
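The abstract's central contrast is easy to reproduce on synthetic data (a sketch with made-up clusters, not Moody's data): a method-of-moments Beta fit cannot show two interior modes, while a Gaussian KDE can.

```python
import numpy as np
from math import lgamma

def beta_mom_pdf_grid(grid, data):
    """Beta density on a grid in (0, 1), parameters by method of moments,
    as in the classical recovery-rate models the abstract mentions."""
    m, v = data.mean(), data.var()
    c = m * (1.0 - m) / v - 1.0
    a, b = m * c, (1.0 - m) * c
    logB = lgamma(a) + lgamma(b) - lgamma(a + b)
    return np.exp((a - 1) * np.log(grid) + (b - 1) * np.log(1 - grid) - logB)

def gauss_kde_grid(grid, data, h):
    """Gaussian kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u * u).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

def interior_modes(y):
    """Count strict local maxima away from the grid endpoints."""
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))
```

On strongly bimodal data the fitted Beta parameters typically fall below 1, giving a U-shaped density with no interior peak at all.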
A heat kernel proof of the index theorem for deformation quantization
Karabegov, Alexander
2017-11-01
We give a heat kernel proof of the algebraic index theorem for deformation quantization with separation of variables on a pseudo-Kähler manifold. We use normalizations of the canonical trace density of a star product and of the characteristic classes involved in the index formula for which this formula contains no extra constant factors.
A multi-resolution approach to heat kernels on discrete surfaces
Vaxman, Amir; Ben-Chen, Mirela; Gotsman, Craig
2010-01-01
process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel
Free energy on a cycle graph and trigonometric deformation of heat kernel traces on odd spheres
Kan, Nahomi; Shiraishi, Kiyoshi
2018-01-01
We consider a possible ‘deformation’ of the trace of the heat kernel on odd dimensional spheres, motivated by the calculation of the free energy of a scalar field on a discretized circle. By using an expansion in terms of the modified Bessel functions, we obtain the values of the free energies after a suitable regularization.
Ambrogioni, Luca; Güçlü, Umut; van Gerven, Marcel A. J.; Maris, Eric
2017-01-01
This paper introduces the kernel mixture network, a new method for nonparametric estimation of conditional probability densities using neural networks. We model arbitrarily complex conditional densities as linear combinations of a family of kernel functions centered at a subset of training points. The weights are determined by the outer layer of a deep neural network, trained by minimizing the negative log likelihood. This generalizes the popular quantized softmax approach, which can be seen ...
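The modelling idea above can be sketched numerically. The snippet is an illustration only: random logits stand in for the output of the trained network's outer layer, and the bandwidth is assumed fixed; the density is a softmax-weighted mixture of Gaussian kernels centred at (a subset of) training points.

```python
import numpy as np

rng = np.random.default_rng(1)
centers = rng.normal(0.0, 1.0, size=20)  # kernels centred at training points
logits = rng.normal(size=20)             # stand-in for the network's outer layer
weights = np.exp(logits) / np.exp(logits).sum()  # softmax -> mixture weights
h = 0.3                                  # kernel bandwidth (assumed fixed here)

def density(x):
    # p(x) = sum_i w_i * N(x; c_i, h^2): a kernel mixture
    k = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / h) ** 2)
    k /= h * np.sqrt(2.0 * np.pi)
    return k @ weights

xs = np.linspace(-4.0, 4.0, 801)
p = density(xs)
total = p.sum() * (xs[1] - xs[0])  # Riemann check: the mixture integrates to ~1
print(total)
```

Because the weights form a simplex and each kernel is a normalized density, the mixture is a valid density by construction; training only has to fit the weights.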
Directory of Open Access Journals (Sweden)
YU Wenhao
2015-01-01
Full Text Available The distribution pattern and density of urban facility POIs are of great significance for infrastructure planning and urban spatial analysis. Kernel density estimation, which is commonly used to express these spatial characteristics, is superior to other density estimation methods (such as quadrat analysis or Voronoi-based methods) because it accounts for regional influence, in keeping with the first law of geography. However, traditional kernel density estimation is defined in Euclidean space, ignoring the fact that the service functions and interrelations of urban facilities operate over network path distance rather than Euclidean distance. Hence, this research proposes a computational model for network kernel density estimation, together with an extended form of the model that incorporates constraints. The work also discusses how the distance-attenuation threshold and the kernel's peak height affect the resulting density representation. A large-scale experiment on real data, analysing the distribution patterns of different POI types (random, sparse, regional-intensive and linear-intensive), examines the spatial distribution characteristics, influencing factors and service functions of POI infrastructure in the city.
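The core idea, evaluating the kernel over network (path) distance rather than Euclidean distance, can be sketched on a toy road graph. The four-node network, the POI placement, and the Epanechnikov kernel choice below are all illustrative assumptions, not the paper's model.

```python
import heapq
import numpy as np

# Toy road network: node -> list of (neighbor, edge length)
graph = {
    "A": [("B", 1.0), ("C", 2.0)],
    "B": [("A", 1.0), ("D", 2.0)],
    "C": [("A", 2.0), ("D", 1.0)],
    "D": [("B", 2.0), ("C", 1.0)],
}

def network_distances(src):
    # Dijkstra shortest-path distances along the network
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, np.inf):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, np.inf):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def network_kde(node, poi_nodes, h=2.0):
    # Kernel density at `node` from POIs, using path distance, not Euclidean
    d = network_distances(node)
    u = np.array([d[p] for p in poi_nodes]) / h
    return float(np.sum(np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)) / h)

pois = ["B", "C", "C"]
print(network_kde("A", pois), network_kde("D", pois))
```

Node D sits one edge from two of the three POIs along the network, so its estimated intensity exceeds that at A even though the Euclidean picture could differ.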
DEFF Research Database (Denmark)
Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard
2012-01-01
We investigate the use of kernel principal component analysis (PCA) and the inverse problem known as pre-image estimation in neuroimaging: i) We explore kernel PCA and pre-image estimation as a means for image denoising as part of the image preprocessing pipeline. Evaluation of the denoising...... procedure is performed within a data-driven split-half evaluation framework. ii) We introduce manifold navigation for exploration of a nonlinear data manifold, and illustrate how pre-image estimation can be used to generate brain maps in the continuum between experimentally defined brain states/classes. We...
Kernel and wavelet density estimators on manifolds and more general metric spaces
DEFF Research Database (Denmark)
Cleanthous, G.; Georgiadis, Athanasios; Kerkyacharian, G.
We consider the problem of estimating the density of observations taking values in classical or nonclassical spaces such as manifolds and more general metric spaces. Our setting is quite general but also sufficiently rich in allowing the development of smooth functional calculus with well localized...... spectral kernels, Besov regularity spaces, and wavelet type systems. Kernel and both linear and nonlinear wavelet density estimators are introduced and studied. Convergence rates for these estimators are established, which are analogous to the existing results in the classical setting of real...
Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling
Directory of Open Access Journals (Sweden)
Hyojin Lee
2015-01-01
Full Text Available Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely, Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares estimation of missing precipitation data through K-th nearest neighborhood (KNN) regression to the five different kernel estimations and their performance in simulating streamflow using the Soil Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
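The kernel-based interpolation above can be sketched as a Nadaraya-Watson style weighted average over neighbouring gauges. The gauge network below is hypothetical, and only the Epanechnikov kernel of the five listed is shown; this is an illustration of the approach, not the study's code.

```python
import numpy as np

def epanechnikov(u):
    # K(u) = 0.75 * (1 - u^2) on |u| <= 1, else 0
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def interpolate(target_xy, gauge_xy, gauge_p, bandwidth):
    # Kernel-weighted estimate at a gauge with missing data,
    # from the surrounding gauges' observed precipitation
    d = np.linalg.norm(gauge_xy - target_xy, axis=1)
    w = epanechnikov(d / bandwidth)
    if w.sum() == 0:
        raise ValueError("no gauges within bandwidth")
    return float(w @ gauge_p / w.sum())

# Hypothetical gauge network (coordinates in km, precipitation in mm)
xy = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0], [5.0, 5.0]])
p = np.array([10.0, 12.0, 8.0, 30.0])
est = interpolate(np.array([1.0, 1.0]), xy, p, bandwidth=3.0)
print(est)
```

The distant gauge (outside the bandwidth) receives zero weight, so an outlying station cannot distort the estimate; nearby stations dominate, with weights decaying smoothly in distance.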
International Nuclear Information System (INIS)
Uchida, Isao; Yamada, Yasuhiko; Yamashita, Takashi; Okigaki, Shigeyasu; Oyamada, Hiyoshimaru; Ito, Akira.
1995-01-01
In radiotherapy with radiopharmaceuticals, accurate estimates of the three-dimensional (3-D) distribution of absorbed dose are important in specifying the activity to be administered to patients, so as to deliver a prescribed absorbed dose to target volumes without exceeding the toxicity limit of normal tissues in the body. A calculation algorithm for this purpose has already been developed by the authors. In this algorithm, an accurate 3-D absorbed dose distribution is given by convolving the 3-D dose matrix for a unit cubic voxel containing unit cumulated activity, obtained by transforming a dose point kernel into a 3-D cubic dose matrix, with the 3-D cumulated activity distribution given at the same voxel size. However, the beta-dose point kernels that determine the accuracy of the 3-D absorbed dose estimates have differed among investigators. The purpose of this study is to elucidate how different beta-dose point kernels in water influence the absorbed dose distributions estimated with the authors' dose point kernel convolution method. Computer simulations were performed using the MIRD thyroid and lung phantoms under the assumption of a uniform activity distribution of 32P. Using beta-dose point kernels derived from Monte Carlo simulations (EGS-4 or ACCEPT computer codes), the differences among the point kernels produced only small differences in the mean and maximum absorbed dose estimates for the MIRD phantoms used. When mean and maximum absorbed doses were calculated using different cubic voxel sizes (4x4x4 mm and 8x8x8 mm) for the MIRD thyroid phantom, the maximum absorbed dose for the 4x4x4 mm voxel was approximately 7% greater than for the 8x8x8 mm voxel; this held for every beta-dose point kernel used in this study. In contrast, the difference in mean absorbed dose between the two voxel sizes was less than approximately 0.6% for each beta-dose point kernel. (author)
Some results from a Mellin transform expansion for the heat Kernel
International Nuclear Information System (INIS)
Malbouisson, A.P.C.; Simao, F.R.A.; Camargo Filho, A.F. de.
1988-01-01
The coefficients of a new heat kernel expansion are computed in the case of a differential operator containing a gauge field. The meromorphic structure of the generalized zeta function obtained from this expansion is compared with the one obtained in a preceding paper. The expansion is applied to anomalies, yielding a general formula for arbitrary dimension D; the special cases D=2 and D=3 are investigated. (author)
Calculation of heat-kernel coefficients and usage of computer algebra
International Nuclear Information System (INIS)
Bel'kov, A.A.; Lanev, A.V.; Schaale, A.
1995-01-01
The calculation of heat-kernel coefficients with the classical DeWitt algorithm is discussed. We present the explicit form of the coefficients up to h_5 in the general case and up to h_7^min for the minimal parts. The results are compared with the expressions in other papers. A method to optimize memory usage when working with large expressions in general-purpose computer algebra systems is proposed. 20 refs
One loop partition function of six dimensional conformal gravity using heat kernel on AdS
Energy Technology Data Exchange (ETDEWEB)
Lovreković, Iva [Institute for Theoretical Physics, Technische Universität Wien,Wiedner Hauptstrasse 8-10/136, A-1040 Vienna (Austria)
2016-10-13
We compute the heat kernel for the Laplacians of symmetric transverse traceless fields of arbitrary spin on an AdS background in an even number of dimensions, using the group theoretic approach introduced in http://dx.doi.org/10.1007/JHEP11(2011)010, and apply it to the partition function of six dimensional conformal gravity. The obtained partition function consists of the Einstein gravity part, the conformal ghost, and two modes that contain mass.
Regularized Pre-image Estimation for Kernel PCA De-noising
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2011-01-01
The main challenge in de-noising by kernel Principal Component Analysis (PCA) is the mapping of de-noised feature space points back into input space, also referred to as “the pre-image problem”. Since the feature space mapping is typically not bijective, pre-image estimation is inherently ill-posed...
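One practical answer to the ill-posed pre-image problem is the ridge-regularized inverse map available in scikit-learn's KernelPCA (a related, though not identical, regularization to the one studied in the paper). The sketch below de-noises points scattered around a circle; the data, kernel parameters, and regularization strength are all illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
# Noisy samples scattered around the unit circle (a 1-D manifold in 2-D)
t = rng.uniform(0.0, 2.0 * np.pi, 300)
noisy = np.c_[np.cos(t), np.sin(t)] + rng.normal(scale=0.15, size=(300, 2))

# fit_inverse_transform=True learns a ridge-regularized pre-image map,
# one standard answer to the non-bijective feature space mapping
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True, alpha=0.1)
denoised = kpca.inverse_transform(kpca.fit_transform(noisy))

# Distance from the circle before and after de-noising (the outcome
# depends on gamma, alpha and the retained component count)
err_noisy = np.abs(np.linalg.norm(noisy, axis=1) - 1.0).mean()
err_denoised = np.abs(np.linalg.norm(denoised, axis=1) - 1.0).mean()
print(err_noisy, err_denoised)
```

Keeping only the leading components discards feature-space directions dominated by noise; the learned inverse map then returns input-space points, sidestepping an explicit (and generally non-existent) exact pre-image.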
Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.
Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M
2015-05-01
Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology.
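The pitfall the paper starts from, a conventional KDE applied to autocorrelated tracking data, can be reproduced in a few lines. The Ornstein-Uhlenbeck-style track below is a stand-in for real relocation data, and the snippet demonstrates the problem setup only, not the AKDE estimator itself.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
n = 2000

# Autocorrelated track with stationary distribution N(0, 1) per axis
phi = 0.99
track = np.zeros((n, 2))
for i in range(1, n):
    track[i] = phi * track[i - 1] + np.sqrt(1 - phi**2) * rng.normal(size=2)

# Independent draws from the same stationary distribution (the IID ideal)
iid = rng.normal(size=(n, 2))

def area95(points):
    # Area of the 95% highest-density region of a conventional KDE
    kde = gaussian_kde(points.T)
    xs = np.linspace(-4, 4, 120)
    xx, yy = np.meshgrid(xs, xs)
    dens = kde(np.vstack([xx.ravel(), yy.ravel()]))
    cell = (xs[1] - xs[0]) ** 2
    order = np.sort(dens)[::-1]
    mass = np.cumsum(order) * cell
    idx = min(np.searchsorted(mass, 0.95), order.size - 1)
    return float((dens >= order[idx]).sum() * cell)

a_track, a_iid = area95(track), area95(iid)
print(a_track, a_iid)
```

With phi = 0.99 the 2000 fixes carry only a handful of effectively independent locations, so the conventional KDE typically hugs the visited path and reports a smaller range than the IID sample from the same distribution, which is the underestimation AKDE is designed to correct.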
Oskoueian, Ehsan; Abdullah, Norhani; Idrus, Zulkifli; Ebrahimi, Mahdi; Goh, Yong Meng; Shakeri, Majid; Oskoueian, Armin
2014-10-02
Palm kernel cake (PKC), the most abundant by-product of the oil palm industry, is believed to contain bioactive compounds with hepatoprotective potential. These compounds may serve as hepatoprotective agents which could help the poultry industry to alleviate adverse effects of heat stress on liver function in chickens. This study was performed to evaluate the hepatoprotective potential of PKC extract against heat-induced oxidative stress in chicken hepatocytes. The nature of the active metabolites and the possible mechanism involved were also investigated. The PKC extract possessed free radical scavenging activity, with values significantly (p < 0.05) lower than silymarin as the reference antioxidant. Heat-induced oxidative stress in chicken hepatocytes significantly (p < 0.05) impaired total protein, lipid peroxidation and antioxidant enzyme activities. Treatment of heat-stressed hepatocytes with PKC extract (125 μg/ml) and silymarin as positive control increased these values significantly (p < 0.05). Real-time PCR and western blot analyses revealed significant (p < 0.05) up-regulation of oxidative stress biomarkers, including the TNF-like, IFN-γ and IL-1β genes and NF-κB, COX-2, iNOS and Hsp70 protein expression, upon heat stress in chicken hepatocytes. The PKC extract and silymarin were able to alleviate the expression of all of these biomarkers in heat-stressed chicken hepatocytes. Gas chromatography-mass spectrometry analysis of the PKC extract showed the presence of fatty acids, phenolic compounds, sugar derivatives and other organic compounds, such as furfural, which could be responsible for the observed hepatoprotective activity. Palm kernel cake extract could thus be a potential agent to protect hepatocyte function under heat-induced oxidative stress.
Heat damage and in vitro starch digestibility of puffed wheat kernels.
Cattaneo, Stefano; Hidalgo, Alyssa; Masotti, Fabio; Stuknytė, Milda; Brandolini, Andrea; De Noni, Ivano
2015-12-01
The effect of processing conditions on heat damage, starch digestibility, release of advanced glycation end products (AGEs) and antioxidant capacity of puffed cereals was studied. The determination of several markers arising from the Maillard reaction proved pyrraline (PYR) and hydroxymethylfurfural (HMF) to be the most reliable indices of the heat load applied during puffing. The considerable heat load was evidenced by the high levels of both PYR (57.6-153.4 mg kg(-1) dry matter) and HMF (13-51.2 mg kg(-1) dry matter). For reasons of cost and simplicity, HMF appeared to be the most appropriate index for puffed cereals. Puffing influenced in vitro starch digestibility, with most of the starch (81-93%) hydrolyzed to maltotriose, maltose and glucose, whereas only limited amounts of AGEs were released. The relevant antioxidant capacity revealed by digested puffed kernels can be ascribed both to the newly formed Maillard reaction products and to the conditions adopted during in vitro digestion. Copyright © 2015 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Haorui Liu
2016-01-01
Full Text Available In car control systems, it is hard to measure some key vehicle states directly and accurately when running on the road, and the cost of the measurement is high as well. To address these problems, a vehicle state estimation method based on kernel principal component analysis and an improved Elman neural network is proposed. Combining a nonlinear vehicle model with three degrees of freedom (3 DOF: longitudinal, lateral, and yaw motion), this paper applies the method to the soft sensing of vehicle states. The simulation results of a double lane change tested by Matlab/SIMULINK cosimulation prove the KPCA-IENN algorithm (kernel principal component analysis with an improved Elman neural network) to be quick and precise when tracking the vehicle states within the nonlinear area. This method can meet the software performance requirements of vehicle state estimation in precision, tracking speed, noise suppression, and other aspects.
Automated voxelization of 3D atom probe data through kernel density estimation
International Nuclear Information System (INIS)
Srinivasan, Srikant; Kaluskar, Kaustubh; Dumpala, Santoshrupa; Broderick, Scott; Rajan, Krishna
2015-01-01
Identifying nanoscale chemical features from atom probe tomography (APT) data routinely involves adjustment of voxel size as an input parameter through visual supervision, making the final outcome user dependent, reliant on heuristic knowledge and potentially prone to error. This work utilizes kernel density estimators to select an optimal voxel size in an unsupervised manner for feature selection, in particular targeting the resolution of interfacial features and chemistries. The capability of this approach is demonstrated through analysis of the γ/γ′ interface in a Ni–Al–Cr superalloy. - Highlights: • Develop an approach for standardizing aspects of atom probe reconstruction. • Use kernel density estimators to select optimal voxel sizes in an unsupervised manner. • Perform interfacial analysis of a Ni–Al–Cr superalloy using the new automated approach. • Optimize voxel size to preserve the feature of interest while minimizing loss/noise.
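The general idea of tying voxel size to a density-estimation bandwidth rather than to visual supervision can be illustrated with Scott's multivariate rule. This is a generic bandwidth rule applied to synthetic atom positions, not the paper's estimator.

```python
import numpy as np

def scott_voxel_edge(points):
    # Scott's multivariate bandwidth rule, h = sigma * n^(-1/(d+4)),
    # averaged over dimensions, used here as a data-driven voxel edge length
    n, d = points.shape
    sigma = points.std(axis=0, ddof=1).mean()
    return float(sigma * n ** (-1.0 / (d + 4)))

rng = np.random.default_rng(3)
atoms = rng.uniform(0, 50, size=(100_000, 3))  # synthetic atom positions (nm)
edge = scott_voxel_edge(atoms)
print(edge)
```

The voxel edge then follows automatically from the point count and spread of the reconstruction, removing the user-chosen parameter the abstract identifies as a source of error; an unsupervised pipeline would refine this around the features of interest.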
Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K
2017-10-17
Quantitative computed tomography has been posed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways fall on the hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%, p < 0.05) than with the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%, p < 0.05) than with the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
Estimation of the applicability domain of kernel-based machine learning models for virtual screening
Directory of Open Access Journals (Sweden)
Fechner Nikolas
2010-03-01
Full Text Available Abstract Background The virtual screening of large compound databases is an important application of structural-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. Results We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemspace, in which the model gives reliable predictions, from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening
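A toy version of a kernel-based applicability score can make the idea concrete: here the score is the mean RBF kernel similarity to the k most similar training points, with hypothetical descriptor vectors. This is an illustration of the concept, not one of the paper's three formulations.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    # Gaussian (RBF) kernel similarity between rows of `a` and point `b`
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

def applicability_score(x, train, k=3):
    # Mean kernel similarity to the k most similar training points:
    # high -> query resembles the training set, low -> outside the domain
    sims = rbf(train, x)
    return float(np.sort(sims)[-k:].mean())

rng = np.random.default_rng(4)
train = rng.normal(0.0, 1.0, size=(200, 5))  # hypothetical descriptor vectors
s_in = applicability_score(np.zeros(5), train)       # query inside the data cloud
s_out = applicability_score(np.full(5, 4.0), train)  # query far outside it
print(s_in, s_out)
```

Thresholding such a score reproduces the paper's screening protocol in miniature: compounds with low scores are too dissimilar to the training set for the QSAR model's prediction to be trusted, and omitting them improves the reliability of the ranked output.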
Directory of Open Access Journals (Sweden)
A. Dauda
2017-02-01
Full Text Available This study investigated the effect of moisture content on the physical properties and specific heat capacity of neem (Azadirachta indica A. Juss) nut kernels. The major, intermediate and minor axial dimensions of the kernels increased from 1.04 to 1.23 cm, 0.42 to 0.6 cm, and 0.32 to 0.45 cm respectively, as the moisture content increased from 5.2 to 44.9% (db). The arithmetic and geometric mean diameters determined at the same moisture level were significantly different from each other, with the arithmetic mean diameter being higher. Over the same moisture range, one-thousand-kernel weight, true density, porosity, sphericity, roundness and surface area all increased linearly from 0.0987 to 0.1755 kg, 632 to 733 kg/m3, 6.42 to 32.14%, 41.3 to 47.5%, 22 to 36% and 13 to 24 cm2 respectively, while bulk density decreased from 591.4 to 497.4 kg/m3 with increasing moisture content. The angle of repose increased from 21.22 to 29.8° with increasing moisture content. The static coefficient of friction ranged from 0.41 to 0.61 on plywood with grains parallel to the direction of movement, from 0.19 to 0.24 on fiberglass, from 0.28 to 0.38 on hessian bag material, and from 0.25 to 0.33 on galvanized steel sheet. The specific heat of the kernels varied from 2738.1 to 4345.4 J/kg/°C over the same moisture range.
Wang, Gang; Wang, Yalin
2017-02-15
In this paper, we propose a heat kernel based regional shape descriptor that may be capable of better exploiting volumetric morphological information than other available methods, thereby improving statistical power on brain magnetic resonance imaging (MRI) analysis. The mechanism of our analysis is driven by the graph spectrum and the heat kernel theory, to capture the volumetric geometry information in the constructed tetrahedral meshes. In order to capture profound brain grey matter shape changes, we first use the volumetric Laplace-Beltrami operator to determine the point pair correspondence between white-grey matter and CSF-grey matter boundary surfaces by computing the streamlines in a tetrahedral mesh. Secondly, we propose multi-scale grey matter morphology signatures to describe the transition probability by random walk between the point pairs, which reflects the inherent geometric characteristics. Thirdly, a point distribution model is applied to reduce the dimensionality of the grey matter morphology signatures and generate the internal structure features. With the sparse linear discriminant analysis, we select a concise morphology feature set with improved classification accuracies. In our experiments, the proposed work outperformed the cortical thickness features computed by FreeSurfer software in the classification of Alzheimer's disease and its prodromal stage, i.e., mild cognitive impairment, on publicly available data from the Alzheimer's Disease Neuroimaging Initiative. The multi-scale and physics based volumetric structure feature may bring stronger statistical power than some traditional methods for MRI-based grey matter morphology analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
Calculation of the time resolution of the J-PET tomograph using kernel density estimation
Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
2017-06-01
In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.
A New Entropy Formula and Gradient Estimates for the Linear Heat Equation on Static Manifold
Directory of Open Access Journals (Sweden)
Abimbola Abolarinwa
2014-08-01
Full Text Available In this paper we prove a new monotonicity formula for the heat equation via a generalized family of entropy functionals. This family of entropy formulas generalizes both Perelman’s entropy for an evolving metric and Ni’s entropy on a static manifold. We show that this entropy satisfies a pointwise differential inequality for the heat kernel, the consequences of which are various gradient and Harnack estimates for all positive solutions to the heat equation on a compact manifold.
Directory of Open Access Journals (Sweden)
Shanshan Yang
Full Text Available Detection of dysphonia is useful for monitoring the progression of phonatory impairment for patients with Parkinson's disease (PD), and also helps assess the disease severity. This paper describes statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented by using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% of the voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area value of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that gender is insensitive to dysphonia detection, and that the sustained phonations of PD patients with minimal functional disability are more difficult to identify correctly.
Using kernel density estimates to investigate lymphatic filariasis in northeast Brazil
Medeiros, Zulma; Bonfim, Cristine; Brandão, Eduardo; Netto, Maria José Evangelista; Vasconcellos, Lucia; Ribeiro, Liany; Portugal, José Luiz
2012-01-01
After more than 10 years of the Global Program to Eliminate Lymphatic Filariasis (GPELF) in Brazil, advances have been seen, but the endemic disease persists as a public health problem. The aim of this study was to describe the spatial distribution of lymphatic filariasis in the municipality of Jaboatão dos Guararapes, Pernambuco, Brazil. An epidemiological survey was conducted in the municipality, and positive filariasis cases identified in this survey were georeferenced in point form, using the GPS. A kernel intensity estimator was applied to identify clusters with greater intensity of cases. We examined 23 673 individuals and 323 individuals with microfilaremia were identified, representing a mean prevalence rate of 1.4%. Around 88% of the districts surveyed presented cases of filarial infection, with prevalences of 0–5.6%. The male population was more affected by the infection, with 63.8% of the cases (P<0.005). Positive cases were found in all age groups examined. The kernel intensity estimator identified the areas of greatest intensity and least intensity of filarial infection cases. The case distribution was heterogeneous across the municipality. The kernel estimator identified spatial clusters of cases, thus indicating locations with greater intensity of transmission. The main advantage of this type of analysis lies in its ability to rapidly and easily show areas with the highest concentration of cases, thereby contributing towards planning, monitoring, and surveillance of filariasis elimination actions. Incorporation of geoprocessing and spatial analysis techniques constitutes an important tool for use within the GPELF. PMID:22943547
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
Troudi, Molka; Alimi, Adel M.; Saoudi, Samir
2008-12-01
The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation in the mean of neutrality of Tunisian Berber populations.
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
Directory of Open Access Journals (Sweden)
Samir Saoudi
2008-07-01
Full Text Available The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation in the mean of neutrality of Tunisian Berber populations.
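A minimal numerical version of the iterative plug-in idea can illustrate the role of J(f): estimate the curvature functional from a pilot density, then update the bandwidth. The Gaussian kernel, grid-based J(f) estimate, and Silverman starting value are assumptions; the paper's analytical approximation of J(f) is precisely what this brute-force loop replaces:

```python
import numpy as np

def gauss_kde(x, data, h):
    """Gaussian kernel density estimate of `data`, evaluated at points x."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def plugin_bandwidth(data, n_iter=3, grid_size=512):
    """Iterative plug-in bandwidth for a Gaussian kernel.

    Uses h = (R(K) / (mu_2(K)^2 J(f) n))^(1/5) with R(K) = 1/(2 sqrt(pi)),
    mu_2(K) = 1, and J(f) = integral of f''(x)^2, estimated numerically
    from a pilot density (Silverman's rule as the starting bandwidth).
    """
    n = len(data)
    h = 1.06 * data.std() * n ** (-0.2)                  # pilot bandwidth
    grid = np.linspace(data.min() - 3 * h, data.max() + 3 * h, grid_size)
    dx = grid[1] - grid[0]
    for _ in range(n_iter):
        f = gauss_kde(grid, data, h)
        d2 = np.gradient(np.gradient(f, grid), grid)     # numerical f''
        J = (d2 ** 2).sum() * dx                         # rectangle rule
        h = (1.0 / (2 * np.sqrt(np.pi) * J * n)) ** 0.2
    return h

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, 500)
h_opt = plugin_bandwidth(sample)   # near the normal-reference bandwidth here
```

For Gaussian data the loop settles close to the normal-reference value, while for multimodal data the estimated J(f) grows and the bandwidth shrinks accordingly.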
Heat kernel expansion for fermionic billiards in an external magnetic field
International Nuclear Information System (INIS)
Antoine, M.; Comtet, A.; Knecht, M.
1989-05-01
Using Seeley's heat kernel expansion, we compute the asymptotic density of states of the Dirac operator coupled to a magnetic field on a two-dimensional manifold with boundary (fermionic billiard). Local boundary conditions compatible with vector current conservation depend on a free parameter α. It is shown that the perimeter correction identically vanishes for α = 0. In that case, the next order constant term is found to be proportional to the Euler characteristic of the manifold. These results are independent of the external magnetic field and of the shape of the billiard, provided the boundary is sufficiently smooth. For the flat circular billiard, the constant term is found to be -1/12, in agreement with a numerical result by M.V. Berry and R.J. Mondragon (1987).
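For orientation, the small-t heat-trace expansion for a two-dimensional billiard of area A, perimeter L, and Euler characteristic χ(M) has the generic structure below, written here for the scalar Laplacian with Dirichlet (−) or Neumann (+) conditions; the Dirac-operator coefficients with the α-dependent boundary conditions are those computed in the record above, which finds the perimeter term cancelling at α = 0 and a constant term proportional to χ(M):

```latex
\operatorname{Tr}\, e^{-tH} \;\sim\; \frac{A}{4\pi t} \,\mp\, \frac{L}{8\sqrt{\pi t}} \,+\, \frac{\chi(M)}{6} \,+\, \mathcal{O}\!\left(\sqrt{t}\right), \qquad t \to 0^{+}.
```

Integrating term by term gives the corresponding Weyl-type counting function, whose constant term is the quantity reported as -1/12 for the circular billiard.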
Efficient 3D movement-based kernel density estimator and application to wildlife ecology
Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.
2014-01-01
We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
International Nuclear Information System (INIS)
Trapero, Juan R.
2016-01-01
In order to integrate solar energy into the grid it is important to predict the solar radiation accurately, since forecast errors can lead to significant costs. Recently, the growing number of statistical approaches to this problem has produced a prolific literature. In general terms, the main research discussion is centred on selecting the “best” forecasting technique in terms of accuracy. However, users of such forecasts require, beyond point forecasts, information about their variability in order to compute prediction intervals. In this work, we analyze kernel density estimation approaches, volatility forecasting models and combinations of the two in order to improve the performance of prediction intervals. The results show that an optimal combination in terms of prediction interval statistical tests can achieve the desired confidence level with a lower average interval width. Data from a facility located in Spain are used to illustrate our methodology. - Highlights: • This work explores uncertainty forecasting models to build prediction intervals. • Kernel density estimators, exponential smoothing and GARCH models are compared. • An optimal combination of methods provides the best results. • A good compromise between coverage and average interval width is shown.
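The KDE leg of such a scheme can be sketched on its own: smooth the empirical distribution of past forecast errors and read off quantiles as interval endpoints. The error sample and coverage level are hypothetical, and the combination with volatility (GARCH) models studied in the paper is omitted:

```python
import numpy as np

def kde_prediction_interval(errors, coverage=0.90, grid_size=1024):
    """Prediction interval from a Gaussian KDE of past forecast errors.

    Returns (lo, hi) quantiles of the smoothed error distribution, to be
    added to a point forecast. Bandwidth: Silverman's rule of thumb.
    """
    n = len(errors)
    h = 1.06 * errors.std() * n ** (-0.2)
    grid = np.linspace(errors.min() - 4 * h, errors.max() + 4 * h, grid_size)
    u = (grid[:, None] - errors[None, :]) / h
    pdf = np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                       # normalize the discrete CDF
    alpha = (1 - coverage) / 2
    lo = grid[np.searchsorted(cdf, alpha)]
    hi = grid[np.searchsorted(cdf, 1 - alpha)]
    return lo, hi

rng = np.random.default_rng(1)
errors = rng.normal(0.0, 2.0, 1000)      # hypothetical past forecast errors
lo, hi = kde_prediction_interval(errors, coverage=0.90)
```

Because the KDE makes no distributional assumption, skewed or heavy-tailed error histories yield asymmetric intervals automatically, which is the main attraction over Gaussian-error intervals.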
Hrycik, Janelle M.; Chassé, Joël; Ruddick, Barry R.; Taggart, Christopher T.
2013-11-01
Early life-stage dispersal influences recruitment and is of significance in explaining the distribution and connectivity of marine species. Motivations for quantifying dispersal range from biodiversity conservation to the design of marine reserves and the mitigation of species invasions. Here we compare estimates of real particle dispersion in a coastal marine environment with similar estimates provided by hydrodynamic modelling. We do so by using a system of magnetically attractive particles (MAPs) and a magnetic-collector array that provides measures of Lagrangian dispersion based on the time-integration of MAPs dispersing through the array. MAPs released as a point source in a coastal marine location dispersed through the collector array over a 5-7 d period. A virtual release and observed (real-time) environmental conditions were used in a high-resolution three-dimensional hydrodynamic model to estimate the dispersal of virtual particles (VPs). The number of MAPs captured throughout the collector array and the number of VPs that passed through each corresponding model location were enumerated and compared. Although VP dispersal reflected several aspects of the observed MAP dispersal, the comparisons demonstrated model sensitivity to the small-scale (random-walk) particle diffusivity parameter (Kp). The one-dimensional dispersal kernel for the MAPs had an e-folding scale estimate in the range of 5.19-11.44 km, while those from the model simulations were comparable at 1.89-6.52 km, and also demonstrated sensitivity to Kp. Variations among comparisons are related to the value of Kp used in modelling and are postulated to be related to MAP losses from the water column and (or) shear dispersion acting on the MAPs; a process that is constrained in the model. Our demonstration indicates a promising new way of 1) quantitatively and empirically estimating the dispersal kernel in aquatic systems, and 2) quantitatively assessing and (or) improving regional hydrodynamic
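The e-folding scale of a one-dimensional exponential dispersal kernel can be recovered from collector counts by a log-linear fit. The transect distances and counts below are synthetic stand-ins for the MAP collector data (and assume strictly positive counts):

```python
import numpy as np

def efolding_scale(distance, counts):
    """Least-squares e-folding scale of an exponential dispersal kernel.

    Fits log(counts) = log(C0) - distance / lam and returns lam, in the
    same units as `distance`.
    """
    slope, _intercept = np.polyfit(distance, np.log(counts), 1)
    return -1.0 / slope

# Hypothetical transect: counts decaying with a true e-folding scale of 8 km
distance = np.linspace(0.5, 20.0, 15)          # km from the release point
counts = 500.0 * np.exp(-distance / 8.0)
lam = efolding_scale(distance, counts)          # recovers 8 km on clean data
```

With noisy counts the same fit returns a scale in a band around the true value, which is how ranges like 5.19-11.44 km versus 1.89-6.52 km in the abstract can be compared directly.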
Kernel PLS Estimation of Single-trial Event-related Potentials
Rosipal, Roman; Trejo, Leonard J.
2004-01-01
Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative to the estimation of single-trial ERPs and improvement of ERP averages.
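A compact sketch of KPLS smoothing in the single-response case, NIPALS-style in the spirit of the general Rosipal-Trejo construction: the RBF kernel, its width, the toy "ERP-like" signal, and the component count are all assumptions for illustration, and only the training-set fit is shown:

```python
import numpy as np

def kpls_fit(K, Y, n_components):
    """Kernel PLS smoothing fit (single response), training-set only.

    K: centered (n, n) kernel matrix; Y: (n, 1) responses.
    Extracts orthonormal score vectors t = K u (u = current residual
    response), deflates K and Y, and returns fitted values T T' Y.
    """
    Kd, Yd = K.copy(), Y.copy()
    n = K.shape[0]
    T = []
    for _ in range(n_components):
        u = Yd[:, 0]                      # one response: no inner iteration
        t = Kd @ u
        t = t / np.linalg.norm(t)
        T.append(t)
        P = np.eye(n) - np.outer(t, t)    # deflate kernel and response
        Kd = P @ Kd @ P
        Yd = Yd - np.outer(t, t @ Yd)
    T = np.column_stack(T)
    return T @ (T.T @ Y)

def rbf_kernel(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Hypothetical noisy single-trial signal: a smooth waveform plus noise
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 100)[:, None]
clean = np.sin(2 * np.pi * x)                  # noise-free target
y = clean + 0.3 * rng.normal(size=(100, 1))
n = 100
H = np.eye(n) - np.ones((n, n)) / n            # kernel centering matrix
K = H @ rbf_kernel(x, gamma=50.0) @ H
yhat = kpls_fit(K, y - y.mean(), n_components=4) + y.mean()
```

Because each score is a kernel-smoothed direction driven by the response, a handful of components reconstructs the smooth waveform while absorbing little of the noise.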
Quantum Einstein gravity. Advancements of heat kernel-based renormalization group studies
Energy Technology Data Exchange (ETDEWEB)
Groh, Kai
2012-10-15
The asymptotic safety scenario allows a consistent theory of quantized gravity to be defined within the framework of quantum field theory. The central conjecture of this scenario is the existence of a non-Gaussian fixed point of the theory's renormalization group flow, which makes it possible to formulate renormalization conditions that render the theory fully predictive. Investigations of this possibility use an exact functional renormalization group equation as a primary non-perturbative tool. This equation implements Wilsonian renormalization group transformations, and is demonstrated to represent a reformulation of the functional integral approach to quantum field theory. As its main result, this thesis develops an algebraic algorithm which allows the renormalization group flow of gauge theories as well as gravity to be constructed systematically in arbitrary expansion schemes. In particular, it uses off-diagonal heat kernel techniques to efficiently handle the non-minimal differential operators which appear due to gauge symmetries. The central virtue of the algorithm is that no additional simplifications need to be employed, opening the possibility for more systematic investigations of the emergence of non-perturbative phenomena. As a by-product several novel results on the heat kernel expansion of the Laplace operator acting on general gauge bundles are obtained. The constructed algorithm is used to re-derive the renormalization group flow of gravity in the Einstein-Hilbert truncation, showing the manifest background independence of the results. The well-studied Einstein-Hilbert case is further advanced by taking the effect of a running ghost field renormalization on the gravitational coupling constants into account. A detailed numerical analysis reveals a further stabilization of the found non-Gaussian fixed point. Finally, the proposed algorithm is applied to the case of higher derivative gravity including all curvature squared interactions. This establishes an improvement
Quantum Einstein gravity. Advancements of heat kernel-based renormalization group studies
International Nuclear Information System (INIS)
Groh, Kai
2012-10-01
The asymptotic safety scenario allows a consistent theory of quantized gravity to be defined within the framework of quantum field theory. The central conjecture of this scenario is the existence of a non-Gaussian fixed point of the theory's renormalization group flow, which makes it possible to formulate renormalization conditions that render the theory fully predictive. Investigations of this possibility use an exact functional renormalization group equation as a primary non-perturbative tool. This equation implements Wilsonian renormalization group transformations, and is demonstrated to represent a reformulation of the functional integral approach to quantum field theory. As its main result, this thesis develops an algebraic algorithm which allows the renormalization group flow of gauge theories as well as gravity to be constructed systematically in arbitrary expansion schemes. In particular, it uses off-diagonal heat kernel techniques to efficiently handle the non-minimal differential operators which appear due to gauge symmetries. The central virtue of the algorithm is that no additional simplifications need to be employed, opening the possibility for more systematic investigations of the emergence of non-perturbative phenomena. As a by-product several novel results on the heat kernel expansion of the Laplace operator acting on general gauge bundles are obtained. The constructed algorithm is used to re-derive the renormalization group flow of gravity in the Einstein-Hilbert truncation, showing the manifest background independence of the results. The well-studied Einstein-Hilbert case is further advanced by taking the effect of a running ghost field renormalization on the gravitational coupling constants into account. A detailed numerical analysis reveals a further stabilization of the found non-Gaussian fixed point. Finally, the proposed algorithm is applied to the case of higher derivative gravity including all curvature squared interactions. This establishes an improvement of
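For reference, the exact functional renormalization group equation referred to in both records above is conventionally written (notation assumed: Γ_k the effective average action, Γ_k^{(2)} its second functional derivative, R_k the infrared regulator, t = ln k) as

```latex
\partial_t \Gamma_k \;=\; \tfrac{1}{2}\,\operatorname{STr}\!\left[\left(\Gamma_k^{(2)} + \mathcal{R}_k\right)^{-1}\partial_t \mathcal{R}_k\right],
\qquad t = \ln k ,
```

and the operator traces on its right-hand side are evaluated, for a Laplace-type operator Δ on a d-dimensional manifold, through the standard Seeley-DeWitt heat kernel expansion

```latex
\operatorname{Tr}\, e^{-s\Delta} \;\simeq\; \frac{1}{(4\pi s)^{d/2}} \sum_{n \ge 0} s^{n} \int \mathrm{d}^d x \,\sqrt{g}\;\operatorname{tr}\, a_{2n}(x).
```

The off-diagonal heat kernel techniques mentioned in the abstract extend this expansion to the non-minimal operators arising from gauge fixing.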
Kernel density estimation-based real-time prediction for respiratory motion
International Nuclear Information System (INIS)
Ruan, Dan
2010-01-01
Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability distribution (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the
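The two-stage idea (a joint KDE of covariate and response, then conditioning on the observed history) has a simple closed form for the conditional mean, which collapses to a kernel-weighted average of training responses. The sinusoidal "breathing" trace, history length, and lookahead below are hypothetical; real traces would also carry the inconsistent-sample behavior the abstract emphasizes:

```python
import numpy as np

def conditional_mean(x_new, train_cov, train_resp, h):
    """Conditional mean of a joint Gaussian-KDE of (covariate, response).

    Conditioning the joint KDE on the covariate and taking the mean of
    the resulting slice reduces to a Nadaraya-Watson weighted average.
    """
    sq = ((train_cov - x_new) ** 2).sum(axis=1) / h ** 2
    w = np.exp(-0.5 * sq)
    return (w @ train_resp) / w.sum()

# Hypothetical periodic "respiratory" trace sampled at 10 Hz
t = np.arange(0.0, 60.0, 0.1)
sig = np.sin(2 * np.pi * t / 4.0)              # 4 s breathing period
d, lag = 5, 5                                  # 5-sample history, 0.5 s lookahead
X = np.array([sig[i:i + d] for i in range(len(sig) - d - lag)])
y = np.array([sig[i + d - 1 + lag] for i in range(len(sig) - d - lag)])
pred = conditional_mean(X[450], X[:400], y[:400], h=0.1)   # predict a held-out step
```

Keeping the full conditional slice, rather than just its mean, is what gives the method the uncertainty description discussed above; the mean is only one of the estimators derivable from it.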
Kernel density estimation and transition maps of Moldavian Neolithic and Eneolithic settlement
Directory of Open Access Journals (Sweden)
Robin Brigand
2018-04-01
Full Text Available The data presented in this article are related to the research article entitled “Neo-Eneolithic settlement pattern and salt exploitation in Romanian Moldavia” (Brigand and Weller, 2018) [1]. Kernel density estimation (KDE) is used in order to move beyond the discrete distribution of sites and to enable us to work on a continuous surface that reflects the intensity of the occupation in the space. Maps of density per period – Neolithic I (Cris), Neolithic II (LBK), Eneolithic I (Precucuteni), Eneolithic II (Cucuteni A), Eneolithic III-IV (Cucuteni A-B and B) – are used to create maps of density difference (Figs. 1–4) in order to analyse the dynamic (either non-existent, negative or positive) between two chronological sequences.
Supersymmetry of noncompact MQCD-like membrane instantons and heat kernel asymptotics
International Nuclear Information System (INIS)
Belani, Kanishka; Kaura, Payal; Misra, Aalok
2006-01-01
We perform a heat kernel asymptotics analysis of the nonperturbative superpotential obtained from wrapping of an M2-brane around a supersymmetric noncompact three-fold embedded in a (noncompact) G2-manifold as obtained, the three-fold being the one relevant to domain walls in Witten's MQCD, in the limit of small 'ζ', a complex constant that appears in the Riemann surfaces relevant to defining the boundary conditions for the domain wall in MQCD. The MQCD-like configuration is interpretable, for small but non-zero ζ, as a noncompact/'large' open membrane instanton, and for vanishing ζ, as the type IIA D0-brane (for vanishing M-theory circle radius). We find that the eta-function Seeley-DeWitt coefficients vanish, and we get a perfect match between the zeta-function Seeley-DeWitt coefficients (up to terms quadratic in ζ) between the Dirac-type operator and one of the two Laplace-type operators figuring in the superpotential. Given the dissimilar forms of the bosonic and the square of the fermionic operators, this is an extremely nontrivial check, from a spectral analysis point of view, of the expected residual supersymmetry for the nonperturbative configurations in M-theory considered in this work.
Pencil kernel correction and residual error estimation for quality-index-based dose calculations
International Nuclear Information System (INIS)
Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael
2006-01-01
Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method
Network Kernel Density Estimation for the Analysis of Facility POI Hotspots
Directory of Open Access Journals (Sweden)
YU Wenhao
2015-12-01
Full Text Available The distribution pattern of urban facility POIs (points of interest) usually forms clusters (i.e. "hotspots") in urban geographic space. To detect this type of hotspot, most methods employ spatial density estimation based on Euclidean distance, ignoring the fact that the service function and interrelation of urban facilities operate along network path distance rather than conventional Euclidean distance. Using these methods, it is difficult to delimit the shape and size of a hotspot exactly and objectively. Therefore, this research adopts kernel density estimation based on network distance to compute the density of a hotspot and proposes a simple and efficient algorithm. The algorithm extends the 2D dilation operator to a 1D morphological operator, thus computing the density of a network unit. Evaluation experiments suggest that the algorithm is more efficient and scalable than existing algorithms. A case study on real POI data shows that the delineated hotspots can highlight the spatial characteristics of urban functions along traffic routes, thereby providing valuable spatial knowledge and information services for applications in regional planning, navigation, and geographic information inquiry.
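The key substitution (network path distance in place of Euclidean distance inside the kernel) can be shown with a toy graph. The street network, edge weights, triangular kernel, and node-level evaluation below are simplifications; the paper's algorithm works on network units (lixels) via the morphological operator, not on whole nodes:

```python
import heapq

def dijkstra(graph, src):
    """Shortest path distances from src in an undirected weighted graph."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def network_kde(graph, pois, h):
    """Kernel density at every node using network, not Euclidean, distance.

    Triangular (linear-decay) kernel of bandwidth h along the network.
    """
    density = {node: 0.0 for node in graph}
    for p in pois:
        dist = dijkstra(graph, p)
        for node in graph:
            d = dist.get(node, float("inf"))
            if d < h:
                density[node] += (1 - d / h) / h
    return density

# Hypothetical street network: nodes A-E, edge weights = segment lengths
graph = {
    "A": [("B", 1.0)],
    "B": [("A", 1.0), ("C", 1.0), ("D", 1.0)],
    "C": [("B", 1.0)],
    "D": [("B", 1.0), ("E", 3.0)],
    "E": [("D", 3.0)],
}
dens = network_kde(graph, pois=["A", "C"], h=2.5)
# density decays with path distance; E lies beyond the bandwidth of both POIs
```

A Euclidean kernel could assign E a substantial density if it were geometrically close to A or C; the network version correctly reflects that the only route to E is a long one.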
2009-01-01
Abstract: A kernel estimator of the conditional quantile is defined for a scalar response variable given a covariate taking values in a semi-metric space. The approach generalizes the median's L1-norm estimator. The almost complete consistency and asymptotic normality are stated.
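In the notation usual for this functional-data setting (a semi-metric d, a kernel K with bandwidth h_K for the covariate, and an integrated kernel H with bandwidth h_H for the response; symbols assumed here, not taken from the record), such an estimator is typically defined through the estimated conditional distribution function

```latex
\hat F(y \mid x) \;=\; \frac{\sum_{i=1}^{n} K\!\left(h_K^{-1}\, d(x, X_i)\right) H\!\left(h_H^{-1}(y - Y_i)\right)}
{\sum_{i=1}^{n} K\!\left(h_K^{-1}\, d(x, X_i)\right)},
\qquad
\hat q_\alpha(x) \;=\; \inf\{\, y : \hat F(y \mid x) \ge \alpha \,\},
```

with the conditional median, and hence the L1-norm estimator it generalizes, recovered at α = 1/2.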
DEFF Research Database (Denmark)
Varneskov, Rasmus T.
. Lastly, two small empirical applications to high frequency stock market data illustrate the bias reduction relative to competing estimators in estimating correlations, realized betas, and mean-variance frontiers, as well as the use of the new estimators in the dynamics of hedging....... problems. These transformations are all shown to inherit the desirable asymptotic properties of the generalized flat-top realized kernels. A simulation study shows that the class of estimators has a superior finite sample tradeoff between bias and root mean squared error relative to competing estimators...
International Nuclear Information System (INIS)
Li Heng; Mohan, Radhe; Zhu, X Ronald
2008-01-01
The clinical applications of kilovoltage x-ray cone-beam computed tomography (CBCT) have been compromised by the limited quality of CBCT images, which typically is due to a substantial scatter component in the projection data. In this paper, we describe an experimental method of deriving the scatter kernel of a CBCT imaging system. The estimated scatter kernel can be used to remove the scatter component from the CBCT projection images, thus improving the quality of the reconstructed image. The scattered radiation was approximated as depth-dependent, pencil-beam kernels, which were derived using an edge-spread function (ESF) method. The ESF geometry was achieved with a half-beam block created by a 3 mm thick lead sheet placed on a stack of slab solid-water phantoms. Measurements for ten water-equivalent thicknesses (WET) ranging from 0 cm to 41 cm were taken with (half-blocked) and without (unblocked) the lead sheet, and corresponding pencil-beam scatter kernels or point-spread functions (PSFs) were then derived without assuming any empirical trial function. The derived scatter kernels were verified with phantom studies. Scatter correction was then incorporated into the reconstruction process to improve image quality. For a 32 cm diameter cylinder phantom, the flatness of the reconstructed image was improved from 22% to 5%. When the method was applied to CBCT images for patients undergoing image-guided therapy of the pelvis and lung, the variation in selected regions of interest (ROIs) was reduced from >300 HU to <100 HU. We conclude that the scatter reduction technique utilizing the scatter kernel effectively suppresses the artifact caused by scatter in CBCT.
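The correction step (removing a convolution of the primary signal with a pencil-beam scatter kernel) can be sketched in one dimension. The Gaussian kernel shape and amplitude, the edge-spread-style profile, and the fixed-point iteration are illustrative assumptions, not the authors' measured depth-dependent PSFs:

```python
import numpy as np

def scatter_correct(projection, scatter_kernel, n_iter=5):
    """Remove scatter from a 1D projection profile.

    Model: measured = primary + primary * kernel (convolution), solved by
    the fixed-point iteration primary <- measured - primary * kernel,
    which converges when the kernel's total scatter fraction is < 1.
    """
    primary = projection.copy()
    for _ in range(n_iter):
        scatter = np.convolve(primary, scatter_kernel, mode="same")
        primary = np.clip(projection - scatter, 0.0, None)
    return primary

# Hypothetical pencil-beam scatter kernel: broad, low-amplitude Gaussian
x = np.arange(-20, 21)
psf = 0.01 * np.exp(-0.5 * (x / 8.0) ** 2)     # total scatter fraction ~0.2

true_primary = np.ones(200)
true_primary[:40] = 0.2                         # edge-spread-like step
measured = true_primary + np.convolve(true_primary, psf, mode="same")
est = scatter_correct(measured, psf)
```

The same idea applied per projection, with a depth-dependent kernel, is what flattens the reconstructed cylinder image from a 22% to a 5% variation in the study above.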
Irvine, Michael A; Hollingsworth, T Déirdre
2018-05-26
Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
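The adaptive-tolerance idea can be isolated in a rejection-style sketch. The toy model, prior, and shrinking-quantile rule below are assumptions; the paper's scheme is richer (SMC-style particle perturbation and a KDE-based treatment of the data), and this only illustrates the tolerance adaptation:

```python
import numpy as np

def abc_adaptive(observed, simulate, prior_sample, n_keep=100, n_rounds=4, batch=1000):
    """Rejection ABC with an adaptive tolerance.

    Each round keeps the n_keep parameter draws whose simulated summary is
    closest to the observed one, then tightens the tolerance to the worst
    kept distance, so later rounds only accept increasingly good matches.
    """
    eps, theta_kept = np.inf, None
    for _ in range(n_rounds):
        thetas, dists = [], []
        while len(thetas) < n_keep:            # draw until enough pass eps
            cand = prior_sample(batch)
            d = np.abs(np.array([simulate(t) for t in cand]) - observed)
            ok = d < eps
            thetas.extend(cand[ok])
            dists.extend(d[ok])
        thetas, dists = np.array(thetas), np.array(dists)
        order = np.argsort(dists)[:n_keep]
        theta_kept, eps = thetas[order], dists[order].max()
    return theta_kept

# Toy model: summary = sample mean of Normal(theta, 1); flat prior on theta
rng = np.random.default_rng(4)
true_theta = 2.0
observed = rng.normal(true_theta, 1.0, 50).mean()
posterior = abc_adaptive(
    observed,
    simulate=lambda th: rng.normal(th, 1.0, 50).mean(),
    prior_sample=lambda n: rng.uniform(-5.0, 5.0, n),
)
```

The appeal mirrored here is the one the abstract claims: no tolerance schedule has to be tuned by hand, since each round's tolerance is derived from the previous round's acceptances.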
Energy Technology Data Exchange (ETDEWEB)
Kropat, Georg, E-mail: georg.kropat@chuv.ch [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Bochud, Francois [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Jaboyedoff, Michel [Faculty of Geosciences and Environment, University of Lausanne, GEOPOLIS — 3793, 1015 Lausanne (Switzerland); Laedermann, Jean-Pascal [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Murith, Christophe; Palacios, Martha [Swiss Federal Office of Public Health, Schwarzenburgstrasse 165, 3003 Berne (Switzerland); Baechler, Sébastien [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Swiss Federal Office of Public Health, Schwarzenburgstrasse 165, 3003 Berne (Switzerland)
2015-02-01
Purpose: The aim of this study was to develop models based on kernel regression and probability estimation in order to predict and map indoor radon concentration (IRC) in Switzerland by taking into account all of the following: architectural factors, spatial relationships between the measurements, as well as geological information. Methods: We looked at about 240 000 IRC measurements carried out in about 150 000 houses. As predictor variables we included building type, foundation type, year of construction, detector type, geographical coordinates, altitude, temperature and lithology in the kernel estimation models. We developed predictive maps as well as a map of the local probability of exceeding 300 Bq/m³. Additionally, we developed a map of a confidence index in order to estimate the reliability of the probability map. Results: Our models were able to explain 28% of the variation in the IRC data. All variables added information to the model. The model estimation revealed a bandwidth for each variable, making it possible to characterize the influence of each variable on the IRC estimate. Furthermore, we assessed the mapping characteristics of kernel estimation overall as well as by municipality. Overall, our model reproduces spatial IRC patterns that were already obtained earlier. At the municipal level, we could show that our model accounts well for IRC trends within municipal boundaries. Finally, we found that different building characteristics result in different IRC maps. Maps corresponding to detached houses with concrete foundations indicate systematically smaller IRC than maps corresponding to farms with earth foundations. Conclusions: IRC mapping based on kernel estimation is a powerful tool for predicting and analyzing IRC on a large scale as well as at a local level. This approach makes it possible to develop tailor-made maps for different architectural elements and measurement conditions while accounting for geological information and spatial relations between IRC measurements.
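The prediction step of such a model reduces, in its simplest continuous-predictor form, to kernel-weighted averaging with one bandwidth per variable, which is what makes the per-variable bandwidths reported above interpretable as influence measures. The two predictors, the linear response surface, and the bandwidths are invented for illustration; the real model also used categorical building variables and lithology:

```python
import numpy as np

def kernel_regression(x0, X, y, bandwidths):
    """Nadaraya-Watson estimate with a product Gaussian kernel and one
    bandwidth per predictor column of X."""
    u = (X - x0) / bandwidths
    w = np.exp(-0.5 * (u ** 2).sum(axis=1))
    return (w @ y) / w.sum()

# Hypothetical radon-style data: predictors = (altitude in km, year index)
rng = np.random.default_rng(5)
X = np.column_stack([rng.uniform(0.3, 2.0, 400), rng.uniform(0.0, 1.0, 400)])
y = 100 + 80 * X[:, 0] - 30 * X[:, 1] + rng.normal(0, 5, 400)
bw = np.array([0.2, 0.3])                  # one bandwidth per variable
pred = kernel_regression(np.array([1.0, 0.5]), X, y, bw)
# expected level near 100 + 80*1.0 - 30*0.5 = 165
```

A small fitted bandwidth means the estimate changes quickly along that variable (strong influence), while a very large bandwidth effectively averages it out.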
What is hypomania? Tetrachoric factor analysis and kernel estimation of DSM-IV hypomanic symptoms.
Benazzi, Franco
2009-11-01
The DSM-IV definition of hypomania, which relies on clinical consensus and historical tradition, includes several "nonspecific" symptoms. The aim of this study was to identify the core symptoms of DSM-IV hypomania. In an outpatient private practice, 266 bipolar II disorder (BP-II) and 138 major depressive disorder (MDD) remitted patients were interviewed by a bipolar-trained psychiatrist, for different study goals. Patients were questioned, using the Structured Clinical Interview for DSM-IV, about the most common symptoms and duration of recent threshold and subthreshold hypomanic episodes. Data were recorded between 2002 and 2006. Four different samples, assessed with the same methodology, were pooled for the present analyses. Tetrachoric factor analysis was used to identify core hypomanic symptoms. Distribution of symptoms by kernel estimation was inspected for bimodality. Validity of core hypomania was tested by receiver operating characteristic (ROC) analysis. The distribution of subthreshold and threshold hypomanic episodes did not show bimodality. Tetrachoric factor analysis found 2 uncorrelated factors: factor 1 included the "classic" symptoms elevated mood, inflated self-esteem, decreased need for sleep, talkativeness, and increase in goal-directed activity (overactivity); factor 2 included the "nonspecific" symptoms irritable mood, racing/crowded thoughts, and distractibility. Factor 1 discriminatory accuracy for distinguishing BP-II versus MDD was high (ROC area = 0.94). The distribution of the 5-symptom episodes of factor 1 showed clear-cut bimodality. Similar results were found for episodes limited to 3 behavioral symptoms of factor 1 (decreased need for sleep, talkativeness, and overactivity) and 4 behavioral symptoms of factor 1 (adding elevated mood), with high discriminatory accuracy. A core, categorical DSM-IV hypomania was found that included 3 to 5 symptoms, ie, behavioral symptoms and elevated mood. Behavioral symptoms (overactivity domain
Influence Function and Robust Variant of Kernel Canonical Correlation Analysis
Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping
2017-01-01
Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger
2011-01-01
In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...
Yang, Shanshan; Zheng, Fang; Luo, Xin; Cai, Suxian; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Chen, Jian; Krishnan, Sridhar
2014-01-01
Detection of dysphonia is useful for monitoring the progression of phonatory impairment for patients with Parkinson’s disease (PD), and also helps assess the disease severity. This paper describes the statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented by using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher’s linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area value of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that gender is insensitive to dysphonia detection, and the sustained phonations of PD patients with minimal functional disability are more difficult to be correctly identified. PMID:24586406
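The KDE-plus-MAP portion of this pipeline can be sketched on its own: class-conditional densities estimated by KDE in the reduced 2D feature space, combined with priors under the MAP rule. The SFS/KPCA reduction is skipped and the two "vocal feature" clouds are simulated, so only the decision rule mirrors the paper:

```python
import numpy as np

def kde_logpdf(points, data, h):
    """Log of a 2D Gaussian-kernel density estimate at each query point."""
    diff = points[:, None, :] - data[None, :, :]
    sq = (diff ** 2).sum(axis=2) / h ** 2
    dens = np.exp(-0.5 * sq).sum(axis=1) / (len(data) * 2 * np.pi * h ** 2)
    return np.log(dens + 1e-300)           # guard against log(0)

def map_classify(points, class_data, priors, h=0.3):
    """MAP rule: argmax over classes of log p(x | c) + log P(c)."""
    scores = np.stack([kde_logpdf(points, d, h) + np.log(p)
                       for d, p in zip(class_data, priors)], axis=1)
    return scores.argmax(axis=1)

# Hypothetical 2D features after dimension reduction: two overlapping classes
rng = np.random.default_rng(6)
healthy = rng.normal([-1.0, 0.0], 0.7, (150, 2))
patient = rng.normal([1.0, 0.5], 0.7, (150, 2))
test_pts = np.vstack([rng.normal([-1.0, 0.0], 0.7, (50, 2)),
                      rng.normal([1.0, 0.5], 0.7, (50, 2))])
labels = np.repeat([0, 1], 50)
pred = map_classify(test_pts, [healthy, patient], priors=[0.5, 0.5])
accuracy = float((pred == labels).mean())
```

Because the KDE places no parametric assumption on the class-conditional densities, the MAP boundary can bend around irregular clusters, which is the flexibility credited for its edge over FLDA in the study.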
International Nuclear Information System (INIS)
Parker, Leonard; Vanzella, Daniel A.T.
2004-01-01
We investigate the possibility that the late acceleration observed in the rate of expansion of the Universe is due to vacuum quantum effects arising in curved spacetime. The theoretical basis of the vacuum cold dark matter (VCDM), or vacuum metamorphosis, cosmological model of Parker and Raval is reexamined and improved. We show, by means of a manifestly nonperturbative approach, how the infrared behavior of the propagator (related to the large-time asymptotic form of the heat kernel) of a free scalar field in curved spacetime leads to nonperturbative terms in the effective action similar to those appearing in the earlier version of the VCDM model. The asymptotic form that we adopt for the propagator or heat kernel at large proper time s is motivated by, and consistent with, particular cases where the heat kernel has been calculated exactly, namely in de Sitter spacetime, in the Einstein static universe, and in the linearly expanding spatially flat Friedmann-Robertson-Walker (FRW) universe. This large-s asymptotic form generalizes somewhat the one suggested by the Gaussian approximation and the R-summed form of the propagator that earlier served as a theoretical basis for the VCDM model. The vacuum expectation value for the energy-momentum tensor of the free scalar field, obtained through variation of the effective action, exhibits a resonance effect when the scalar curvature R of the spacetime reaches a particular value related to the mass of the field. Modeling our Universe by an FRW spacetime filled with classical matter and radiation, we show that the back reaction caused by this resonance drives the Universe through a transition to an accelerating expansion phase, very much in the same way as originally proposed by Parker and Raval. Our analysis includes higher derivatives that were neglected in the earlier analysis, and takes into account the possible runaway solutions that can follow from these higher-derivative terms. We find that the runaway solutions do
A shortest-path graph kernel for estimating gene product semantic similarity
Directory of Open Access Journals (Sweden)
Alvarez Marco A
2011-07-01
Background: Existing methods for calculating semantic similarity between gene products using the Gene Ontology (GO) often rely on external resources that are not part of the ontology. Consequently, changes in these external resources, such as biased term distributions caused by shifts in hot research topics, will affect the calculation of semantic similarity. One way to avoid this problem is to use semantic methods that are "intrinsic" to the ontology, i.e. independent of external knowledge. Results: We present a shortest-path graph kernel (spgk) method that relies exclusively on the GO and its structure. In spgk, a gene product is represented by an induced subgraph of the GO, which consists of all the GO terms annotating it. A shortest-path graph kernel is then used to compute the similarity between two graphs. In a comprehensive evaluation using a benchmark dataset, spgk compares favorably with other methods that depend on external resources. Compared with simUI, a method that is also intrinsic to the GO, spgk achieves slightly better results on the benchmark dataset. Statistical tests show that the improvement is significant when the resolution and the EC similarity correlation coefficient are used to measure performance, but insignificant when the Pfam similarity correlation coefficient is used. Conclusions: spgk uses a graph kernel method in polynomial time to exploit the structure of the GO to calculate semantic similarity between gene products. It provides an alternative to both methods that use external resources and "intrinsic" methods with comparable performance.
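A toy version of the shortest-path graph-kernel idea can be sketched as follows. The GO-like DAG, the term identifiers, and the normalization are hypothetical simplifications rather than the exact spgk of the paper: each gene product becomes the induced subgraph of its annotating terms, and similarity counts shortest paths with matching endpoints and lengths.

```python
from collections import deque

# Toy GO-like DAG (term ids hypothetical); edges point child -> parent.
edges = {
    "GO:1": [], "GO:2": ["GO:1"], "GO:3": ["GO:1"],
    "GO:4": ["GO:2"], "GO:5": ["GO:2", "GO:3"], "GO:6": ["GO:3"],
}

def induced_subgraph(terms):
    # Undirected adjacency restricted to the annotating terms.
    adj = {t: set() for t in terms}
    for t in terms:
        for p in edges[t]:
            if p in terms:
                adj[t].add(p); adj[p].add(t)
    return adj

def shortest_paths(adj):
    # BFS from every node; returns {(u, v): length} with u < v.
    sp = {}
    for s in adj:
        dist = {s: 0}; q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1; q.append(v)
        for t, d in dist.items():
            if s < t:
                sp[(s, t)] = d
    return sp

def spgk(terms_a, terms_b):
    # Count shortest paths identical in endpoints and length, normalized
    # (a simplified stand-in for the kernel of the paper).
    pa = shortest_paths(induced_subgraph(terms_a))
    pb = shortest_paths(induced_subgraph(terms_b))
    raw = sum(1 for k, d in pa.items() if pb.get(k) == d)
    return raw / max(1, (len(pa) * len(pb)) ** 0.5)

gene1 = {"GO:1", "GO:2", "GO:4"}
gene2 = {"GO:1", "GO:2", "GO:5"}
print(spgk(gene1, gene1), spgk(gene1, gene2))  # → 1.0 0.333...
```

Self-similarity is 1 by construction; partially overlapping annotation sets score between 0 and 1.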
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the kernel eigenvectors by importance in terms of entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation of the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
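The KECA step that OKECA builds on — ranking kernel eigenvectors by their contribution to a Renyi entropy estimate rather than by eigenvalue — can be sketched as below. The data and kernel parameter are illustrative, and the OKECA rotation itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))

def rbf(A, gamma=0.3):
    d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X)
vals, vecs = np.linalg.eigh(K)          # ascending eigenvalues
vals, vecs = vals[::-1], vecs[:, ::-1]  # descending, as in PCA

# KECA ranking: axis i contributes (sqrt(lam_i) * 1^T e_i)^2 / N to the
# Renyi entropy estimate; sort by that instead of by variance (eigenvalue).
entropy = (np.sqrt(np.abs(vals)) * vecs.sum(axis=0)) ** 2 / len(X)
order = np.argsort(entropy)[::-1]

keca_axes = order[:2]          # two most entropy-preserving components
features = vecs[:, keca_axes] * np.sqrt(np.abs(vals[keca_axes]))
print("entropy-ranked axes:", keca_axes, "feature shape:", features.shape)
```

The entropy ranking can differ from the eigenvalue ranking, which is exactly the point of KECA.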
Directory of Open Access Journals (Sweden)
Delong Feng
2016-05-01
Remaining useful life estimation with the prognostics and health management technique is a complicated and difficult research question for maintenance. In this article, we consider the problem of prognostics modeling and estimation of a turbofan engine under complicated circumstances and propose a kernel principal component analysis-based degradation model and remaining useful life estimation method for such aircraft engines. We first analyze the output data created by a turbofan engine thermodynamic simulation based on the kernel principal component analysis method and then distinguish the qualitative and quantitative relationships between the key factors. Next, we build a degradation model for the engine fault based on the following assumptions: the engine has only constant failure (i.e., no sudden failure is included), and the engine follows a Wiener process, in which a covariate stands for the engine system drift. To predict the remaining useful life of the turbofan engine, we built a health index based on the degradation model and used the method of maximum likelihood and the data from the thermodynamic simulation model to estimate the parameters of this degradation model. Through the data analysis, we obtained a trend model of the regression curve that fits the actual statistical data. Based on the predicted health index model and the data trend model, we estimate the remaining useful life of the aircraft engine as the index reaches zero. Finally, a case study involving engine simulation data demonstrates the precision and performance advantages of the proposed prediction method: its precision can reach 98.9% and its average precision is 95.8%.
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.
2017-11-01
There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, td. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, td0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
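For the 1-D slab, the time-partitioning idea can be illustrated with the leading early-time error-function term and a two-term late-time exponential series for the average dimensionless temperature; the switchover value below is illustrative rather than the paper's optimized td0.

```python
import math

def slab_early(td):
    # Early-time (error-function-series) leading term: 1 - 2*sqrt(td/pi).
    return 1.0 - 2.0 * math.sqrt(td / math.pi)

def slab_late(td, terms=2):
    # Late-time series: (8/pi^2) * sum exp(-(2n+1)^2 pi^2 td / 4) / (2n+1)^2.
    s = 0.0
    for n in range(terms):
        k = 2 * n + 1
        s += math.exp(-k * k * math.pi ** 2 * td / 4.0) / (k * k)
    return 8.0 / math.pi ** 2 * s

TD0 = 0.1  # illustrative switchover time (the paper optimizes this choice)

def slab_combined(td):
    # Time-partitioned solution: early-time series below TD0, late-time above.
    return slab_early(td) if td < TD0 else slab_late(td)

print(f"mismatch at switchover: {abs(slab_early(TD0) - slab_late(TD0)):.2e}")
```

Near td = 0.1 the two truncated series already agree to a few parts in 10^5, which is why a single switchover time gives a smooth combined solution.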
Ingacheva, Anastasia; Chukalina, Marina; Khanipov, Timur; Nikolaev, Dmitry
2018-04-01
Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using the tomography technique. The PSF reconstruction result strongly depends on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task. We use the algebraic reconstruction technique (ART algorithm) as the starting algorithm and introduce regularization. We use the conjugate gradient method for the numerical implementation of the proposed approach. The algorithm is tested using a dataset which contains 9 kernels extracted from real photographs by Adobe, where the point spread function is known. We also investigate the influence of noise on the quality of image reconstruction and how the number of projections influences the reconstruction error.
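A minimal stand-in for the ART step can be sketched with classical Kaczmarz sweeps plus a mild shrinkage as a regularization placeholder; the paper's regularizer and conjugate-gradient implementation are more elaborate, and the system below is a tiny synthetic example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Small synthetic "tomography" system: A maps a 4-pixel PSF to 6 projections.
A = rng.normal(size=(6, 4))
x_true = np.array([0.1, 0.6, 0.25, 0.05])
b = A @ x_true

def art(A, b, sweeps=200, relax=0.5, reg=1e-4):
    # Kaczmarz-style ART sweeps; each row update projects x toward the
    # hyperplane a_i . x = b_i, then a ridge-type shrinkage is applied.
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
        x *= 1.0 / (1.0 + reg)   # shrink toward zero each sweep
    return x

x = art(A, b)
print("recovered:", np.round(x, 3))
```

On this consistent, noise-free system the iteration recovers the true kernel up to the small bias introduced by the shrinkage.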
Directory of Open Access Journals (Sweden)
A.R Salari Kia
2014-04-01
Pistachio has a special ranking among Iranian agricultural products, and Iran is known as the largest producer and exporter of pistachio in the world. Agricultural products undergo different thermal treatments during storage and processing, and designing all these processes requires thermal parameters of the products, such as the specific heat capacity. Given the importance of pistachio processing as an exportable product, in this study the specific heat capacity of the nut and kernel of two varieties of Iranian pistachio (Kalle-Ghochi and Badami) was investigated at four levels of moisture content (the initial moisture content of 5%, and 15%, 25% and 40% w.b.) and three levels of temperature (40, 50 and 60°C). In both varieties, the differences between the data were significant at the 1% probability level; however, the effect of moisture content was greater than that of temperature. The results indicated that the specific heat capacity of both nuts and kernels increases logarithmically with moisture content and linearly with temperature. This parameter varied for the nut and kernel of the Kalle-Ghochi and Badami varieties within the ranges of 1.039-2.936 kJ kg-1 K-1, 1.236-3.320 kJ kg-1 K-1, 0.887-2.773 kJ kg-1 K-1 and 0.811-2.914 kJ kg-1 K-1, respectively. Moreover, for any given level of temperature, the specific heat capacity of kernels was higher than that of nuts. Finally, regression models with high R2 values were developed to predict the specific heat capacity of the pistachio varieties as a function of moisture content and temperature.
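The reported functional form — logarithmic in moisture content, linear in temperature — suggests a regression model of the type cp = a + b·ln(MC) + c·T. The sketch below fits such a model by least squares to hypothetical calibration points, not the paper's data.

```python
import numpy as np

# Hypothetical calibration points (not the paper's data): specific heat
# (kJ kg^-1 K^-1) at moisture contents (% w.b.) and temperatures (deg C),
# following the reported trend: logarithmic in MC, linear in T.
mc = np.array([5, 15, 25, 40, 5, 15, 25, 40], dtype=float)
T = np.array([40, 40, 40, 40, 60, 60, 60, 60], dtype=float)
cp = 0.4 + 0.9 * np.log(mc) + 0.01 * T \
     + np.random.default_rng(3).normal(0, 0.02, 8)

# Least-squares fit of cp = a + b*ln(MC) + c*T.
design = np.column_stack([np.ones_like(mc), np.log(mc), T])
coef, *_ = np.linalg.lstsq(design, cp, rcond=None)
pred = design @ coef
r2 = 1 - ((cp - pred) ** 2).sum() / ((cp - cp.mean()) ** 2).sum()
print(f"a={coef[0]:.3f}, b={coef[1]:.3f}, c={coef[2]:.4f}, R^2={r2:.3f}")
```

With real measurements the same design matrix would be built from the experimental grid of moisture contents and temperatures.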
Rafal Podlaski; Francis A. Roesch
2014-01-01
Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...
Choi, Sae Il
2009-01-01
This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…
Asymptotic normality of kernel estimator of $\psi$-regression function for functional ergodic data
Laksaci ALI; Benziadi Fatima; Gheriballak Abdelkader
2016-01-01
In this paper we consider the problem of the estimation of the $\psi$-regression function when the covariates take values in an infinite dimensional space. Our main aim is to establish, under a stationary ergodic process assumption, the asymptotic normality of this estimate.
Residual stresses estimation in tubes after rapid heating of surface
International Nuclear Information System (INIS)
Serikov, S.V.
1992-01-01
Results are presented on the estimation of residual stresses in tubes of steel types ShKh15, EhP836 and 12KIMF after heating by burning a pyrotechnic substance inside the tubes. The external tube surface was heated to 400-450 deg C by this treatment. The axial stress distribution over the tube wall thickness was determined for the initial state, after routine heat treatment, and after the pyrotechnic heating. Heating from the inner surface was shown to substantially decrease axial stresses in the tubes.
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger
We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...
Estimating heat-to-heat variation from a statistician's point of view
International Nuclear Information System (INIS)
Hebble, T.L.
1976-01-01
Heat-to-heat variability is the change in results that occurs when the same tests under the same conditions are applied to samples from different heats of the same material. Heat-to-heat variability reflects, among other things, differences in chemistry and in processing history. Published Japanese tensile and creep tests on types 304 and 316 stainless steel tube are used to illustrate the analysis of variance technique as a tool for isolating heat-to-heat variation. The importance of the underlying model and the role of replication are indicated. Finally, confidence intervals and tolerance limits are computed from numerical estimates of heat-to-heat variation. 17 tables
DEFF Research Database (Denmark)
Varneskov, Rasmus T.
2014-01-01
-top estimators are shown to be consistent, asymptotically unbiased, and mixed Gaussian at the optimal rate of convergence, n1/4. Exact bound on lower order terms are obtained using maximal inequalities and these are used to derive a conservative, MSE-optimal flat-top shrinkage. Additionally, bounds...
Directory of Open Access Journals (Sweden)
Nesseim, TDT.
2017-01-01
Jatropha curcas is a tropical plant belonging to the Euphorbiaceae family whose cultivation has been widely promoted in recent years for the production of biofuels. The kernel of the seed contains approximately 55% lipid in dry matter, and the meal obtained could be an exceptional source of protein for family poultry farming after treatments to remove toxic and anti-nutritional compounds. The feed intake and growth performance of J. curcas kernel meal (JKM), obtained after partial physico-chemical de-oiling combined or not with heating, were evaluated in broiler chickens and chicks. Sixty unsexed 30-day-old broiler chickens, divided into three groups, as well as twenty 1-day-old broiler chicks, divided into two groups, were used in two experiments. In experiment 1, jatropha kernel was de-oiled and incorporated into a control fattening feed at 40 and 80 g/kg (diets 4JKM1 and 8JKM1). In experiment 2, the jatropha kernel meal obtained in experiment 1 was heat treated and incorporated into a growing diet at 80 g/kg (diet 8JKM2). Daily dietary intake as well as weight gain of the animals were affected by the incorporation of jatropha kernel meal in the ration. In experiment 1, average daily feed intakes (ADFI1) of 139.2, 55.2 and 23.4 g/day/animal and average daily weight gains (ADWG1) of 61.9, 18.5 and -7.7 g/animal were obtained for the groups fed diets 0JKM1, 4JKM1 and 8JKM1, respectively. In experiment 2, average daily feed intakes (ADFI2) of 18.7 and 3.1 g/day/animal and average daily weight gains (ADWG2) of 7.1 and 1.9 g/animal were obtained for the groups fed diets 0JKM2 and 8JKM2, respectively. In both experiments, the feed conversion ratio (FCR) was also affected by the dietary treatments, and the overall mortality rate increased with the level of jatropha kernel meal in the diet.
Ha, Jae-Won; Kang, Dong-Hyun
2015-07-01
The aim of this study was to investigate the efficacy of near-infrared radiation (NIR) heating combined with lactic acid (LA) sprays for inactivating Salmonella enterica serovar Enteritidis on almond and pine nut kernels and to elucidate the mechanisms of the lethal effect of the NIR-LA combined treatment. Also, the effect of the combination treatment on product quality was determined. Separately prepared S. Enteritidis phage type (PT) 30 and non-PT 30 S. Enteritidis cocktails were inoculated onto almond and pine nut kernels, respectively, followed by treatments with NIR or 2% LA spray alone, NIR with distilled water spray (NIR-DW), and NIR with 2% LA spray (NIR-LA). Although surface temperatures of nuts treated with NIR were higher than those subjected to NIR-DW or NIR-LA treatment, more S. Enteritidis survived after NIR treatment alone. The effectiveness of NIR-DW and NIR-LA was similar, but significantly more sublethally injured cells were recovered from NIR-DW-treated samples. We confirmed that the enhanced bactericidal effect of the NIR-LA combination may not be attributable to cell membrane damage per se. NIR heat treatment might allow S. Enteritidis cells to become permeable to applied LA solution. The NIR-LA treatment (5 min) did not significantly (P > 0.05) cause changes in the lipid peroxidation parameters, total phenolic contents, color values, moisture contents, and sensory attributes of nut kernels. Given the results of the present study, NIR-LA treatment may be a potential intervention for controlling food-borne pathogens on nut kernel products. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Anthropogenic heat flux estimation from space
Chrysoulakis, Nektarios; Marconcini, Mattia; Gastellu-Etchegorry, Jean Philippe; Grimmond, C.S.B.; Feigenwinter, Christian; Lindberg, Fredrik; Frate, Del Fabio; Klostermann, Judith; Mitraka, Zina; Esch, Thomas; Landier, Lucas; Gabey, Andy; Parlow, Eberhard; Olofson, Frans
2016-01-01
H2020-Space project URBANFLUXES (URBan ANthropogenic heat FLUX from Earth observation Satellites) investigates the potential of the Copernicus Sentinels to retrieve the anthropogenic heat flux, a key component of the Urban Energy Budget (UEB). URBANFLUXES advances the current knowledge of the impacts
ANthropogenic heat FLUX estimation from Space
Chrysoulakis, Nektarios; Marconcini, Mattia; Gastellu-Etchegorry, Jean Philippe; Grimmond, C.S.B.; Feigenwinter, Christian; Lindberg, Fredrik; Frate, Del Fabio; Klostermann, Judith; Mitraka, Zina; Esch, Thomas; Landier, Lucas; Gabey, Andy; Parlow, Eberhard; Olofson, Frans
2017-01-01
The H2020-Space project URBANFLUXES (URBan ANthropogenic heat FLUX from Earth observation Satellites) investigates the potential of the Copernicus Sentinels to retrieve the anthropogenic heat flux, as a key component of the Urban Energy Budget (UEB). URBANFLUXES advances the current knowledge of the
Directory of Open Access Journals (Sweden)
Wenhao Yu
The urban facility, one of the most important service providers, is usually represented by sets of points in GIS applications using the POI (Point of Interest) model associated with certain human social activities. Knowledge about the distribution intensity and pattern of facility POIs is of great significance in spatial analysis, including urban planning, business location choice and social recommendations. Kernel Density Estimation (KDE), an efficient spatial statistics tool facilitating the processes above, plays an important role in spatial density evaluation, because the KDE method considers the decay impact of services and enriches the information from a very simple input scatter plot to a smooth output density surface. However, traditional KDE is mainly based on the Euclidean distance, ignoring the fact that in an urban street network the service function of a POI is carried out over a network-constrained structure rather than in a Euclidean continuous space. To address this question, this study proposes a computational method for KDE on a network and adopts a new visualization method using a 3-D "wall" surface. Some real conditional factors are also taken into account, such as traffic capacity, road direction and facility difference. The proposed method is applied to real POI data from Shenzhen, China, to depict the distribution characteristics of services under the impact of multiple factors.
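The network-constrained KDE idea can be sketched by replacing Euclidean distance with shortest-path distance along the street graph. The toy network, POI placement, kernel choice, and bandwidth below are illustrative assumptions, not the paper's implementation.

```python
import heapq
from collections import defaultdict

# Toy street network: undirected weighted edges (lengths hypothetical).
edges = [("A", "B", 1.0), ("B", "C", 1.0), ("C", "D", 2.0), ("B", "E", 1.5)]
graph = defaultdict(list)
for u, v, w in edges:
    graph[u].append((v, w)); graph[v].append((u, w))

def network_distances(src):
    # Dijkstra shortest-path distances along the network.
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def network_kde(pois, node, bandwidth=2.0):
    # Epanechnikov-type kernel over network distance instead of Euclidean.
    dist = network_distances(node)
    dens = 0.0
    for p in pois:
        d = dist.get(p, float("inf"))
        if d < bandwidth:
            dens += 0.75 * (1 - (d / bandwidth) ** 2) / bandwidth
    return dens

pois = ["A", "B", "B"]          # POIs snapped to network nodes
for node in ["A", "B", "C", "D"]:
    print(node, round(network_kde(pois, node), 3))
```

Density is highest near the POI cluster and drops to zero for nodes farther than one bandwidth along the network, even if they are close in straight-line distance.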
Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.
2016-06-01
Risk management stakeholders on highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphic and volcano-structural data, as well as on the geomorphological analysis of the bathymetric data of the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; main faults have been taken into account only in those cases where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures.
Famurewa, Ademola C; Nwankwo, Onyebuchi E; Folawiyo, Abiola M; Igwe, Emeka C; Epete, Michael A; Ufebe, Odomero G
2017-01-01
The literature reports that the health benefits of vegetable oil can be deteriorated by repeated heating, which leads to lipid oxidation and the formation of free radicals. Virgin coconut oil (VCO) is emerging as a functional food oil, and its health benefits are attributed to its potent polyphenolic compounds. We investigated the beneficial effect of VCO supplementation on lipid profile and liver and kidney markers in rats fed repeatedly heated palm kernel oil (HPO). Rats were divided into four groups (n = 5). The control group rats were fed a normal diet; group 2 rats were fed a 10% VCO supplemented diet; group 3 was administered 10 ml HPO/kg b.w. orally; group 4 was fed 10% VCO + 10 ml HPO/kg for 28 days. Subsequently, serum markers of liver damage (ALT, AST, ALP and albumin), kidney damage (urea, creatinine and uric acid), lipid profile and lipid ratios as cardiovascular risk indices were evaluated. HPO induced a significant increase in serum markers of liver and kidney damage, as well as concomitant lipid abnormalities and a marked reduction in serum HDL-C. The lipid ratios evaluated for atherogenic and coronary risk indices in rats administered HPO only were remarkably higher than in controls. It was observed that VCO supplementation attenuated the biochemical alterations, including the indices of cardiovascular risk. VCO supplementation demonstrates beneficial health effects against HPO-induced biochemical alterations in rats and may serve to modulate the adverse effects associated with consumption of repeatedly heated palm kernel oil.
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger
2009-01-01
Realized kernels use high-frequency data to estimate the daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock...... and find a remarkable level of agreement. We identify some features of the high-frequency data which are challenging for realized kernels: local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated......
Chen, Y.; Ho, C.; Chang, L.
2011-12-01
In recent decades, climate change caused by global warming has increased the frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely used tools, showing possible weather conditions under the predefined CO2 emission scenarios announced by the IPCC. Because the study area of a GCM is the entire earth, the grid sizes of GCMs are much larger than the basin scale. To overcome this gap, a statistical downscaling technique can transform regional-scale weather factors into basin-scale precipitation. Statistical downscaling techniques can be divided into three categories: transfer functions, weather generators and weather types. The first two categories describe the relationships between the weather factors and precipitation based respectively on deterministic algorithms, such as linear or nonlinear regression and ANNs, and stochastic approaches, such as Markov chain theory and statistical distributions. The weather-type approach has the ability to cluster weather factors, which are high-dimensional and continuous variables, into weather types, which are a limited number of discrete states. In this study, the proposed downscaling model integrates weather typing, using the K-means clustering algorithm, and a weather generator, using kernel density estimation. The study area is the Shihmen basin in northern Taiwan. The research process contains two steps, a calibration step and a synthesis step, with three sub-steps in the calibration step. First, weather factors, such as pressures, humidities and wind speeds, obtained from NCEP, and the precipitation observed at rainfall stations were collected for downscaling. Second, K-means clustering grouped the weather factors into four weather types. Third, the Markov chain transition matrices and the
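The calibration-and-synthesis idea can be sketched as K-means weather typing followed by a smoothed-bootstrap weather generator, i.e. sampling from a kernel density estimate of precipitation within each weather type. The data below are synthetic stand-ins for the NCEP predictors and station records, and only two weather types are used.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "weather factor" vectors and matching daily precipitation (mm).
factors = np.vstack([rng.normal(m, 0.5, (50, 2)) for m in (0.0, 3.0)])
precip = np.concatenate([rng.gamma(1.5, 2.0, 50), rng.gamma(4.0, 3.0, 50)])

def kmeans(X, k=2, iters=50):
    # Plain Lloyd iterations initialized at randomly chosen data points.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(factors)

def generate_precip(weather_type, n=1000, h=0.5):
    # Weather generator: smoothed bootstrap = sampling from the Gaussian
    # kernel density estimate of precipitation within the weather type.
    pool = precip[labels == weather_type]
    draws = rng.choice(pool, n) + rng.normal(0, h, n)
    return np.clip(draws, 0, None)

sims = [generate_precip(t).mean() for t in range(2)]
print("simulated mean precip per weather type:", np.round(sims, 2))
```

The generator reproduces distinct wet and dry regimes per weather type, which is the property the downscaling model exploits.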
Gradient estimates on the weighted p-Laplace heat equation
Wang, Lin Feng
2018-01-01
In this paper, by a regularization process we derive new gradient estimates for positive solutions to the weighted p-Laplace heat equation when the m-Bakry-Émery curvature is bounded from below by -K for some constant K ≥ 0. When the potential function is constant, these reduce to the gradient estimate established by Ni and Kotschwar for positive solutions to the p-Laplace heat equation on closed manifolds with nonnegative Ricci curvature as K ↘ 0, and to the Davies, Hamilton and Li-Xu gradient estimates for positive solutions to the heat equation on closed manifolds with Ricci curvature bounded from below when p = 2.
Estimating heat-to-heat variation in mechanical properties from a statistician's point of view
International Nuclear Information System (INIS)
Hebble, T.L.
1976-01-01
A statistical technique known as analysis of variance (ANOVA) is used to estimate the variance and standard deviation of differences among heats. The total variation of a collection of observations and how an ANOVA can be used to partition the total variation into its sources are discussed. Then, the ANOVA is adapted to published Japanese data indicating how to estimate heat-to-heat variation. Finally, numerical results are computed for several tensile and creep properties of Types 304 and 316 SS
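The one-way random-effects ANOVA behind such an analysis can be sketched directly: the between-heat and within-heat mean squares yield the heat-to-heat variance component as (MSB - MSW)/n for n specimens per heat. Simulated data stand in for the published Japanese measurements, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated tensile strengths for 6 heats, 4 specimens each (units arbitrary):
# true heat-to-heat s.d. 8, within-heat s.d. 3.
heat_means = rng.normal(520.0, 8.0, 6)
data = np.array([rng.normal(m, 3.0, 4) for m in heat_means])

k, n = data.shape
grand = data.mean()
msb = n * ((data.mean(axis=1) - grand) ** 2).sum() / (k - 1)   # between heats
msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))

# Random-effects variance component: sigma^2_heat = (MSB - MSW) / n,
# truncated at zero when MSB < MSW.
var_heat = max(0.0, (msb - msw) / n)
print(f"within-heat sd: {msw ** 0.5:.2f}, heat-to-heat sd: {var_heat ** 0.5:.2f}")
```

Confidence intervals and tolerance limits, as in the record above, would then be built from these two variance estimates.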
Estimating the Heat of Formation of Foodstuffs and Biomass
Energy Technology Data Exchange (ETDEWEB)
Burnham, Alan K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2010-11-23
Calorie estimates for expressing the energy content of food are common; however, they are inadequate for the purpose of estimating the chemically defined heat of formation of foodstuffs, for two reasons. First, they assume utilization factors by the body [1,2,3]. Second, they are usually based on average values for their components. The best way to solve this problem would be to measure the heat of combustion of each material of interest. The heat of formation can then be calculated from the elemental composition and the heats of formation of CO2, H2O, and SO2. However, heats of combustion are not always available. Sometimes only an elemental analysis is available, or in other cases a breakdown into protein, carbohydrates, and lipids. A simple way is needed to calculate the heat of formation from the various sorts of data commonly available. This report presents improved correlations relating the heats of combustion and formation to the elemental composition, moisture content, and ash content. The correlations are also able to calculate the heats of combustion of carbohydrates, proteins, and lipids individually, including how they depend on elemental composition. The starting point for these correlations is the set of relationships commonly used to estimate the heat of combustion of fossil fuels, modified slightly to agree better with the ranges of chemical structures found in foodstuffs and biomass.
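The workflow can be illustrated with the classic Dulong correlation as a stand-in for the report's improved fits (the report's own coefficients are not reproduced here): estimate the heat of combustion from the elemental composition, then back out the heat of formation from the heats of formation of the combustion products.

```python
# Heats of formation of the combustion products, kJ/mol.
DHF_CO2 = -393.5   # CO2(g)
DHF_H2O = -285.8   # H2O(l)
DHF_SO2 = -296.8   # SO2(g)

def hhv_dulong(C, H, O, S=0.0):
    """Higher heating value, MJ/kg, from the classic Dulong-type
    correlation (fossil-fuel coefficients, so only a rough stand-in
    for foodstuffs). Inputs are mass percent, dry basis."""
    return 0.3383 * C + 1.443 * (H - O / 8.0) + 0.0942 * S

def heat_of_formation(C, H, O, S=0.0):
    """kJ per kg of fuel: dHf(fuel) = sum dHf(products) + HHV."""
    products = (10.0 * C / 12.011) * DHF_CO2 \
             + (10.0 * H / 1.008) / 2.0 * DHF_H2O \
             + (10.0 * S / 32.06) * DHF_SO2        # 10 * percent = g per kg
    return products + 1000.0 * hhv_dulong(C, H, O, S)

# Glucose-like carbohydrate: 40.0% C, 6.7% H, 53.3% O (illustrative input).
print(f"HHV ~ {hhv_dulong(40.0, 6.7, 53.3):.1f} MJ/kg")
print(f"dHf ~ {heat_of_formation(40.0, 6.7, 53.3):.0f} kJ/kg")
```

Dulong's coefficients are known to underestimate for carbohydrate-rich materials, which is exactly the gap the report's improved correlations address.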
Adaptive metric kernel regression
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
2000-01-01
Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate...... regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
Adaptive Metric Kernel Regression
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
1998-01-01
Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
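The adaptive-metric idea of the two records above can be sketched as a Nadaraya-Watson regressor whose per-dimension scale factors are chosen to minimise a leave-one-out cross-validation error. The grid search below is a crude stand-in for the gradient-based minimisation of the papers, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# Target depends only on x0; x1 is an irrelevant input the adaptive metric
# should learn to down-weight.
X = rng.uniform(-1, 1, (80, 2))
y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.05, 80)

def loo_error(scales):
    # Nadaraya-Watson with a diagonal metric; leave-one-out squared error.
    d2 = ((X[:, None, :] - X[None, :, :]) * scales) ** 2
    W = np.exp(-d2.sum(-1))
    np.fill_diagonal(W, 0.0)                 # leave one out
    pred = (W @ y) / W.sum(1)
    return ((pred - y) ** 2).mean()

# Crude adaptation: grid search over the per-dimension scale factors.
grid = [0.3, 1.0, 3.0, 10.0]
best = min(((loo_error(np.array([a, b])), a, b) for a in grid for b in grid))
err, s0, s1 = best
print(f"best scales: x0={s0}, x1={s1}, LOO MSE={err:.4f}")
```

The search shrinks the scale on the irrelevant dimension relative to the informative one, which is the variable-selection behaviour the abstracts describe.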
Wildemeersch, S; Jamin, P; Orban, P; Hermans, T; Klepikova, M; Nguyen, F; Brouyère, S; Dassargues, A
2014-11-15
Geothermal energy systems, closed or open, are increasingly considered for heating and/or cooling buildings. The efficiency of such systems depends on the thermal properties of the subsurface. Therefore, feasibility and impact studies performed prior to their installation should include a field characterization of thermal properties and a heat transfer model using parameter values measured in situ. However, there is a lack of in situ experiments and methodology for performing such a field characterization, especially for open systems. This study presents an in situ experiment designed for estimating heat transfer parameters in shallow alluvial aquifers, with a focus on the specific heat capacity. The experiment consists of simultaneously injecting hot water and a chemical tracer into the aquifer and monitoring the evolution of groundwater temperature and concentration in the recovery well (and possibly in other piezometers located down gradient). Temperature and concentrations are then used for estimating the specific heat capacity. The first method for estimating this parameter is based on modeling the chemical tracer and temperature breakthrough curves at the recovery well in series. The second method is based on an energy balance. The values of specific heat capacity estimated by the two methods (2.30 and 2.54 MJ/m³/K) for the experimental site in the alluvial aquifer of the Meuse River (Belgium) are almost identical and consistent with values found in the literature. Temperature breakthrough curves in other piezometers are not required for estimating the specific heat capacity. However, they highlight that heat transfer in the alluvial aquifer of the Meuse River is complex and contrasted, with different dominant processes depending on depth, leading to significant vertical heat exchange between the upper and lower parts of the aquifer. Furthermore, these temperature breakthrough curves could be included in the calibration of a complex heat transfer model for
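The second (energy balance) method can be illustrated with a toy calculation; all numbers below are hypothetical, not the paper's field data:

```python
# Energy-balance sketch (hypothetical numbers): the heat carried in with the
# injected hot water is equated with the heat stored in the aquifer volume
# swept by the plume, which yields the aquifer's volumetric heat capacity.
C_WATER = 4.18e6          # volumetric heat capacity of water, J/m^3/K

def specific_heat_capacity(v_injected, t_injected, v_swept, dT_aquifer, t0):
    """Volumetric heat capacity of the aquifer (J/m^3/K) from a heat balance.
    v_injected: injected water volume (m^3) at temperature t_injected (degC)
    v_swept: aquifer volume swept by the plume (m^3)
    dT_aquifer: mean temperature rise of that volume (K); t0: ambient temp."""
    energy_in = C_WATER * v_injected * (t_injected - t0)   # J
    return energy_in / (v_swept * dT_aquifer)

# e.g. 10 m^3 of water at 40 degC into a 12 degC aquifer, warming 150 m^3 by 3.2 K
c = specific_heat_capacity(10.0, 40.0, 150.0, 3.2, 12.0)
print(round(c / 1e6, 2), "MJ/m^3/K")
```

The printed value lands in the same few-MJ/m³/K range as the field estimates quoted in the abstract, which is what makes such a balance a useful consistency check.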
Multivariate and semiparametric kernel regression
Härdle, Wolfgang; Müller, Marlene
1997-01-01
The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger
2011-01-01
We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show that this new consistent estimator is guaranteed to be positive semi-definite, is robust to measurement error of certain types, and can also handle non-synchronous trading. It is the first estimator...... which has these three properties, which are all essential for empirical work in this area. We derive the large sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results to previous work which has used
Estimating Nitrogen Availability of Heat-Dried Biosolids
International Nuclear Information System (INIS)
Cogger, C.G.; Bary, A.I.; Myhre, E.A.
2011-01-01
As heat-dried biosolids become more widely produced and marketed, it is important to improve estimates of N availability from these materials. Objectives were to compare plant-available N among three different heat-dried biosolids and determine whether current guidelines were adequate for estimating application rates. Heat-dried biosolids were surface applied to tall fescue (Festuca arundinacea Schreb.) in Washington State, USA, and forage yield and N uptake were measured for two growing seasons following application. Three rates of urea and a zero-N control were used to calculate N fertilizer efficiency regressions. Application-year plant-available N (estimated as urea N equivalent) for two biosolids exceeded 60% of total N applied, while the urea N equivalent for the third was 45%. Residual (second-year) urea N equivalent ranged from 5 to 10%. Guidelines for the Pacific Northwest USA recommend mineralization estimates of 35 to 40% for heat-dried biosolids, but this research shows that some heat-dried materials fall well above that range.
Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm
African Journals Online (AJOL)
In this paper, we shall use higher-order hybrid Gaussian kernel in a meshsize boosting algorithm in kernel density estimation. Bias reduction is guaranteed in this scheme like other existing schemes but uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...
Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...
African Journals Online (AJOL)
This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...
African Journals Online (AJOL)
This paper proposes the use of adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
Directory of Open Access Journals (Sweden)
Zhi-Sai Ma
2017-01-01
Modal parameter estimation plays an important role in vibration-based damage detection and merits further attention and investigation, as changes in modal parameters are usually used as damage indicators. This paper focuses on the problem of output-only modal parameter recursive estimation of time-varying structures based upon parameterized representations of the time-dependent autoregressive moving average (TARMA) model. A kernel ridge regression functional series TARMA (FS-TARMA) recursive identification scheme is proposed and subsequently employed for the modal parameter estimation of a numerical three-degree-of-freedom time-varying structural system and a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudolinear regression FS-TARMA approach via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics in a recursive manner.
International Nuclear Information System (INIS)
Huang, C.-H.; Wu, H.-H.
2006-01-01
In the present study an inverse hyperbolic heat conduction problem is solved by the conjugate gradient method (CGM) to estimate the unknown boundary heat flux from boundary temperature measurements. The inverse solutions are validated by numerical experiments in which three different heat flux distributions are to be determined. Results show that the inverse solutions can always be obtained with arbitrary initial guesses of the boundary heat flux. Moreover, the drawbacks of a previous study of this similar inverse problem, namely that (1) the inverse solution has phase error and (2) the inverse solution is sensitive to measurement error, are avoided in the present algorithm. Finally, it is concluded that accurate boundary heat flux can be estimated in this study.
International Nuclear Information System (INIS)
Boehlke, S.; Niegoth, H.
2012-01-01
In the nuclear power plant Leibstadt (KKL), large components will be dismantled during the next year and stored for final disposal within the interim storage facility ZENT at the NPP site. Before construction of ZENT, appropriate estimates of the local dose rate inside and outside the building and of the collective dose for normal operation have to be performed. The shielding calculations are based on the properties of the stored components and radiation sources and on the concepts for working place requirements. The installation of control and monitoring areas will depend on these calculations. For the determination of the shielding potential of concrete walls and steel doors under the defined boundary conditions, point-kernel codes like MICROSHIELD® are used. Complex problems cannot be modeled with this code. Therefore the point-kernel code VISIPLAN® was developed for the determination of local dose distribution functions in 3D models. The possibility of entering motion sequences allows an optimization of collective dose estimates for the operational phases of a nuclear facility.
Estimation of bulk transfer coefficient for latent heat flux (Ce)
Digital Repository Service at National Institute of Oceanography (India)
Sadhuram, Y.
The bulk transfer coefficient for latent heat flux (Ce) has been estimated over the Arabian Sea from the moisture budget during the pre-monsoon season of 1988. The computations have been made over two regions (A: 0-8°N, 60-68°E; B: 0
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
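The idea of restricting deconvolution to reliable Fourier entries can be sketched in 1-D with a simplified Wiener-style stand-in for the paper's model; the threshold `tau`, the regularizer `eps`, and all signals are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
x = np.zeros(n); x[30] = 1.0; x[70] = -0.5          # sharp latent signal
k_true = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2); k_true /= k_true.sum()
k_est = k_true + 0.02 * rng.standard_normal(k_true.size)  # inaccurate kernel

def conv_circ(sig, ker):
    """Circular convolution via the FFT, padding the kernel to signal length."""
    K = np.fft.fft(np.pad(ker, (0, sig.size - ker.size)))
    return np.real(np.fft.ifft(np.fft.fft(sig) * K))

y = conv_circ(x, k_true)                            # observed blurry signal

def partial_wiener(y, ker, tau=0.1, eps=1e-2):
    """Wiener-style deconvolution restricted to 'reliable' Fourier entries:
    frequencies where the estimated kernel has non-negligible magnitude."""
    K = np.fft.fft(np.pad(ker, (0, y.size - ker.size)))
    mask = np.abs(K) > tau                 # partial map of trusted entries
    X = np.where(mask, np.fft.fft(y) * np.conj(K) / (np.abs(K) ** 2 + eps), 0)
    return np.real(np.fft.ifft(X))

x_hat = partial_wiener(y, k_est, tau=0.1)
print(x_hat.shape, bool(np.isfinite(x_hat).all()))
```

The paper's model additionally alternates estimation of the partial map and the latent image with an E-M algorithm; this sketch only shows the masked-frequency inversion itself.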
Simple future weather files for estimating heating and cooling demand
DEFF Research Database (Denmark)
Cox, Rimante Andrasiunaite; Drews, Martin; Rode, Carsten
2015-01-01
useful estimates of future energy demand of a building. Experimental results based on both the degree-day method and dynamic simulations suggest that this is indeed the case. Specifically, heating demand estimates were found to be within a few per cent of one another, while estimates of cooling demand...... were slightly more varied. This variation was primarily due to the very few hours of cooling that were required in the region examined. Errors were found to be most likely when the air temperatures were close to the heating or cooling balance points, where the energy demand was modest and even...... relatively large errors might thus result in only modest absolute errors in energy demand....
Recov'Heat: An estimation tool of urban waste heat recovery potential in sustainable cities
Goumba, Alain; Chiche, Samuel; Guo, Xiaofeng; Colombert, Morgane; Bonneau, Patricia
2017-02-01
Waste heat recovery is considered an efficient way to increase carbon-free green energy utilization and to reduce greenhouse gas emissions. Especially in urban areas, several sources such as sewage water, industrial processes, waste incinerator plants, etc., are still rarely exploited. Their integration into a district heating system providing heating and/or domestic hot water could be beneficial for both energy companies and local governments. EFFICACITY, a French research institute focused on urban energy transition, has developed an estimation tool for the different waste heat sources potentially exploitable in a sustainable city. This article presents the development method of this decision-making tool which, by providing both energy and economic analyses, helps local communities and energy service companies carry out preliminary studies of heat recovery projects.
Panel data specifications in nonparametric kernel regression
DEFF Research Database (Denmark)
Czekaj, Tomasz Gerard; Henningsen, Arne
parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...
Heat experiment design to estimate temperature dependent thermal properties
International Nuclear Information System (INIS)
Romanovski, M
2008-01-01
Experimental conditions are studied to optimize transient experiments for estimating temperature-dependent thermal conductivity and volumetric heat capacity. The mathematical model of the specimen is the one-dimensional heat equation with boundary conditions of the second kind. Thermal properties are assumed to vary nonlinearly with temperature. Experimental conditions refer to the thermal loading scheme, sampling times, and sensor location. A numerical model of the experimental configurations is studied to elicit the optimal conditions. The numerical solution of the design problem is formulated as a regularization scheme with stabilizer minimization and no regularization parameter. An explicit design criterion is used to reveal the optimal sensor location, heating duration, and flux magnitude. The results indicate that even this strongly nonlinear experimental design problem admits aggregation of its solution and has a strictly defined optimal measurement scheme. An additional region of temperature measurements with allowable identification error is revealed.
Regularization and error estimates for nonhomogeneous backward heat problems
Directory of Open Access Journals (Sweden)
Duc Trong Dang
2006-01-01
In this article, we study the inverse time problem for the non-homogeneous heat equation, which is a severely ill-posed problem. We regularize this problem using the quasi-reversibility method and then obtain error estimates on the approximate solutions. Solutions are calculated by the contraction principle and shown in numerical experiments. We also obtain rates of convergence to the exact solution.
Spectral estimates of net radiation and soil heat flux
International Nuclear Information System (INIS)
Daughtry, C.S.T.; Kustas, W.P.; Moran, M.S.; Pinter, P.J. Jr.; Jackson, R.D.; Brown, P.W.; Nichols, W.D.; Gay, L.W.
1990-01-01
Conventional methods of measuring surface energy balance are point measurements and represent only a small area. Remote sensing offers a potential means of measuring outgoing fluxes over large areas at the spatial resolution of the sensor. The objective of this study was to estimate net radiation (Rn) and soil heat flux (G) using remotely sensed multispectral data acquired from an aircraft over large agricultural fields. Ground-based instruments measured Rn and G at nine locations along the flight lines. Incoming fluxes were also measured by ground-based instruments. Outgoing fluxes were estimated using remotely sensed data. Remote Rn, estimated as the algebraic sum of incoming and outgoing fluxes, slightly underestimated Rn measured by the ground-based net radiometers. The mean absolute errors for remote Rn minus measured Rn were less than 7%. Remote G, estimated as a function of a spectral vegetation index and remote Rn, slightly overestimated measured G; however, the mean absolute error for remote G was 13%. Some of the differences between measured and remote values of Rn and G are associated with differences in instrument designs and measurement techniques. The root mean square error for available energy (Rn − G) was 12%. Thus, methods using both ground-based and remotely sensed data can provide reliable estimates of the available energy, which can be partitioned into sensible and latent heat under non-advective conditions.
Kang, Youngok; Cho, Nahye; Son, Serin
2018-01-01
The purpose of this study is to analyze how the spatiotemporal characteristics of traffic accidents involving the elderly population in Seoul are changing by time period. We applied kernel density estimation and hotspot analyses to analyze the spatial characteristics of elderly people's traffic accidents, and the space-time cube, emerging hotspot, and space-time kernel density estimation analyses to analyze the spatiotemporal characteristics. In addition, we analyzed elderly people's traffic accidents by dividing cases into those in which the drivers were elderly people and those in which elderly people were victims of traffic accidents, and used the traffic accidents data in Seoul for 2013 for analysis. The main findings were as follows: (1) the hotspots for elderly people's traffic accidents differed according to whether they were drivers or victims. (2) The hourly analysis showed that the hotspots for elderly drivers' traffic accidents are in specific areas north of the Han River during the period from morning to afternoon, whereas the hotspots for elderly victims are distributed over a wide area from daytime to evening. (3) Monthly analysis showed that the hotspots are weak during winter and summer, whereas they are strong in the hiking and climbing areas in Seoul during spring and fall. Further, elderly victims' hotspots are more sporadic than elderly drivers' hotspots. (4) The analysis for the entire period of 2013 indicates that traffic accidents involving elderly people are increasing in specific areas on the north side of the Han River. We expect the results of this study to aid in reducing the number of traffic accidents involving elderly people in the future.
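The kernel density estimation step behind such hotspot maps can be sketched with a hand-rolled 2-D Gaussian KDE on synthetic point data (illustrative only, not the Seoul accident records):

```python
import numpy as np

def kde2d(points, grid_xy, bandwidth=1.0):
    """Plain 2-D Gaussian kernel density estimate evaluated at grid points."""
    d2 = ((grid_xy[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-0.5 * d2 / bandwidth**2).sum(axis=1) / (
        points.shape[0] * 2 * np.pi * bandwidth**2)

rng = np.random.default_rng(2)
# Synthetic "accident" locations: a dense cluster plus background noise.
cluster = rng.normal([5.0, 5.0], 0.5, (80, 2))
noise = rng.uniform(0, 10, (40, 2))
pts = np.vstack([cluster, noise])

gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
dens = kde2d(pts, grid, bandwidth=0.6)
hotspot = grid[dens.argmax()]              # densest grid cell = hotspot centre
print(np.round(hotspot, 1))
```

A space-time KDE extends the same idea with an additional temporal kernel dimension, which is how the study separates hourly, monthly, and yearly patterns.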
Estimation of heat transfer and heat source in a molten pool
Energy Technology Data Exchange (ETDEWEB)
Yun, J.I.; Suh, K.Y.; Kang, C.S. [Seoul National Univ., Dept. of Nuclear Engineering (Korea, Republic of)
2001-07-01
Heat transfer and fluid flow in a molten pool are influenced by the internal volumetric heat generated from the radioactive decay of fission product species retained in the pool. The pool superheat is determined from the overall energy balance that equates the heat production rate to the heat loss rate. The decay heat of fission products in the pool was estimated as the product of the mass concentration and the energy conversion factor of each fission product. For the calculation of the heat generation rate in the pool, twenty-nine (29) elements were chosen and classified by their chemical properties. The mass concentration of a fission product is obtained from the released fraction and the tabular output of the ORIGEN 2 code. The initial core and pool inventories at each time can also be estimated using ORIGEN 2. The released fraction of each fission product is calculated based on bubble dynamics and mass transport. A numerical analysis was performed for the TMI-2 accident. The pool is assumed to have a partially filled hemispherical geometry, 1.45 m in radius and 32,700 kg in mass. The change of pool geometry during the numerical calculation was neglected. The peak temperature decreased sizably, by about 60 K, as the fission products were released from the pool. (author)
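The decay-heat bookkeeping (retained inventory times an energy conversion factor, summed over fission products) can be sketched as follows; all inventories, release fractions, and factors below are made-up illustrative numbers, not ORIGEN 2 output:

```python
# Sketch of the decay-heat estimate described above: pool heat generation is
# the sum over retained fission products of (retained mass) x (decay power
# per unit mass). All numbers are hypothetical, for illustration only.
def pool_decay_heat(inventory, released_fraction, energy_factor):
    """inventory: core inventory of each nuclide group (kg)
    released_fraction: fraction released from the pool (0..1)
    energy_factor: decay power per unit mass (W/kg)
    returns total heat generation rate retained in the pool (W)."""
    total = 0.0
    for key in inventory:
        retained = inventory[key] * (1.0 - released_fraction[key])
        total += retained * energy_factor[key]
    return total

inv = {"Cs": 120.0, "I": 15.0, "Sr": 55.0}          # kg (hypothetical)
rel = {"Cs": 0.6, "I": 0.9, "Sr": 0.05}             # released fractions
ef = {"Cs": 200.0, "I": 350.0, "Sr": 150.0}         # W/kg (hypothetical)
print(pool_decay_heat(inv, rel, ef))
```

Releasing volatile products (here the large Cs and I fractions) visibly lowers the retained heat source, which is the mechanism behind the ~60 K drop in peak pool temperature reported above.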
Directory of Open Access Journals (Sweden)
Yang Zhang
2015-11-01
Prognostics is necessary to ensure the reliability and safety of lithium-ion batteries for hybrid electric vehicles or satellites. This process can be achieved by capacity estimation, which is a direct fading indicator for assessing the state of health of a battery. However, the capacity of a lithium-ion battery onboard is difficult to monitor. This paper presents a data-driven approach for online capacity estimation. First, six novel features are extracted from charge/discharge cycles and used as indirect health indicators. An adaptive multi-kernel relevance vector machine (MKRVM) based on an accelerated particle swarm optimization algorithm is used to determine the optimal parameters of MKRVM and characterize the relationship between the extracted features and battery capacity. The overall estimation process comprises offline and online stages. A supervised learning step in the offline stage is established for model verification to ensure the generalizability of MKRVM for online application. Cross-validation is further conducted to validate the performance of the proposed model. Experiment and comparison results show the effectiveness, accuracy, efficiency, and robustness of the proposed approach for online capacity estimation of lithium-ion batteries.
Institute of Scientific and Technical Information of China (English)
E.M.E. ZAYED
2004-01-01
The asymptotic expansion of the heat kernel Θ(t) = Σ_{i=1}^∞ exp(−λ_i t), where {λ_i}_{i=1}^∞ are the eigenvalues of the negative Laplacian −Δ_n = −Σ_{k=1}^n (∂/∂x_k)² in R^n (n = 2 or 3), is studied for short time t for a general bounded domain Ω with a smooth boundary ∂Ω. In this paper, we consider the case of a finite number of Dirichlet conditions φ = 0 on Γ_i (i = 1, …, J), Neumann conditions ∂φ/∂ν_i = 0 on Γ_i (i = J+1, …, k), and Robin conditions (∂/∂ν_i + γ_i)φ = 0 on Γ_i (i = k+1, …, m), where the γ_i are piecewise smooth positive impedance functions and ∂Ω = ∪_{i=1}^m Γ_i. We construct the required asymptotics in the form of a power series over t. The senior coefficients in this series are specified as functionals of the geometric shape of the domain Ω. This result is applied to calculate the one-particle partition function of a "special ideal gas", i.e., the set of non-interacting particles set up in a box with Dirichlet, Neumann and Robin boundary conditions for the appropriate wave function. Calculation of the thermodynamic quantities for the ideal gas, such as the internal energy, pressure and specific heat, reveals that these quantities alone are incapable of distinguishing between two different shapes of the domain. This conclusion seems intuitively clear because it is based on the limited information given by a one-particle partition function; nevertheless, its formal theoretical motivation is of some interest.
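For orientation, the classical two-term short-time expansion in two dimensions (a standard result; the entry above concerns the higher coefficients under mixed Dirichlet/Neumann/Robin conditions) reads, for pure Dirichlet conditions on a smooth boundary:

```latex
\Theta(t) \sim \frac{|\Omega|}{4\pi t} \;-\; \frac{|\partial\Omega|}{8\sqrt{\pi t}}
\;+\; \frac{1}{6}\,\chi(\Omega) \;+\; O(\sqrt{t}), \qquad t \to 0^{+},
```

with the sign of the boundary term reversed for Neumann conditions; here $|\Omega|$ is the area of the domain, $|\partial\Omega|$ the length of its boundary, and $\chi(\Omega)$ its Euler characteristic.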
The Impacts of Heating Strategy on Soil Moisture Estimation Using Actively Heated Fiber Optics.
Dong, Jianzhi; Agliata, Rosa; Steele-Dunne, Susan; Hoes, Olivier; Bogaard, Thom; Greco, Roberto; van de Giesen, Nick
2017-09-13
Several recent studies have highlighted the potential of Actively Heated Fiber Optics (AHFO) for high resolution soil moisture mapping. In AHFO, the soil moisture can be calculated from the cumulative temperature (T_cum), the maximum temperature (T_max), or the soil thermal conductivity determined from the cooling phase after heating (λ). This study investigates the performance of the T_cum, T_max and λ methods for different heating strategies, i.e., differences in the duration and input power of the applied heat pulse. The aim is to compare the three approaches and to determine which is best suited to field applications where the power supply is limited. Results show that increasing the input power of the heat pulses makes it easier to differentiate between dry and wet soil conditions, which leads to an improved accuracy. Results suggest that if the power supply is limited, the heating strength is insufficient for the λ method to yield accurate estimates. Generally, the T_cum and T_max methods have similar accuracy. If the input power is limited, increasing the heat pulse duration can improve the accuracy of the AHFO method for both of these techniques. In particular, extending the heating duration can significantly increase the sensitivity of T_cum to soil moisture. Hence, the T_cum method is recommended when the input power is limited. Finally, results also show that up to 50% of the cable temperature change during the heat pulse can be attributed to the soil background temperature, i.e., soil temperature changed by the net solar radiation. A method is proposed to correct for this background temperature change. Without correction, soil moisture information can be completely masked by the background temperature error.
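The T_cum feature itself is just the integrated temperature rise of the heated cable over the heat-pulse window; a toy sketch with hypothetical response curves (not AHFO field data) shows why wetter soil, which conducts heat away faster, yields a smaller T_cum:

```python
import numpy as np

def t_cum(times, temps, t_background):
    """Trapezoidal integral of the temperature rise over the heating window (K*s)."""
    y = np.asarray(temps) - t_background
    x = np.asarray(times)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

t = np.linspace(0, 120, 121)                 # 2-minute heat pulse, 1 s steps
rise_dry = 4.0 * (1 - np.exp(-t / 30.0))     # hypothetical dry-soil response
rise_wet = 2.0 * (1 - np.exp(-t / 30.0))     # hypothetical wet-soil response
print(t_cum(t, 15.0 + rise_dry, 15.0) > t_cum(t, 15.0 + rise_wet, 15.0))
```

Extending the heating window lengthens the integration interval, which is one intuitive reason the study finds longer pulses increase the sensitivity of T_cum to soil moisture.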
Estimating Antarctic Geothermal Heat Flux using Gravity Inversion
Vaughan, Alan P. M.; Kusznir, Nick J.; Ferraccioli, Fausto; Leat, Phil T.; Jordan, Tom A. R. M.; Purucker, Michael E.; Golynsky, A. V.; Sasha Rogozhina, Irina
2013-04-01
Geothermal heat flux (GHF) in Antarctica is very poorly known. We have determined (Vaughan et al. 2012) top basement heat-flow for Antarctica and adjacent rifted continental margins using gravity inversion mapping of crustal thickness and continental lithosphere thinning (Chappell & Kusznir 2008). Continental lithosphere thinning and post-breakup residual thicknesses of continental crust determined from gravity inversion have been used to predict the preservation of continental crustal radiogenic heat productivity and the transient lithosphere heat-flow contribution within thermally equilibrating rifted continental and oceanic lithosphere. The sensitivity of present-day Antarctic top basement heat-flow to initial continental radiogenic heat productivity, continental rift and margin breakup age has been examined. Knowing the GHF distribution for East Antarctica, and the Gamburtsev Subglacial Mountains (GSM) region in particular, is critical because: 1) the GSM likely acted as a key nucleation point for the East Antarctic Ice Sheet (EAIS); 2) the region may contain the oldest ice of the EAIS - a prime target for future ice core drilling; 3) GHF is important for understanding proposed ice accretion at the base of the EAIS in the GSM and its links to sub-ice hydrology (Bell et al. 2011). An integrated multi-dataset-based GHF model for East Antarctica is planned that will resolve the wide range of estimates previously published using single datasets. The new map and existing GHF distribution estimates available for Antarctica will be evaluated using direct ice temperature measurements obtained from deep ice cores, estimates of GHF derived from subglacial lakes, and a thermodynamic ice-sheet model of the Antarctic Ice Sheet driven by past climate reconstructions and each of the analysed heat flow maps, as has recently been done for the Greenland region (Rogozhina et al. 2012). References Bell, R.E., Ferraccioli, F., Creyts, T.T., Braaten, D., Corr, H., Das, I., Damaske, D., Frearson, N
Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing
2012-01-01
Segmentation of positron emission tomography (PET) is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method needs a prior estimation of parameters such as the number of clusters and appropriate initialized values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial to registration of disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnosis and to estimate standardized uptake values (SUVs) of region of interests (ROIs) in PET images. Therefore, simulation studies are conducted to apply spherical targets to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. The PET images of a rat brain are used to compare the segmented shape and area of the cerebral cortex by the K-Means method and the proposed method by volume rendering. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than those by the K-Means method.
Pipeline heating method based on optimal control and state estimation
Energy Technology Data Exchange (ETDEWEB)
Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail: fvianna@petrobras.com.br; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail: helcio@mecanica.ufrj.br; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail: dulikrav@fiu.edu
2010-07-01
In the production of oil and gas wells in deep waters, the flow of hydrocarbons through pipelines is a challenging problem. This environment presents high hydrostatic pressures and low seabed temperatures, which can favor the formation of solid deposits that, in critical operating conditions such as unplanned shutdowns, may result in a pipeline blockage and consequently incur large financial losses. There are different methods to protect the system, but nowadays thermal insulation and chemical injection are the standard solutions normally used. An alternative method of flow assurance is to heat the pipeline. This concept, known as an active heating system, aims at keeping the temperature of the produced fluid above a safe reference level in order to avoid the formation of solid deposits. The objective of this paper is to introduce a Bayesian statistical approach for the state estimation problem, in which the state variables are the transient temperatures within a pipeline cross-section, and to use optimal control theory as a design tool for a typical heating system during a simulated shutdown condition. An application example is presented to illustrate how Bayesian filters can be used to reconstruct the temperature field from temperature measurements supposedly available on the external surface of the pipeline. The temperatures predicted with the Bayesian filter are then utilized in a control approach for a heating system used to maintain the temperature within the pipeline above the critical temperature of formation of solid deposits. The physical problem consists of a pipeline cross-section represented by a circular domain with four points over the pipe wall representing heating cables. The fluid is considered stagnant, homogeneous, isotropic and with constant thermo-physical properties. The mathematical formulation governing the direct problem was solved with the finite volume method and for the solution of the state estimation problem
Series load induction heating inverter state estimator using Kalman filter
Directory of Open Access Journals (Sweden)
Szelitzky T.
2011-12-01
LQR and H2 controllers require access to the states of the controlled system. The modeling method based on the describing function with Fourier series results in a model with unmeasurable states. For this reason, we propose a Kalman filter based state estimator, which not only filters the input signals but also computes the unobservable states of the system. The algorithm of the filter was implemented in LabVIEW v8.6 and tested on recorded data obtained from a 10-40 kHz series load frequency-controlled induction heating inverter.
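The abstract does not reproduce the filter equations; below is a minimal sketch of the underlying idea (a linear Kalman filter recovering an unmeasured state from noisy measurements of another state), using an assumed toy constant-velocity system rather than the inverter model from the paper.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 2-state system: only position is measured; velocity is unobservable directly.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])            # measurement picks out the first state
Q = 1e-4 * np.eye(2)                  # (assumed) process noise covariance
R = np.array([[0.01]])                # (assumed) measurement noise covariance

rng = np.random.default_rng(0)
x_true = np.array([0.0, 1.0])         # true velocity is 1.0
x_est, P = np.zeros(2), np.eye(2)
for _ in range(200):
    x_true = A @ x_true
    z = H @ x_true + rng.normal(0, 0.1, size=1)
    x_est, P = kalman_step(x_est, P, z, A, H, Q, R)

print(round(x_est[1], 2))             # estimated (unmeasured) velocity
```

The filter both smooths the noisy position measurements and reconstructs the velocity state, which is the role the estimator plays for the immeasurable inverter states in the paper.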
Using Gravity Inversion to Estimate Antarctic Geothermal Heat Flux
Vaughan, Alan P. M.; Kusznir, Nick J.; Ferraccioli, Fausto; Leat, Phil T.; Jordan, Tom A. R. M.; Purucker, Michael E.; (Sasha) Golynsky, A. V.; Rogozhina, Irina
2014-05-01
New modelling studies for Greenland have recently underlined the importance of GHF for long-term ice sheet behaviour (Petrunin et al. 2013). Revised determinations of top basement heat-flow for Antarctica and adjacent rifted continental margins using gravity inversion mapping of crustal thickness and continental lithosphere thinning (Chappell & Kusznir 2008), now using BedMap2 data, have provided improved estimates of geothermal heat flux (GHF) in Antarctica, where it is very poorly known. Continental lithosphere thinning and post-breakup residual thicknesses of continental crust determined from gravity inversion have been used to predict the preservation of continental crustal radiogenic heat productivity and the transient lithosphere heat-flow contribution within thermally equilibrating rifted continental and oceanic lithosphere. The sensitivity of present-day Antarctic top basement heat-flow to initial continental radiogenic heat productivity and to continental rift and margin breakup age has been examined. Recognition of the East Antarctic Rift System (EARS), a major Permian to Cretaceous age rift system that appears to extend from the continental margin at the Lambert Rift to the South Pole region, a distance of 2500 km (Ferraccioli et al. 2011), and is comparable in scale to the well-studied East African rift system, highlights that crustal variability in interior Antarctica is much greater than previously assumed. GHF is also important for understanding proposed ice accretion at the base of the East Antarctic Ice Sheet (EAIS) in the Gamburtsev Subglacial Mountains (GSM) and its links to sub-ice hydrology (Bell et al. 2011). References Bell, R.E., Ferraccioli, F., Creyts, T.T., Braaten, D., Corr, H., Das, I., Damaske, D., Frearson, N., Jordan, T., Rose, K., Studinger, M. & Wolovick, M. 2011. Widespread persistent thickening of the East Antarctic Ice Sheet by freezing from the base. Science, 331 (6024), 1592-1595. Chappell, A.R. & Kusznir, N.J. 2008. Three-dimensional gravity inversion for Moho depth at rifted continental margins
Energy Technology Data Exchange (ETDEWEB)
Fan, J; Fan, J; Hu, W; Wang, J [Fudan University Shanghai Cancer Center, Shanghai, Shanghai (China)
2016-06-15
Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimate of the conditional probability of the dose given the values of the predictive features. For a new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this distribution. Integrating the resulting probability distribution for the dose yields an estimate of the DVH. The 2D KDE is used to estimate the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organ at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinates, are taken as the predictive features representing the OAR-target spatial relationship. The feasibility of the method has been demonstrated on rectum, breast and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between the two DVHs for each cancer type, and the average of the relative point-wise differences is about 5%, within a clinically acceptable extent. Conclusion: According to the results of this study, our method can be used to predict a clinically acceptable DVH and has the ability to evaluate the quality and consistency of treatment planning.
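The clinical pipeline is not reproduced here; the sketch below illustrates only the core statistical step (a 2D product-Gaussian KDE of a joint (feature, dose) distribution, from which a conditional mean dose is read off) on synthetic data. The single "distance-to-target" feature, the dose model and all bandwidths are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def kde2d(xs, ys, x, y, hx, hy):
    """Product-Gaussian 2D kernel density estimate at the point (x, y)."""
    kx = np.exp(-0.5 * ((x - xs) / hx) ** 2)
    ky = np.exp(-0.5 * ((y - ys) / hy) ** 2)
    return np.mean(kx * ky) / (2 * np.pi * hx * hy)

# Synthetic "training plans": feature = distance to target, response = dose.
rng = np.random.default_rng(1)
dist = rng.uniform(0, 3, 2000)
dose = np.exp(-dist) + rng.normal(0, 0.05, 2000)   # dose falls off with distance

def conditional_mean_dose(d0, hx=0.2, hy=0.05):
    """Mean dose given distance d0, via the joint KDE (assumed bandwidths)."""
    grid = np.linspace(dose.min(), dose.max(), 200)
    dens = np.array([kde2d(dist, dose, d0, g, hx, hy) for g in grid])
    return np.sum(grid * dens) / np.sum(dens)

est = conditional_mean_dose(1.0)
print(round(est, 2))                   # close to exp(-1), the noiseless dose at d = 1
```

Integrating such conditional distributions over the feature distribution of a new patient is what yields the predicted DVH in the approach described above.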
Clustering via Kernel Decomposition
DEFF Research Database (Denmark)
Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan
2006-01-01
Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix be created from the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be carried out using standard cross-validation methods, as is demonstrated on a number of diverse data sets.
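A minimal sketch of the spectral side of this idea follows, assuming a Gaussian (Parzen) kernel for the affinity matrix and using the sign of the second leading eigenvector as the cluster indicator in place of the paper's NMF step; data, bandwidth and cluster count are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two well-separated 2-D blobs.
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])

# Affinity matrix from a Gaussian (Parzen) kernel -- the density-estimator view.
h = 1.0
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.exp(-D2 / (2 * h ** 2))

# Symmetrically normalized affinity and its leading eigenvectors.
d = A.sum(1)
L = A / np.sqrt(np.outer(d, d))
vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
emb = vecs[:, -2]                      # second leading eigenvector splits the blobs
labels = (emb > 0).astype(int)

print(np.unique(labels[:30]), np.unique(labels[30:]))  # one label per blob
```

Replacing the sign split with a nonnegative factorization of the decomposed affinity, as the abstract describes, yields soft posterior membership probabilities instead of hard labels.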
Estimation of respiratory heat flows in prediction of heat strain among Taiwanese steel workers.
Chen, Wang-Yi; Juang, Yow-Jer; Hsieh, Jung-Yu; Tsai, Perng-Jy; Chen, Chen-Peng
2017-01-01
The International Organization for Standardization (ISO) 7933 standard provides evaluation of the required sweat rate (RSR) and predicted heat strain (PHS). This study examined and validated the approximations in these models estimating respiratory heat flows (RHFs) via convection (C_res) and evaporation (E_res) for application to Taiwanese foundry workers. The influence of changes in the RHF approximations on the validity of heat strain prediction in these models was also evaluated. The metabolic energy consumption and physiological quantities of these workers performing at different workloads under elevated wet-bulb globe temperature (WBGT; 30.3 ± 2.5 °C) were measured on-site and used in the calculation of RHFs and indices of heat strain. As the results show, the RSR model overestimated C_res for Taiwanese workers by approximately 3% and underestimated E_res by 8%. The C_res approximation in the PHS model closely predicted the convective RHF, while the E_res approximation over-predicted by 11%. Linear regressions provided a better fit for the C_res approximation (R² = 0.96) than for the E_res approximation (R² ≤ 0.85) in both models. The predicted C_res deviated increasingly from the observed value when the WBGT reached 35 °C. The deviations of the RHFs observed for the workers from those predicted using the RSR or PHS models did not significantly alter the heat loss via the skin, as the RHFs were in general less than 5% of the metabolic heat consumption. Validation of these approximations considering the thermo-physiological responses of local workers is necessary for application in scenarios of significant heat exposure.
Kernel regression with functional response
Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe
2011-01-01
We consider the kernel regression estimate when both the response variable and the explanatory variable are functional. The rates of uniform almost-complete convergence are stated as a function of the small ball probability of the predictor and of the entropy of the set on which uniformity is obtained.
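A hedged sketch of a Nadaraya-Watson-type kernel regression estimator with functional (curve-valued) predictor and response, on synthetic curves; the discrete L2 distance between predictors and the bandwidth are illustrative choices, not the authors' framework.

```python
import numpy as np

def nw_functional(X_curves, Y_curves, x_new, h):
    """Nadaraya-Watson estimate when both predictor and response are curves.
    Distance between predictor curves is a discrete L2 norm (assumed choice)."""
    d = np.sqrt(((X_curves - x_new) ** 2).mean(axis=1))   # distances to x_new
    w = np.exp(-0.5 * (d / h) ** 2)                       # Gaussian kernel weights
    return (w[:, None] * Y_curves).sum(0) / w.sum()       # weighted mean curve

# Synthetic functional data: predictor t -> a*sin(t), response t -> a*cos(t).
t = np.linspace(0, np.pi, 50)
rng = np.random.default_rng(3)
a = rng.uniform(0.5, 2.0, 100)
X = a[:, None] * np.sin(t)
Y = a[:, None] * np.cos(t) + rng.normal(0, 0.01, (100, 50))

x_new = 1.2 * np.sin(t)                # unseen predictor curve with amplitude 1.2
y_hat = nw_functional(X, Y, x_new, h=0.1)
print(round(y_hat[0], 1))              # response at t = 0; true amplitude is 1.2
```

The estimator averages whole response curves, weighted by how close each training predictor curve is to the new one, which is the functional analogue of ordinary kernel regression.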
Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...
African Journals Online (AJOL)
Estimated errors of ±0.18 and ±0.2 are envisaged when applying the models for predicting palm kernel and sesame oil colours, respectively. Keywords: Palm kernel, Sesame, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38 ...
Ngan, Henry Y. T.; Yung, Nelson H. C.; Yeh, Anthony G. O.
2015-02-01
This paper presents a comparative study of outlier detection (OD) for large-scale traffic data. Traffic data nowadays are massive in scale and collected every second throughout any modern city. In this research, the traffic flow dynamics were collected from one of the busiest 4-armed junctions in Hong Kong over a 31-day sampling period (764,027 vehicles in total). The traffic flow dynamics are expressed in a high-dimensional spatial-temporal (ST) signal format (i.e. 80 cycles) which has a high degree of similarity within the same signal and across different signals in one direction. A total of 19 traffic directions are identified in this junction, and many ST signals were collected over the 31-day period (874 signals in total). To reduce the dimensionality, the ST signals first undergo a principal component analysis (PCA) and are represented as (x,y)-coordinates. These PCA (x,y)-coordinates are then assumed to be Gaussian distributed. Under this assumption, the data points are evaluated by (a) a correlation study with three variant coefficients, (b) a one-class support vector machine (SVM) and (c) kernel density estimation (KDE). The correlation study could not give any explicit OD result, while the one-class SVM and KDE provide average detection success rates (DSRs) of 59.61% and 95.20%, respectively.
RTOS kernel in portable electrocardiograph
Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.
2011-12-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All digital functions of the medical device are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which a uC/OS-II RTOS can be embedded. The decision to use the kernel is based on its benefits, its license for educational use, and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and for evaluation of the resources used by each process. After this feasibility analysis, the code was migrated from a cyclic structure to one based on separate processes or tasks able to synchronize on events, resulting in an electrocardiograph running on one Central Processing Unit (CPU) under an RTOS.
Estimation of Residential Heat Pump Consumption for Flexibility Market Applications
DEFF Research Database (Denmark)
Kouzelis, Konstantinos; Tan, Zheng-Hua; Bak-Jensen, Birgitte
2015-01-01
Recent technological advancements have facilitated the evolution of traditional distribution grids to smart grids. In a smart grid scenario, flexible devices are expected to aid the system in balancing the electric power in a technically and economically efficient way. To achieve this, the flexible load of a flexible device, namely a Heat Pump (HP), must be estimated out of the aggregated energy consumption of a house. The main idea for accomplishing this is a comparison of the flexible consumer with electrically similar non-flexible consumers. The methodology is based on machine learning techniques, probability theory and statistics. After presenting this methodology, the general trend of the HP consumption is estimated and an hour-ahead forecast is conducted by employing Seasonal Autoregressive Integrated Moving Average modeling. In this manner, the flexible consumption is predicted, establishing the basis for flexibility market applications.
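The SARIMA step would normally be done with a statistics package; as a minimal self-contained stand-in, the sketch below fits a linear model on a lag-1 and a seasonal lag-24 term to synthetic hourly consumption and issues an hour-ahead forecast. The data, lags and seasonality are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic hourly consumption: daily (24 h) seasonality plus noise.
hours = np.arange(24 * 30)
y = 1.0 + 0.5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.05, hours.size)

# Least-squares fit of y[t] on an intercept, y[t-1] and the seasonal lag y[t-24]
# (a crude stand-in for a full SARIMA fit).
Y = y[24:]
X = np.column_stack([np.ones(Y.size), y[23:-1], y[:-24]])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Hour-ahead forecast from the most recent observations.
y_next = beta @ np.array([1.0, y[-1], y[-24]])
truth = 1.0 + 0.5 * np.sin(2 * np.pi * (hours[-1] + 1) / 24)
print(abs(y_next - truth) < 0.2)       # forecast lands close to the clean signal
```

The seasonal lag carries most of the predictive power here, which is exactly why a seasonal ARIMA family is a natural choice for hour-ahead consumption forecasting.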
Estimation of the heat transfer coefficient in melt spinning process
International Nuclear Information System (INIS)
Tkatch, V I; Maksimov, V V; Grishin, A M
2009-01-01
The effect of the quenching wheel velocity in the range 20.7-26.5 m/s on the cooling rate, as well as on the structure and microtopology of the contact surfaces of glass-forming FeNiPB melt-spun ribbons, has been experimentally studied. The values of the cooling rate and of the heat transfer coefficient at the wheel-ribbon interface, estimated from the temperature-time curves recorded during melt spinning runs, are in the ranges (1.6-5.2)×10⁶ K/s and (2.8-5.2)×10⁵ W m⁻² K⁻¹, respectively, for ribbon thicknesses of 31.4-22.0 μm. It was found that the density of the air pockets at the underside surface of the ribbons decreases with wheel velocity while their average depth remains essentially unchanged. Using the surface quality parameters, the values of the heat transfer coefficient in the areas of direct ribbon-wheel contact were evaluated to range from 5.75×10⁵ to 6.65×10⁵ W m⁻² K⁻¹.
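The estimation principle (inferring the interface heat transfer coefficient from a recorded temperature-time curve) can be sketched with a lumped-capacitance idealization of a thin ribbon; the material constants and the true coefficient below are assumed illustrative values in the reported range, not the paper's data.

```python
import numpy as np

# Lumped-capacitance cooling of a thin ribbon against the wheel (an idealization):
# dT/dt = -(h / (rho * c * d)) * (T - T_wheel), so ln(T - T_wheel) decays linearly.
rho, c, d = 7800.0, 550.0, 25e-6       # melt density, heat capacity, 25 um thickness
h_true = 4.0e5                          # W m^-2 K^-1, within the reported range
T_wheel, T0 = 300.0, 1300.0

t = np.linspace(0, 2e-3, 200)          # 2 ms of wheel contact
tau = rho * c * d / h_true
T = T_wheel + (T0 - T_wheel) * np.exp(-t / tau)   # simulated "recorded" curve

# Recover h from the curve by a log-linear fit of ln(T - T_wheel) vs t.
slope = np.polyfit(t, np.log(T - T_wheel), 1)[0]
h_est = -slope * rho * c * d
print(round(h_est / 1e5, 1))           # in units of 1e5 W/m^2/K; true value is 4.0
```

The implied initial cooling rate, (T0 - T_wheel)/tau, is of order 10⁶ K/s, consistent with the range quoted in the abstract.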
Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping
2016-01-01
To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or the kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose a robust kernel covariance operator (robust kernel CO) and a robust kernel cross-covariance operator (robust kern...
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Aleiferis, P.G.; Taylor, A.M.K.P. [Imperial College of Science, Technology and Medicine, London (United Kingdom). Dept. of Mechanical Engineering; Ishii, K. [Honda International Technical School, Saitama (Japan); Urata, Y. [Honda R and D Co., Ltd., Tochigi (Japan). Tochigi R and D Centre
2004-04-01
The potential of lean combustion for the reduction of exhaust emissions and fuel consumption in spark ignition engines has long been established. However, the operating range of lean-burn spark ignition engines is limited by the level of cyclic variability in the early-flame development stage that typically corresponds to the 0-5 per cent mass fraction burned duration. In the current study, the cyclic variations in early flame development were investigated in an optical stratified-charge spark ignition engine at conditions close to stoichiometry [air-to-fuel ratio (A/F) = 15] and close to the lean limit of stable operation (A/F = 22). Flame images were acquired through either a pentroof window ('tumble plane' of view) or the piston crown ('swirl plane' of view) and were processed to calculate the intra-cycle flame-kernel radius evolution. In order to quantify the relative effects of local fuel concentration, gas motion, spark-energy release and heat losses to the electrodes on the flame-kernel growth rate, a zero-dimensional flame-kernel growth model, in conjunction with a one-dimensional spark ignition model, was employed. Comparison of the calculated flame-radius evolutions with the experimental data suggested that a variation in A/F around the spark plug of Δ(A/F) ≈ 4 or, in terms of equivalence ratio φ, a variation of Δφ ≈ 0.15 at most was large enough to account for 100 per cent of the observed cyclic variability in flame-kernel radius. A variation in the residual-gas fraction of about 20 per cent around the mean was found to account for up to 30 per cent of the variability in flame-kernel radius at the timing of 5 per cent mass fraction burned. The individual effect of 20 per cent variations in the 'mean' in-cylinder velocity at the spark plug at ignition timing was found to account for no more than 20 per cent of the measured cyclic variability in flame-kernel radius. An individual effect of
Tracer Testing for Estimating Heat Transfer Area in Fractured Reservoirs
Energy Technology Data Exchange (ETDEWEB)
Pruess, Karsten; van Heel, Ton; Shan, Chao
2004-05-12
A key parameter governing the performance and life-time of a Hot Fractured Rock (HFR) reservoir is the effective heat transfer area between the fracture network and the matrix rock. We report on numerical modeling studies into the feasibility of using tracer tests for estimating heat transfer area. More specifically, we discuss simulation results of a new HFR characterization method which uses surface-sorbing tracers for which the adsorbed tracer mass is proportional to the fracture surface area per unit volume. Sorption in the rock matrix is treated with the conventional formulation in which tracer adsorption is volume-based. A slug of solute tracer migrating along a fracture is subject to diffusion across the fracture walls into the adjacent rock matrix. Such diffusion removes some of the tracer from the fluid in the fractures, reducing and retarding the peak in the breakthrough curve (BTC) of the tracer. After the slug has passed the concentration gradient reverses, causing back-diffusion from the rock matrix into the fracture, and giving rise to a long tail in the BTC of the solute. These effects become stronger for larger fracture-matrix interface area, potentially providing a means for estimating this area. Previous field tests and modeling studies have demonstrated characteristic tailing in BTCs for volatile tracers in vapor-dominated reservoirs. Simulated BTCs for solute tracers in single-phase liquid systems show much weaker tails, as would be expected because diffusivities are much smaller in the aqueous than in the gas phase, by a factor of order 1000. A much stronger signal of fracture-matrix interaction can be obtained when sorbing tracers are used. We have performed simulation studies of surface-sorbing tracers by implementing a model in which the adsorbed tracer mass is assumed proportional to the fracture-matrix surface area per unit volume. The results show that sorbing tracers generate stronger tails in BTCs, corresponding to an effective
Ionkin, I. L.; Ragutkin, A. V.; Luning, B.; Zaichenko, M. N.
2016-06-01
For enhancement of the natural gas utilization efficiency in boilers, condensation heat utilizers of low-potential heat, constructed around a contact heat exchanger, can be applied. A schematic of the contact heat exchanger with a humidifier for preheating and humidifying the air supplied to the boiler for combustion is given. Additional low-potential heat in this scheme is utilized for heating the return delivery water supplied from the heating system. Preheating and humidifying the combustion air make it possible to use the condensation utilizer to heat a heat-transfer agent to a temperature exceeding the dew-point temperature of the water vapor contained in the combustion products. The decision to mount the condensation heat utilizer on the boiler was taken based on a preliminary estimation of the additionally recovered heat. The operating efficiency of the condensation heat utilizer is determined by its structure and by the operating conditions of the boiler and the heating system. Software was developed for the thermal design of the condensation heat utilizer equipped with the humidifier. Computational investigations of its operation were carried out as a function of various operating parameters of the boiler and the heating system (temperature of the return delivery water and of the flue gases, air excess, air temperature at the inlet and outlet of the condensation heat utilizer, heating and humidifying of the air in the humidifier, and portion of the circulating water). The heat recuperation efficiency is estimated for various operating conditions of the boiler and the condensation heat utilizer. Recommendations on the most effective application of the condensation heat utilizer are developed.
International Nuclear Information System (INIS)
Massaro, F.; Funk, S.; D'Abrusco, R.; Paggi, A.; Smith, Howard A.; Masetti, N.; Giroletti, M.; Tosti, G.
2013-01-01
Nearly one-third of the γ-ray sources detected by Fermi are still unidentified, despite significant recent progress in this area. However, all of the γ-ray extragalactic sources associated in the second Fermi-LAT catalog have a radio counterpart. Motivated by this observational evidence, we investigate all the radio sources of the major radio surveys that lie within the positional uncertainty region of the unidentified γ-ray sources (UGSs) at a 95% level of confidence. First, we search for their infrared counterparts in the all-sky survey performed by the Wide-field Infrared Survey Explorer (WISE) and then we analyze their IR colors in comparison with those of the known γ-ray blazars. We propose a new approach, on the basis of a two-dimensional kernel density estimation technique in the single [3.4] – [4.6] – [12] μm WISE color-color plot, replacing the constraint imposed in our previous investigations on the detection at 22 μm of each potential IR counterpart of the UGSs with associated radio emission. The main goal of this analysis is to find distant γ-ray blazar candidates that, being too faint at 22 μm, are not detected by WISE and thus are not selected by our purely IR-based methods. We find 55 UGSs that likely correspond to radio sources with blazar-like IR signatures. An additional 11 UGSs that have blazar-like IR colors have been found within the sample of sources found with deep recent Australia Telescope Compact Array observations
Heat flux estimation for neutral beam line components using inverse heat conduction procedures
International Nuclear Information System (INIS)
Bharathi, P.; Prahlad, V.; Quereshi, K.; Bansal, L.K.; Rambabu, S.; Sharma, S.K.; Parmar, S.; Patel, P.J.; Baruah, U.K.; Patel, Ravi
2015-01-01
In this work, we describe and compare analytical IHCP (inverse heat conduction problem) methods, such as the semi-infinite method and the finite slab method, and a numerical method called the Stolz method, for estimating the incident heat flux from experimentally measured temperature data. In the case of the analytical methods, the finite time response of the sensor needs to be accounted for to obtain accurate power density estimations. Modified models corrected for the response time of the sensors are also discussed in this paper. The application of these methods to example temperature waveforms obtained on the SST1-NBI test stand is presented and discussed. For choosing the suitable method for calorimetry on beam line components, the estimated results are also validated against an ANSYS analysis of these beam line components. In conclusion, the finite slab method corrected for the influence of the sensor response time was found to be the most suitable method for the inversion of temperature data in the case of neutral beam line components.
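As a sketch of the semi-infinite analytical method mentioned above, the Cook-Felderman discretization (one standard semi-infinite IHCP formula, not necessarily the authors' exact variant) recovers the surface heat flux from a surface-temperature history; the material properties and flux level below are assumed for illustration.

```python
import numpy as np

def cook_felderman(times, temps, k, rho, c):
    """Surface heat flux from a surface-temperature history on a semi-infinite
    solid (Cook-Felderman discretization of the Duhamel integral)."""
    coef = 2.0 * np.sqrt(k * rho * c / np.pi)
    q = np.zeros_like(times)
    for n in range(1, len(times)):
        dT = temps[1:n + 1] - temps[:n]
        denom = np.sqrt(times[n] - times[:n]) + np.sqrt(times[n] - times[1:n + 1])
        q[n] = coef * np.sum(dT / denom)
    return q

# Verify on a known case: constant flux q0 gives T_s = T0 + 2*q0*sqrt(t/(pi*k*rho*c)).
k, rho, c = 390.0, 8900.0, 385.0       # copper-like calorimeter block (assumed)
q0 = 1.0e6                              # 1 MW/m^2 incident flux
t = np.linspace(0, 1.0, 2001)
T = 300.0 + 2 * q0 * np.sqrt(t / (np.pi * k * rho * c))

q = cook_felderman(t, T, k, rho, c)
print(round(q[-1] / 1e6, 2))           # recovers roughly 1.0 MW/m^2
```

In practice the measured temperature would additionally be deconvolved for the sensor response time before inversion, which is the correction the abstract identifies as essential.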
Settar, Abdelhakim; Abboudi, Saïd; Madani, Brahim; Nebbali, Rachid
2018-02-01
Due to the endothermic nature of the steam methane reforming reaction, the process is often limited by the heat transfer behavior in the reactors. Poor thermal behavior sometimes leads to slow reaction kinetics, characterized by the presence of cold spots in the catalytic zones. Within this framework, the present work consists of a numerical investigation, in conjunction with an experimental one, of the one-dimensional heat transfer phenomenon during the heat supply of a catalytic-wall reactor designed for hydrogen production. The studied reactor is inserted in an electric furnace where the heat requirement of the endothermic reaction is supplied by an electric heating system. During the heat supply, an unknown heat flux density, received by the reactive flow, is estimated using inverse methods. On the basis of the catalytic-wall reactor model, an experimental setup is engineered in situ to measure the temperature distribution. Thereafter, the measurements are injected into the numerical heat flux estimation procedure, which is based on the Function Specification Method (FSM). The measured and estimated temperatures are compared, and the heat flux density crossing the reactor wall is determined.
Directory of Open Access Journals (Sweden)
Adrien Rieux
Given its biological significance, determining the dispersal kernel (i.e., the distribution of dispersal distances) of spore-producing pathogens is essential. Here, we report two field experiments designed to measure disease gradients caused by sexually- and asexually-produced spores of the wind-dispersed banana plant fungus Mycosphaerella fijiensis. Gradients were measured during a single generation and over 272 traps installed up to 1000 m along eight directions radiating from a traceable source of inoculum composed of fungicide-resistant strains. We adjusted several kernels differing in the shape of their tail and tested for two types of anisotropy. Contrasting dispersal kernels were observed between the two types of spores. For sexual spores (ascospores), we characterized both a steep gradient in the first few metres in all directions and rare long-distance dispersal (LDD) events up to 1000 m from the source in two directions. A heavy-tailed kernel best fitted the disease gradient. Although ascospores were distributed evenly in all directions, the average dispersal distance was greater in two directions, without obvious correlation with wind patterns. For asexual spores (conidia), few dispersal events occurred outside of the source plot. A gradient up to 12.5 m from the source was observed in one direction only. Accordingly, a thin-tailed kernel best fitted the disease gradient, and anisotropy in both density and distance was correlated with averaged daily wind gusts. We discuss the validity of our results as well as their implications in terms of disease diffusion and management strategy.
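Choosing between thin- and heavy-tailed kernels can be illustrated by a maximum-likelihood comparison on synthetic distances; the exponential and power-law forms below are generic stand-ins for the kernel families actually fitted in the study, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic dispersal distances with rare long-distance events (heavy tail);
# numpy's pareto() draws Lomax samples, so +1 gives classical Pareto on [1, inf).
d = rng.pareto(2.5, 500) + 1.0

def loglik_exponential(d):
    """Thin tail: f(x) = lam * exp(-lam*(x-1)) on x >= 1; MLE lam = 1/mean(x-1)."""
    lam = 1.0 / np.mean(d - 1.0)
    return np.sum(np.log(lam) - lam * (d - 1.0))

def loglik_pareto(d):
    """Heavy (power-law) tail: f(x) = a * x^-(a+1) on x >= 1; MLE a = n/sum(ln x)."""
    a = d.size / np.sum(np.log(d))
    return np.sum(np.log(a) - (a + 1) * np.log(d))

print(loglik_pareto(d) > loglik_exponential(d))  # heavy tail fits these data better
```

With field data the same comparison would typically be made through AIC or likelihood-ratio criteria, since rare long-distance events dominate which tail shape wins.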
Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan
2018-05-01
With the aim of enhancing the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) and thereby modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran, but less 2-furanmethanol and 2-furanmethanol acetate, were found in treated PKO; the correlation between their formation and the simple sugar profile was estimated using partial least squares regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from the control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Duff, I.
1994-12-31
This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: `Current status of user level sparse BLAS`; `Current status of the sparse BLAS toolkit`; and `Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit`.
International Nuclear Information System (INIS)
Lee, Haw-Long; Chang, Win-Jin; Chen, Wen-Lih; Yang, Yu-Ching
2012-01-01
Highlights: ► Time-dependent base heat flux of a functionally graded fin is inversely estimated. ► An inverse algorithm based on the conjugate gradient method and the discrepancy principle is applied. ► The distributions of temperature in the fin are determined as well. ► The influence of measurement errors and measurement locations upon the precision of the estimated results is also investigated. - Abstract: In this study, an inverse algorithm based on the conjugate gradient method and the discrepancy principle is applied to estimate the unknown time-dependent base heat flux of a functionally graded fin from the knowledge of temperature measurements taken within the fin. Subsequently, the distributions of temperature in the fin can be determined as well. It is assumed that no prior information is available on the functional form of the unknown base heat flux; hence, the procedure is classified as function estimation in inverse calculation. The temperature data obtained from the direct problem are used to simulate the temperature measurements. The influence of measurement errors and measurement locations upon the precision of the estimated results is also investigated. Results show that an excellent estimate of the time-dependent base heat flux and the temperature distributions can be obtained for the test case considered in this study.
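A minimal sketch of the estimation idea, under simplifying assumptions (a linear direct model with a made-up sensitivity matrix, not the fin model of the paper): conjugate gradient iterations on the least-squares problem, stopped by the discrepancy principle so that the iteration count itself acts as the regularization.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
# Illustrative lower-triangular sensitivity matrix: each measured temperature
# responds to all earlier heat-flux components with exponentially fading weight.
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.where(i >= j, np.exp(-0.15 * (i - j)), 0.0)

q_true = np.sin(np.linspace(0.0, np.pi, n))       # the "unknown" base heat flux
sigma = 0.01
T_meas = A @ q_true + sigma * rng.standard_normal(n)

def cg_inverse(A, T, noise_norm, max_iter=200):
    """Conjugate gradient on the normal equations (CGNR); the discrepancy
    principle stops iterating once ||A q - T|| falls to the noise level."""
    q = np.zeros(A.shape[1])
    z = A.T @ (T - A @ q)
    p = z.copy()
    for _ in range(max_iter):
        if np.linalg.norm(A @ q - T) <= noise_norm:
            break
        Ap = A @ p
        step = (z @ z) / (Ap @ Ap)
        q = q + step * p
        z_new = A.T @ (T - A @ q)
        p = z_new + ((z_new @ z_new) / (z @ z)) * p
        z = z_new
    return q

q_est = cg_inverse(A, T_meas, noise_norm=sigma * np.sqrt(n))
print(f"relative error: {np.linalg.norm(q_est - q_true) / np.linalg.norm(q_true):.3f}")
```

Stopping at the noise level rather than iterating to convergence is what keeps the ill-posed inversion from amplifying measurement noise.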
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
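The TL1 kernel itself is simple to state: k(x, y) = max(ρ − ‖x − y‖₁, 0). Below is a small sketch with a hand-picked truncation parameter ρ and a ridge-regularized kernel machine standing in for the classifiers studied in the brief (the ridge term compensates for the kernel not being positive semidefinite).

```python
import numpy as np

def tl1_kernel(X, Y, rho):
    """Truncated L1-distance kernel: k(x, y) = max(rho - ||x - y||_1, 0).
    It vanishes beyond L1 distance rho, so each point only 'sees' a local
    neighbourhood, and the kernel is piecewise linear there."""
    D = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)
    return np.maximum(rho - D, 0.0)

# XOR-style data: not linearly separable globally, but linear in subregions.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1.0, 1.0, 1.0, -1.0])

rho = 1.5   # assumption: a hand-picked truncation radius
lam = 1e-3  # small ridge term, since the TL1 kernel is not PSD
K = tl1_kernel(X, X, rho)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

pred = np.sign(K @ alpha)  # recovers the XOR labels on the training points
print(pred)
```

Note that K[0, 3] is exactly zero: the two diagonally opposite corners are farther apart than ρ, so they do not interact, which is the locality property the brief emphasizes.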
Chrysoulakis, Nektarios; Marconcini, Mattia; Gastellu-Etchegorry, Jean-Philippe; Grimmond, Sue; Feigenwinter, Christian; Lindberg, Fredrik; Del Frate, Fabio; Klostermann, Judith; Mitraka, Zina; Esch, Thomas; Landier, Lucas; Gabey, Andy; Parlow, Eberhard; Olofson, Frans
2017-04-01
The H2020-Space project URBANFLUXES (URBan ANthropogenic heat FLUX from Earth observation Satellites) investigates the potential of Copernicus Sentinels to retrieve the anthropogenic heat flux, a key component of the Urban Energy Budget (UEB). URBANFLUXES advances the current knowledge of the impacts of UEB fluxes on the urban heat island and consequently on energy consumption in cities. In URBANFLUXES, the anthropogenic heat flux is estimated as a residual of the UEB. Therefore, the remaining UEB components, namely the net all-wave radiation, the net change in heat storage, and the turbulent sensible and latent heat fluxes, are independently estimated from Earth Observation (EO), whereas the advection term is included in the error of the anthropogenic heat flux estimation from the UEB closure. The Discrete Anisotropic Radiative Transfer (DART) model is employed to improve the estimation of the net all-wave radiation balance, whereas the Element Surface Temperature Method (ESTM), adjusted to satellite observations, is used to improve the estimation of the net change in heat storage. Furthermore, the estimation of the turbulent sensible and latent heat fluxes is based on the Aerodynamic Resistance Method (ARM). Based on these outcomes, QF is estimated by regressing the sum of the turbulent heat fluxes against the available energy. In-situ flux measurements are used to evaluate URBANFLUXES outcomes, and uncertainties are quantified and analyzed. URBANFLUXES is expected to prepare the ground for further innovative exploitation of EO in scientific activities (climate variability studies at local and regional scales) and in future and emerging applications (sustainable urban planning, mitigation technologies) to benefit climate change mitigation/adaptation. This study presents the results of the second phase of the project; detailed information on URBANFLUXES is available at: http://urbanfluxes.eu
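The residual computation at the core of this approach is one line of arithmetic. A sketch with illustrative flux values (not URBANFLUXES data): Q* is the net all-wave radiation, QH and QE the turbulent sensible and latent heat fluxes, and ΔQS the net change in heat storage.

```python
def anthropogenic_heat_flux(q_star, q_h, q_e, delta_q_s):
    """Anthropogenic heat flux QF as the residual of the urban energy balance
    Q* + QF = QH + QE + dQS (advection folded into the residual's error term)."""
    return q_h + q_e + delta_q_s - q_star

# Illustrative midday values in W m^-2 (assumed, for demonstration only):
qf = anthropogenic_heat_flux(q_star=450.0, q_h=250.0, q_e=120.0, delta_q_s=110.0)
print(qf)  # 30.0 W m^-2
```

Because QF is a small difference of large, independently estimated terms, its error budget is dominated by the uncertainties of the other components, which is why the project treats uncertainty analysis as central.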
An Estimate of Chromospheric Heating by Acoustic Waves
Czech Academy of Sciences Publication Activity Database
Sobotka, Michal; Švanda, Michal; Jurčák, Jan; Heinzel, Petr; Del Moro, D.; Berrilli, F.
2014-01-01
Roč. 38, č. 1 (2014), s. 53-58 ISSN 1845-8319 R&D Projects: GA ČR(CZ) GA14-04338S; GA ČR GPP209/12/P568; GA ČR GAP209/12/0287 Institutional support: RVO:67985815 Keywords : Sun * chromosphere * heating Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics
Gärtner, Thomas
2009-01-01
This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by
International Nuclear Information System (INIS)
Chen, W.-L.; Yang, Y.-C.; Chang, W.-J.; Lee, H.-L.
2008-01-01
In this study, a conjugate gradient method based inverse algorithm is applied to estimate the unknown space and time dependent heat transfer rate on the external wall of a pipe system using temperature measurements. It is assumed that no prior information is available on the functional form of the unknown heat transfer rate; hence, the procedure is classified as function estimation in the inverse calculation. The accuracy of the inverse analysis is examined by using simulated exact and inexact temperature measurements. Results show that an excellent estimation of the space and time dependent heat transfer rate can be obtained for the test case considered in this study.
Katunin, A.
2018-03-01
The critical self-heating temperature at which the structural degradation of polymer composites under cyclic loading begins is evaluated by analyzing the heat dissipation rate. The method proposed is an effective tool for evaluating the degradation degree of such structures.
Locally linear approximation for Kernel methods : the Railway Kernel
Muñoz, Alberto; González, Javier
2008-01-01
In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general-purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...
Motai, Yuichi
2015-01-01
Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include
DEFF Research Database (Denmark)
Petersen, Annette
of kernels promoted (10 and 60 kernels/day for the general population and cancer patients, respectively), exposures exceeded the ARfD 17–413 and 3–71 times in toddlers and adults, respectively. The estimated maximum quantity of apricot kernels (or raw apricot material) that can be consumed without exceeding...
Pizzo, Michelle; Daryabeigi, Kamran; Glass, David
2015-01-01
The ability to solve the heat conduction equation is needed when designing materials to be used on vehicles exposed to extremely high temperatures, e.g., vehicles used for atmospheric entry or hypersonic flight. When using test and flight data, computational methods such as finite difference schemes may be used to solve both the direct heat conduction problem, i.e., solving between internal temperature measurements, and the inverse heat conduction problem, i.e., using the direct solution to march forward in space to the surface of the material to estimate both surface temperature and heat flux. The completed research first discusses the methods used in developing a computational code to solve both the direct and inverse heat transfer problems, using one-dimensional, centered, implicit finite volume schemes and one-dimensional, centered, explicit space marching techniques. The developed code assumed the boundary conditions to be specified time-varying temperatures and also considered temperature-dependent thermal properties. The completed research then discusses the results of analyzing temperature data measured while radiantly heating a carbon/carbon specimen up to 1920 °F. The temperature was measured using thermocouple (TC) plugs (small carbon/carbon material specimens) with four embedded TC plugs inserted into the larger carbon/carbon specimen. The purpose of analyzing the test data was to estimate the surface heat flux and temperature values from the internal temperature measurements using direct and inverse heat transfer methods, thus aiding in the thermal and structural design and analysis of high temperature vehicles.
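A minimal sketch of the direct problem described above: one backward-Euler (implicit) finite-volume step of the 1-D heat equation with prescribed boundary temperatures. The grid size, time step, and diffusivity are illustrative, and constant thermal properties are assumed for brevity (the study used temperature-dependent ones).

```python
import numpy as np

def step_implicit(T, alpha, dx, dt, T_left, T_right):
    """One backward-Euler step of dT/dt = alpha * d2T/dx2 on interior nodes,
    with prescribed (possibly time-varying) boundary temperatures."""
    n = len(T)
    r = alpha * dt / dx**2
    A = np.zeros((n, n))
    b = T.copy()
    for i in range(n):
        if i in (0, n - 1):
            A[i, i] = 1.0                       # Dirichlet boundary rows
            b[i] = T_left if i == 0 else T_right
        else:
            A[i, i - 1] = -r                    # implicit tridiagonal stencil
            A[i, i] = 1.0 + 2.0 * r
            A[i, i + 1] = -r
    return np.linalg.solve(A, b)

# Slab initially at 300 K, hot face stepped to 500 K: heat diffuses inward.
T = np.full(21, 300.0)
for _ in range(200):
    T = step_implicit(T, alpha=1e-5, dx=0.005, dt=0.5, T_left=500.0, T_right=300.0)
print(T[1], T[10])
```

The implicit scheme is unconditionally stable, which matters for the inverse use case: the direct solution must stay well-behaved while the space-marching step amplifies measurement noise.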
Energy Technology Data Exchange (ETDEWEB)
Boehlke, S.; Niegoth, H. [STEAG Energy Services GmbH, Essen (Germany). Nuclear Technologies; Stalder, I. [Kernkraftwerk Leibstadt AG, Leibstadt (Switzerland)
2012-11-01
In the nuclear power plant Leibstadt (KKL), large components will be dismantled during the next year and stored for final disposal within the interim storage facility ZENT at the NPP site. Before construction of ZENT, appropriate estimations of the local dose rate inside and outside the building and of the collective dose for normal operation have to be performed. The shielding calculations are based on the properties of the stored components and radiation sources and on the concepts for working place requirements. The installation of control and monitoring areas will depend on these calculations. For the determination of the shielding potential of concrete walls and steel doors with the defined boundary conditions, point-kernel codes like MicroShield® are used. Complex problems cannot be modeled with this code. Therefore, the point-kernel code VISIPLAN® was developed for the determination of the local dose distribution functions in 3D models. The possibility of motion sequence inputs allows an optimization of collective dose estimations for the operational phases of a nuclear facility.
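The point-kernel idea underlying such codes can be sketched in a few lines: the uncollided flux from a point source falls off as 1/(4πr²), is attenuated exponentially through the shield, and is then scaled by a buildup factor and a flux-to-dose constant. All numbers below, including the attenuation coefficient and the conversion factor, are hypothetical placeholders, not values from any real shielding code.

```python
import math

def point_kernel_dose_rate(source_gamma_s, mu_cm, thickness_cm, r_cm,
                           buildup=1.0, flux_to_dose=1.0e-6):
    """Point-kernel estimate: uncollided flux through a slab shield,
    phi = S * exp(-mu * t) / (4 * pi * r^2), times a buildup factor B and a
    (hypothetical) flux-to-dose conversion constant."""
    flux = source_gamma_s * math.exp(-mu_cm * thickness_cm) / (4.0 * math.pi * r_cm**2)
    return buildup * flux_to_dose * flux

# Effect of adding 10 cm of shielding (mu = 0.15 /cm assumed) at 2 m distance:
d_bare = point_kernel_dose_rate(1e12, 0.15, 0.0, 200.0)
d_shielded = point_kernel_dose_rate(1e12, 0.15, 10.0, 200.0, buildup=2.0)
print(f"attenuation factor: {d_shielded / d_bare:.3f}")
```

Real codes sum such kernels over a discretized volume source and use tabulated buildup factors per material and energy; the sketch shows only the single-kernel building block.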
Methodology for estimation of time-dependent surface heat flux due to cryogen spray cooling.
Tunnell, James W; Torres, Jorge H; Anvari, Bahman
2002-01-01
Cryogen spray cooling (CSC) is an effective technique to protect the epidermis during cutaneous laser therapies. Spraying a cryogen onto the skin surface creates a time-varying heat flux, effectively cooling the skin during and following the cryogen spurt. In previous studies mathematical models were developed to predict the human skin temperature profiles during the cryogen spraying time. However, no studies have accounted for the additional cooling due to residual cryogen left on the skin surface following the spurt termination. We formulate and solve an inverse heat conduction (IHC) problem to predict the time-varying surface heat flux both during and following a cryogen spurt. The IHC formulation uses measured temperature profiles from within a medium to estimate the surface heat flux. We implement a one-dimensional sequential function specification method (SFSM) to estimate the surface heat flux from internal temperatures measured within an in vitro model in response to a cryogen spurt. Solution accuracy and experimental errors are examined using simulated temperature data. Heat flux following spurt termination appears substantial; however, it is less than that during the spraying time. The estimated time-varying heat flux can subsequently be used in forward heat conduction models to estimate temperature profiles in skin during and following a cryogen spurt and predict appropriate timing for onset of the laser pulse.
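A compact sketch of the sequential function specification idea (Beck's method), assuming a linear conduction model given by a made-up pulse-sensitivity sequence rather than a skin or cryogen model: at each time step the flux is temporarily assumed constant over the next r samples, the value that best fits those r measurements is kept, and the window slides forward one step.

```python
import numpy as np

def sfsm_estimate(Y, dphi, r):
    """Sequential function specification method for a linear model
    T_m = sum_k q_k * dphi[m-k]: estimate the flux one step at a time,
    using r future measurements for temporary regularization."""
    n = len(Y)
    phi = np.cumsum(dphi)  # step-response sensitivity
    q = np.zeros(n)
    for m in range(n - r + 1):
        num = den = 0.0
        for i in range(1, r + 1):
            # temperature at step m+i-1 due to the already-estimated flux history
            t_known = sum(q[k] * dphi[m + i - 1 - k] for k in range(m))
            num += (Y[m + i - 1] - t_known) * phi[i - 1]
            den += phi[i - 1] ** 2
        q[m] = num / den
    return q

rng = np.random.default_rng(1)
dphi = 0.05 * np.exp(-0.2 * np.arange(40))        # assumed pulse sensitivity
q_true = np.where(np.arange(40) < 20, 1.0, 0.3)   # flux with a step change
Y = np.array([sum(q_true[k] * dphi[m - k] for k in range(m + 1)) for m in range(40)])
Y += 0.001 * rng.standard_normal(40)              # measurement noise

q_est = sfsm_estimate(Y, dphi, r=3)
```

Larger r smooths the estimate and damps noise at the cost of blurring sharp changes in the flux, such as the jump at spurt termination.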
Estimation of shutdown heat generation rates in GHARR-1 due to ...
African Journals Online (AJOL)
Fission products decay power and residual fission power generated after shutdown of Ghana Research Reactor-1 (GHARR-1) by reactivity insertion accident were estimated by solution of the decay and residual heat equations. A Matlab program code was developed to simulate the heat generation rates by fission product ...
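For a rough sense of the magnitudes involved, fission-product decay power after shutdown is often approximated with a Way–Wigner-type formula. This is a generic textbook approximation, not the GHARR-1 model of the paper, and the 0.066 coefficient should be treated as approximate.

```python
def decay_heat_fraction(t_s, t_op_s):
    """Way-Wigner type approximation for fission-product decay power as a
    fraction of operating power: P/P0 ~= 0.066 * (t**-0.2 - (t + t_op)**-0.2),
    with t seconds since shutdown and t_op seconds of prior operation."""
    return 0.066 * (t_s ** -0.2 - (t_s + t_op_s) ** -0.2)

# After roughly a year of operation, decay power one hour after shutdown:
frac = decay_heat_fraction(3600.0, 3.15e7)
print(f"{100 * frac:.2f}% of full power")
```

The ~1% figure one hour after shutdown illustrates why decay heat removal remains a safety concern long after the chain reaction stops.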
Best estimate radiation heat transfer model developed for TRAC-BD1
International Nuclear Information System (INIS)
Spore, J.W.; Giles, M.M.; Shumway, R.W.
1981-01-01
A best estimate radiation heat transfer model for analysis of BWR fuel bundles has been developed and compared with 8 x 8 fuel bundle data. The model includes surface-to-surface and surface-to-two-phase fluid radiation heat transfer. A simple method of correcting for anisotropic reflection effects has been included in the model.
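The surface-to-surface part of such a model reduces, in the simplest two-surface gray-body case, to a one-line flux expression. A sketch with illustrative temperatures and emissivities (not TRAC-BD1 values):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def parallel_plate_radiation(T1, T2, eps1, eps2):
    """Net surface-to-surface radiative flux between two large parallel gray
    surfaces: q = sigma * (T1^4 - T2^4) / (1/eps1 + 1/eps2 - 1)."""
    return SIGMA * (T1**4 - T2**4) / (1.0 / eps1 + 1.0 / eps2 - 1.0)

# Hot rod surface facing a cooler neighbour (illustrative values):
q = parallel_plate_radiation(1100.0, 800.0, 0.8, 0.8)
print(f"{q:.0f} W/m^2")
```

A bundle model generalizes this to many surfaces via view factors and adds absorption and emission by the two-phase fluid between them.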
Geothermal Heat Flux Underneath Ice Sheets Estimated From Magnetic Satellite Data
DEFF Research Database (Denmark)
Fox Maule, Cathrine; Purucker, M.E.; Olsen, Nils
The geothermal heat flux is an important factor in the dynamics of ice sheets, and it is one of the important parameters in the thermal budgets of subglacial lakes. We have used satellite magnetic data to estimate the geothermal heat flux underneath the ice sheets in Antarctica and Greenland...
On estimation of reliability for pipe lines of heat power plants under cyclic loading
International Nuclear Information System (INIS)
Verezemskij, V.G.
1986-01-01
One of the possible methods to obtain a quantitative estimate of the reliability of pipe lines of welded heat power plants under cyclic loading, due to heating-cooling and due to vibration, is considered. The reliability estimate is carried out for a common case of loading by simultaneous cycles with different amplitudes and loading asymmetry. It is shown that scatter in the number of cycles to failure for the weld metal may perceptibly decrease the reliability of the welded pipe line.
Estimation of pressure drop in gasket plate heat exchangers
Directory of Open Access Journals (Sweden)
Neagu Anisoara Arleziana
2016-06-01
In this paper, we compare different methods of pressure drop calculation for gasket plate heat exchangers (PHEs), applying correlations recommended in the literature to industrial data collected from a vegetable oil refinery. The goal of this study was to compare the results obtained with these correlations, in order to choose one or two for the practical purpose of pumping power calculations. We concluded that the pressure drop values calculated with the Mulley relationship and the Buonopane & Troupe correlation were close, and that Bond’s equation gave results fairly close to these, although it slightly underestimates the pressure drop. The Kumar correlation gave results far from all the others, and its application will lead to oversizing. In conclusion, for further calculations we will choose either the Mulley relationship or the Buonopane & Troupe correlation.
Nonlinear Forecasting With Many Predictors Using Kernel Ridge Regression
DEFF Research Database (Denmark)
Exterkate, Peter; Groenen, Patrick J.F.; Heij, Christiaan
This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predi...
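A minimal kernel ridge regression sketch along the lines described, with an RBF feature map and illustrative hyperparameters (the paper's specific kernel choice and tuning are not reproduced here):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) kernel: implicitly maps predictors into a
    high-dimensional feature space."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def krr_fit_predict(X, y, X_new, gamma=1.0, lam=1e-2):
    """Kernel ridge regression: alpha = (K + lam*I)^-1 y, then
    f(x) = k(x, X) @ alpha. The ridge penalty lam shrinks the fit."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return rbf_kernel(X_new, X, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))   # a many-predictor setting in miniature
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.standard_normal(200)
y_hat = krr_fit_predict(X, y, X, gamma=1.0, lam=1e-2)
print(f"in-sample RMSE: {np.sqrt(np.mean((y_hat - y) ** 2)):.3f}")
```

The appeal for forecasting with many predictors is that the computation scales with the number of observations (the n-by-n kernel matrix), not with the dimension of the implicit feature space.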
International Nuclear Information System (INIS)
Statham, B.A.
2009-01-01
RELAP5/SCDAPSIM MOD 3.4 is used to predict wall temperature before and after critical heat flux (CHF) is reached in a vertical, uniformly heated tube using light water as the working fluid. The heated test section is modeled as a 1 m long Inconel 600 tube having an OD of 6.35 mm and an ID of 4.57 mm, with a 0.5 m long unheated development length at the inlet. Simulations are performed at pressures of 0.5 to 2.0 MPa with mass fluxes from 500 to 2000 kg·m⁻²·s⁻¹ and inlet qualities ranging from -0.2 to 0. Loss-of-flow simulations are performed with flow reduction rates of 10, 20, 50, and 100 kg·m⁻²·s⁻². Inlet mass flux at CHF was nominally independent of rate in the model; this may or may not be realistic. (author)
Empirically Estimated Heats of Combustion of Oxygenated Hydrocarbon Bio-type Oils
Directory of Open Access Journals (Sweden)
Dmitry A. Ponomarev
2015-04-01
An empirical method is proposed by which the heats of combustion of oxygenated hydrocarbon oils, typically obtained from wood pyrolysis, may be calculated additively from empirically predicted heats of combustion of individual compounds. The predicted values are in turn based on four types of energetically inequivalent carbon and four types of energetically inequivalent hydrogen atomic energy values. A method is also given to estimate the condensation heats of oil mixtures based on the presence of four types of intermolecular forces. Agreement between predicted and experimental values of combustion heats for a typical mixture of known compounds was ±2% and < 1% for a freshly prepared mixture of known compounds.
Zhou, Yuyu; Weng, Qihao; Gurney, Kevin R.; Shuai, Yanmin; Hu, Xuefei
2012-01-01
This paper examined the relationship between remotely sensed anthropogenic heat discharge and energy use from residential and commercial buildings across multiple scales in the city of Indianapolis, Indiana, USA. The anthropogenic heat discharge was estimated with a remote sensing-based surface energy balance model, which was parameterized using land cover, land surface temperature, albedo, and meteorological data. The building energy use was estimated using a GIS-based building energy simulation model in conjunction with Department of Energy/Energy Information Administration survey data, the Assessor's parcel data, GIS floor areas data, and remote sensing-derived building height data. The spatial patterns of anthropogenic heat discharge and energy use from residential and commercial buildings were analyzed and compared. Quantitative relationships were evaluated across multiple scales from pixel aggregation to census block. The results indicate that anthropogenic heat discharge is consistent with building energy use in terms of the spatial pattern, and that building energy use accounts for a significant fraction of anthropogenic heat discharge. The research also implies that the relationship between anthropogenic heat discharge and building energy use is scale-dependent. The simultaneous estimation of anthropogenic heat discharge and building energy use via two independent methods improves the understanding of the surface energy balance in an urban landscape. The anthropogenic heat discharge derived from remote sensing and meteorological data may be able to serve as a spatial distribution proxy for spatially-resolved building energy use, and even for fossil-fuel CO2 emissions if additional factors are considered.
Cost estimation of hydrogen and DME produced by nuclear heat utilization system. Joint research
International Nuclear Information System (INIS)
Shiina, Yasuaki; Nishihara, Tetsuo
2003-09-01
Research on hydrogen energy has been performed in order to spread the use of hydrogen energy in 2020 or 2030. It will, however, take many years for hydrogen energy to become as easy to use as gasoline, diesel oil and city gas are in all countries. During this period, liquid fuels with low CO2 release would be used together with hydrogen. Recently, dimethyl ether (DME) has attracted attention as one of the substitute liquid fuels for petroleum. Such liquid fuels can be produced from mixed gases, such as hydrogen and carbon oxide, which are produced by a steam reforming hydrogen generation system using nuclear heat. Therefore, the system would be one of the candidates for a future nuclear heat utilization system. In the present study, we focused on the production of hydrogen and DME. An economic evaluation was performed for hydrogen and DME production in commercial and nuclear heat utilization plants. First, the heat and mass balance of each process in a commercial hydrogen production plant was estimated, and the commercial prices of each process were derived. Then, the price was estimated for the case where nuclear heat was used instead of the heat required by the commercial plant. Results showed that the production prices with nuclear heat were cheaper by 10% for hydrogen and 3% for DME. With consideration of the reduction effect on CO2 release, utilization of nuclear heat would be even more effective. (author)
International Nuclear Information System (INIS)
Azimi, A.; Hannani, S.K.; Farhanieh, B.
2005-01-01
In this article, a comparison between two iterative inverse techniques to simultaneously solve for two unknown functions in axisymmetric transient inverse heat conduction problems in semi-complex geometries is presented. A multi-block structured grid together with blocked-interface nodes is implemented for geometric decomposition of the physical domain. The numerical scheme for the solution of the transient heat conduction equation is the finite element method, with a frontal technique to solve the algebraic system of discrete equations. The inverse heat conduction problem involves simultaneous estimation of an unknown time-varying heat generation and a time-space varying boundary condition. Two parameter-estimation techniques are considered: the Levenberg-Marquardt scheme and the conjugate gradient method with an adjoint problem. Numerically computed exact and noisy data are used for the measured transient temperature data needed in the inverse solution. The results of the present study for a configuration including two joined disks with different heights are compared to those of the exact heat source and temperature boundary condition, and show good agreement. (author)
Estimating end-use emissions factors for policy analysis: the case of space cooling and heating.
Jacobsen, Grant D
2014-06-17
This paper provides the first estimates of end-use specific emissions factors, which are estimates of the amount of a pollutant that is emitted when a unit of electricity is generated to meet demand from a specific end-use. In particular, this paper provides estimates of emissions factors for space cooling and heating, which are two of the most significant end-uses. The analysis is based on a novel two-stage regression framework that estimates emissions factors specific to cooling or heating by exploiting variation in cooling and heating demand induced by weather variation. Heating is associated with a similar or greater CO2 emissions factor than cooling in all regions. The difference is greatest in the Midwest and Northeast, where the estimated CO2 emissions factor for heating is more than 20% larger than the emissions factor for cooling. The minor differences in emissions factors in other regions, combined with the substantial difference in the demand patterns for cooling and heating, suggest that the use of overall regional emissions factors is reasonable for policy evaluations in certain locations. Accurately quantifying the emissions factors associated with different end-uses across regions will aid in designing improved energy and environmental policies.
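The two-stage logic can be sketched on synthetic data: first regress load on weather-driven demand proxies to get MWh per degree, then regress emissions on the same proxies to get tCO2 per degree; the ratio of the two slopes is an end-use emissions factor. All coefficients below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
cdd = np.maximum(rng.normal(2, 3, n), 0)  # cooling degree hours (synthetic)
hdd = np.maximum(rng.normal(2, 3, n), 0)  # heating degree hours (synthetic)

# Synthetic ground truth: 1.5 MWh per cooling degree, 2.0 per heating degree,
# and end-use emissions factors of 0.5 and 0.7 tCO2/MWh respectively.
load_cool, load_heat = 1.5 * cdd, 2.0 * hdd
base = 50 + rng.normal(0, 1, n)
load = base + load_cool + load_heat
emis = 0.4 * base + 0.5 * load_cool + 0.7 * load_heat + rng.normal(0, 1, n)

# Stage 1: load on weather proxies. Stage 2: emissions on the same proxies.
Xmat = np.column_stack([np.ones(n), cdd, hdd])
b_load = np.linalg.lstsq(Xmat, load, rcond=None)[0]
b_emis = np.linalg.lstsq(Xmat, emis, rcond=None)[0]

ef_cool = b_emis[1] / b_load[1]  # tCO2 per MWh of cooling-driven demand
ef_heat = b_emis[2] / b_load[2]  # tCO2 per MWh of heating-driven demand
print(f"cooling EF ~ {ef_cool:.2f}, heating EF ~ {ef_heat:.2f}")
```

Using weather as the source of variation isolates the marginal generators that respond to cooling versus heating demand, which is why the two factors can legitimately differ.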
Estimation of the vaporization heat of organic liquids. Pt. 3
International Nuclear Information System (INIS)
Ducros, M.; Sannier, H.
1982-01-01
In our previous publications it has been shown that Benson's group-additivity method permits the estimation of the enthalpies of vaporization of organic compounds. In the present paper we have applied this method to unsaturated hydrocarbons, thus completing our previous work on acyclic alkenes. For the alkylbenzenes we have revised the previously determined values of the groups C-(Cb)(C)(H)2 and C-(Cb)(C)2(H). A more accurate value for the enthalpies of vaporization of the alkylbenzenes of higher molecular weight is obtained. (orig.)
Estimation of formation heat of rare earth and actinide alloys
International Nuclear Information System (INIS)
Shubin, A.B.; Yamshchikov, L.F.; Raspopin, S.P.
1986-01-01
A method for forecasting the enthalpy of formation of scandium, yttrium, lanthanum and lanthanide, thorium, uranium and plutonium alloys with a series of fusible metals (Al, Ga, In, Tl, Sn, Pb, Sb, Bi) is proposed. The obtained confidence interval for the calculated ΔfH° values considerably exceeds the random error of the experimental determination of the rare metal alloy formation enthalpies. However, taking into account the considerable divergences between the ΔfH° determinations performed by different research groups, one may conclude that such forecasting accuracy may be useful in the course of estimation calculations, especially for actinide element alloys.
A Bayesian approach to estimate sensible and latent heat over vegetated land surface
Directory of Open Access Journals (Sweden)
C. van der Tol
2009-06-01
Sensible and latent heat fluxes are often calculated from bulk transfer equations combined with the energy balance. For spatial estimates of these fluxes, a combination of remotely sensed and standard meteorological data from weather stations is used. The success of this approach depends on the accuracy of the input data and on the accuracy of two variables in particular: aerodynamic and surface conductance. This paper presents a Bayesian approach to improve estimates of sensible and latent heat fluxes by using a priori estimates of aerodynamic and surface conductance alongside remote measurements of surface temperature. The method is validated for time series of half-hourly measurements in a fully grown maize field, a vineyard and a forest. It is shown that the Bayesian approach yields more accurate estimates of sensible and latent heat flux than traditional methods.
Methodology for estimation of potential for solar water heating in a target area
International Nuclear Information System (INIS)
Pillai, Indu R.; Banerjee, Rangan
2007-01-01
Proper estimation of the potential of any renewable energy technology is essential for planning and promotion of the technology. The methods reported in the literature for estimation of the potential of solar water heating in a target area are aggregate in nature. A methodology for potential estimation (technical, economic and market potential) of solar water heating in a target area is proposed in this paper. This methodology links the micro-level factors and macro-level market effects affecting the diffusion or adoption of solar water heating systems. Different sectors with end uses of low temperature hot water are considered for potential estimation. Potential is estimated at each end use point by simulation using TRNSYS, taking micro-level factors into account. The methodology is illustrated for a synthetic area in India with an area of 2 sq. km and a population of 10,000. The end use sectors considered are residential, hospitals, nursing homes and hotels. The estimated technical potential and market potential are 1700 m² and 350 m² of collector area, respectively. The annual energy savings for the technical potential in the area are estimated as 110 kWh per capita and 0.55 million kWh per sq. km, with an annual average peak saving of 1 MW. The annual savings are 650 kWh per m² of collector area and account for approximately 3% of the total electricity consumption of the target area. Some of the salient features of the model are the factors considered for potential estimation, the estimation of the electricity usage pattern for a typical day, the amount of electricity savings and the savings during the peak load. The framework is general and enables accurate estimation of the potential of solar water heating for a city or a block. Energy planners and policy makers can use this framework for tracking and promoting the diffusion of solar water heating systems. (author)
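The headline figures in this abstract are mutually consistent, which a few lines of arithmetic confirm (figures taken from the abstract; 650 kWh per m² is the stated annual savings per unit collector area):

```python
# Figures from the abstract: 2 sq. km area, population 10,000,
# technical potential 1700 m^2 of collectors, ~650 kWh saved per m^2 per year.
area_km2, population, collector_m2, kwh_per_m2 = 2.0, 10_000, 1700, 650

annual_kwh = collector_m2 * kwh_per_m2
print(annual_kwh / population)          # ~110 kWh per capita
print(annual_kwh / area_km2 / 1e6)      # ~0.55 million kWh per sq. km
```

Such cross-checks are a quick sanity test when aggregating end-use-level simulation results up to area-level potential figures.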
Directory of Open Access Journals (Sweden)
Hailun Wang
2017-01-01
The support vector regression algorithm is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of a mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined together as the state vector. Thus, the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
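A sketch of the mixed-kernel idea: a convex combination of RBF and polynomial kernels, with the fusion weight chosen on a held-out split. This simple grid search is a stand-in for the paper's cubature-Kalman-filter state estimation, and a ridge-style solve stands in for full support vector regression; all hyperparameters are illustrative.

```python
import numpy as np

def mixed_kernel(X, Y, w, gamma=0.5, degree=2):
    """Convex combination of an RBF and a polynomial kernel:
    K = w * K_rbf + (1 - w) * K_poly."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    k_rbf = np.exp(-gamma * d2)
    k_poly = (1.0 + X @ Y.T) ** degree
    return w * k_rbf + (1.0 - w) * k_poly

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, (150, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(150)

# Select the fusion weight on a held-out split.
tr, va = np.arange(100), np.arange(100, 150)
best = None
for w in np.linspace(0.0, 1.0, 11):
    K = mixed_kernel(X[tr], X[tr], w)
    alpha = np.linalg.solve(K + 1e-2 * np.eye(len(tr)), y[tr])
    err = np.mean((mixed_kernel(X[va], X[tr], w) @ alpha - y[va]) ** 2)
    if best is None or err < best[0]:
        best = (err, w)
print(f"validation MSE {best[0]:.4f} at weight {best[1]:.1f}")
```

Treating the fusion weight as just another parameter to estimate, as the paper does with a filter, avoids committing in advance to either the local (RBF) or global (polynomial) behaviour.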
DEFF Research Database (Denmark)
Chen, Tianshi; Andersen, Martin Skovgaard; Ljung, Lennart
2014-01-01
Model estimation and structure detection with short data records are two issues that attract increasing interest in system identification. In this paper, a multiple kernel-based regularization method is proposed to address these issues. Multiple kernels are conic combinations of fixed kernels...
Kernel methods for deep learning
Cho, Youngmin
2012-01-01
We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...
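The degree-1 arc-cosine kernel is a standard member of this family (it mimics the computation of an infinite-width layer of ReLU-type units); a direct implementation of the published closed form, k(x, y) = (1/pi)·|x||y|·(sin t + (pi − t) cos t), where t is the angle between x and y:

```python
import numpy as np

def arccos_kernel_deg1(x, y):
    # Degree-1 arc-cosine kernel: k(x, y) = |x||y| (sin t + (pi - t) cos t) / pi.
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)  # clip guards rounding error
    t = np.arccos(cos_t)
    return nx * ny * (np.sin(t) + (np.pi - t) * cos_t) / np.pi

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
k_same = arccos_kernel_deg1(x, x)   # parallel inputs: t = 0, so k = |x|^2
k_orth = arccos_kernel_deg1(x, y)   # orthogonal inputs: k = |x||y| / pi
```

Composition of such kernels, as the abstract notes, again yields a positive-definite kernel.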
International Nuclear Information System (INIS)
Futami, Hikaru; Arai, Tsunenori; Yashiro, Hideki; Nakatsuka, Seishi; Kuribayashi, Sachio; Izumi, Youtaro; Tsukada, Norimasa; Kawamura, Masafumi
2006-01-01
To develop an evaluation method for the curative field when using X-ray CT imaging during percutaneous transthoracic cryoablation for lung cancer, we constructed a finite-element heat conduction simulator to estimate temperature distribution in the lung during cryo-treatment. We calculated temperature distribution using a simple two-dimensional finite element model, although the actual temperature distribution spreads in three dimensions. Temperature time-histories were measured within 10 minutes using experimental ex vivo and in vivo lung cryoablation conditions. We adjusted specific heat and thermal conductivity in the heat conduction calculation and compared them with measured temperature time-histories ex vivo. Adjusted lung specific heat was 3.7 J/ (g·deg C) for unfrozen lung and 1.8 J/ (g·deg C) for frozen lung. Adjusted lung thermal conductivity in our finite element model fitted proportionally to the exponential function of lung density. We considered the heat input by blood flow circulation and metabolic heat when we calculated the temperature time-histories during in vivo cryoablation of the lung. We assumed that the blood flow varies in inverse proportion to the change in blood viscosity up to the maximum blood flow predicted from cardiac output. Metabolic heat was set as heat generation in the calculation. The measured temperature time-histories of in vivo cryoablation were then estimated with an accuracy of ±3 deg C when calculated based on this assumption. Therefore, we successfully constructed a two-dimensional heat conduction simulator that is capable of estimating temperature distribution in the lung at the time of first freezing during cryoablation. (author)
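A minimal explicit finite-difference sketch of the underlying 2-D heat-conduction calculation. The paper uses a finite-element model with distinct frozen/unfrozen properties, blood-flow heat input and metabolic heat, none of which are reproduced here; the grid, diffusivity and probe temperature below are assumed for illustration only:

```python
import numpy as np

def step(T, alpha, dx, dt):
    # One explicit 2-D heat-conduction update (5-point Laplacian stencil);
    # boundary cells are left untouched (fixed-temperature boundary).
    Tn = T.copy()
    Tn[1:-1, 1:-1] = T[1:-1, 1:-1] + alpha * dt / dx**2 * (
        T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
        - 4 * T[1:-1, 1:-1]
    )
    return Tn

n, dx = 21, 1e-3                 # 20 mm x 20 mm tissue patch (assumed)
alpha = 1.4e-7                   # thermal diffusivity, m^2/s (assumed)
dt = 0.2 * dx**2 / alpha         # stable: alpha*dt/dx^2 <= 0.25 in 2-D
T = np.full((n, n), 37.0)        # body temperature, deg C
T[n // 2, n // 2] = -150.0       # cryoprobe tip, held fixed each step

for _ in range(500):
    T = step(T, alpha, dx, dt)
    T[n // 2, n // 2] = -150.0   # Dirichlet condition at the probe
```

After the loop, a cold front has diffused outward from the probe while the outer boundary remains at body temperature.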
Satellite air temperature estimation for monitoring the canopy layer heat island of Milan
DEFF Research Database (Denmark)
Pichierri, Manuele; Bonafoni, Stefania; Biondi, Riccardo
2012-01-01
In this work, satellite maps of the urban heat island of Milan are produced using satellite-based infrared sensor data. For this aim, we developed suitable algorithms employing satellite brightness temperatures for the direct estimation of air temperature 2 m above the surface (canopy layer); satellite data from 2007 and 2010 were processed. Analysis of the canopy layer heat island (CLHI) maps during summer months reveals an average heat island effect of 3–4 K during nighttime (with some peaks around 5 K) and a weak CLHI intensity during daytime. In addition, the satellite maps reveal a well-defined island shape across the city center from June to September, confirming that, in Milan, urban heating is not an occasional phenomenon. Furthermore, this study shows the utility of space missions to monitor metropolis heat islands if they are able to provide nighttime observations, when CLHI peaks generally occur.
DEFF Research Database (Denmark)
Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads
2011-01-01
In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...
Spafford, Eugene H.; Mckendry, Martin S.
1986-01-01
An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.
Total decay heat estimates in a proto-type fast reactor
International Nuclear Information System (INIS)
Sridharan, M.S.
2003-01-01
Full text: In this paper, total decay heat values generated in a proto-type fast reactor are estimated. These values are compared with those of certain fast reactors. Simple analytical fits are also obtained for these values, which can serve as a handy and convenient tool in engineering design studies. These decay heat values, expressed as ratios to the nominal operating power, are, in general, applicable to any typical plutonium-based fast reactor and are useful inputs to the design of decay-heat removal systems
Satellite data based approach for the estimation of anthropogenic heat flux over urban areas
Nitis, Theodoros; Tsegas, George; Moussiopoulos, Nicolas; Gounaridis, Dimitrios; Bliziotis, Dimitrios
2017-09-01
Anthropogenic effects in urban areas influence the thermal conditions in the environment and cause an increase of the atmospheric temperature. Cities are sources of heat and pollution, affecting the thermal structure of the atmosphere above them, which results in the urban heat island effect. In order to analyze the urban heat island mechanism, it is important to estimate the anthropogenic heat flux, which has a considerable impact on the urban energy budget. The anthropogenic heat flux is the result of man-made activities (e.g. traffic, industrial processes, heating/cooling) and thermal releases from the human body. Many studies have underlined the importance of the anthropogenic heat flux to the calculation of the urban energy budget and, subsequently, the estimation of mesoscale meteorological fields over urban areas. Therefore, spatially disaggregated anthropogenic heat flux data, at local and city scales, are of major importance for mesoscale meteorological models. The main objectives of the present work are to improve the quality of such data used as input for mesoscale meteorological model simulations and to enhance the application potential of GIS and remote sensing in the fields of climatology and meteorology. For this reason, the Urban Energy Budget concept is proposed as the foundation for an accurate determination of the anthropogenic heat discharge as a residual term in the surface energy balance. The methodology is applied to the cities of Athens and Paris using Landsat ETM+ remote sensing data. The results will help to improve our knowledge of the anthropogenic heat flux, while the potential for further improvement of the methodology is also discussed.
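The residual-term idea can be written down directly: with the urban surface energy balance Q* + Q_F = Q_H + Q_E + ΔQ_S, the anthropogenic flux Q_F falls out once the other terms are known. The midday values below are illustrative, not results from the paper:

```python
# Anthropogenic heat flux as the residual of the urban surface energy
# balance: Q_F = (Q_H + Q_E + dQ_S) - Q*. All values in W/m^2, assumed.
q_star = 450.0   # net all-wave radiation
q_h = 300.0      # turbulent sensible heat flux
q_e = 110.0      # turbulent latent heat flux
dq_s = 90.0      # net storage heat flux change
q_f = (q_h + q_e + dq_s) - q_star   # anthropogenic heat discharge
```

In practice Q* and the turbulent fluxes are derived from the remote sensing data, which is where the Landsat ETM+ imagery enters.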
Explicit signal to noise ratio in reproducing kernel Hilbert spaces
DEFF Research Database (Denmark)
Gomez-Chova, Luis; Nielsen, Allan Aasbjerg; Camps-Valls, Gustavo
2011-01-01
This paper introduces a nonlinear feature extraction method based on kernels for remote sensing data analysis. The proposed approach is based on the minimum noise fraction (MNF) transform, which maximizes the signal variance while also minimizing the estimated noise variance. We here propose an alternative kernel MNF (KMNF) in which the noise is explicitly estimated in the reproducing kernel Hilbert space. This enables KMNF to deal with nonlinear relations between the noise and the signal features jointly. Results show that the proposed KMNF provides the most noise-free features when confronted...
Inverse Estimation of Heat Flux and Temperature Distribution in 3D Finite Domain
International Nuclear Information System (INIS)
Muhammad, Nauman Malik
2009-02-01
Inverse heat conduction problems occur in many theoretical and practical applications where it is difficult or practically impossible to measure the input heat flux and the temperature of the layer conducting the heat flux to the body. It thus becomes imperative to devise some means to cater for such a problem and estimate the heat flux inversely. The adaptive state estimator is one such technique, which works by incorporating the semi-Markovian concept into a Bayesian estimation technique, thereby developing an inverse input and state estimator consisting of a bank of parallel, adaptively weighted Kalman filters. The problem presented in this study deals with a three-dimensional system of a cube with one face conducting heat flux while all the other sides are insulated, and the temperatures are measured on the accessible faces of the cube. The measurements taken on these accessible faces are fed into the estimation algorithm, and the input heat flux and the temperature distribution at each point in the system are calculated. A variety of input heat flux scenarios have been examined to demonstrate the robustness of the estimation algorithm and hence ensure its usability in practical applications. These include a sinusoidal input flux; a combination of rectangular, linearly changing and sinusoidal input fluxes; and finally a step-changing input flux. The estimator's performance limitations have been examined in these input set-ups, and the error associated with each set-up is compared to assess the realistic applicability of the estimation algorithm in such scenarios. Different sensor arrangements, that is, different numbers of sensors and their locations, are also examined to underline the importance of the number of measurements and their location, i.e. closer to or farther from the input area. Since it is both economically and physically tedious to install a larger number of measurement sensors, an optimized number and location are very important to determine for making the study more
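A scalar sketch of the input-estimation idea: a lumped thermal model whose state is augmented with the unknown flux q (modeled as a random walk) and tracked with a single linear Kalman filter. The paper's adaptive bank of parallel filters and full 3-D geometry are not reproduced, and every parameter value here is assumed:

```python
import numpy as np

C, k_loss, dt = 100.0, 2.0, 0.1           # lumped capacitance / loss (assumed)
a = 1 - k_loss * dt / C
F = np.array([[a, dt / C], [0.0, 1.0]])    # transition for augmented state [T, q]
H = np.array([[1.0, 0.0]])                 # only temperature is measured

rng = np.random.default_rng(1)
q_true, T = 50.0, 0.0                      # unknown flux; T = rise above ambient
x = np.array([0.0, 0.0])                   # initial estimate assumes no flux
P = np.diag([1.0, 100.0])                  # large initial uncertainty on q
Q = np.diag([1e-4, 1e-2])                  # small random-walk noise on q
R = np.array([[0.05]])

for _ in range(400):
    T = a * T + dt / C * q_true            # "truth" simulation
    z = T + 0.2 * rng.standard_normal()    # noisy temperature sensor
    x, P = F @ x, F @ P @ F.T + Q          # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)    # correct with the innovation
    P = (np.eye(2) - K @ H) @ P
```

The second state component converges toward the true flux as the filter accumulates evidence from the temperature trajectory.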
Viscosity kernel of molecular fluids
DEFF Research Database (Denmark)
Puscasu, Ruslan; Todd, Billy; Daivis, Peter
2010-01-01
The density, temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel; the temperature range and chain lengths considered here have, by contrast, less impact on the overall normalized shape. Functional forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means...
Estimate of the global-scale joule heating rates in the thermosphere due to time mean currents
International Nuclear Information System (INIS)
Roble, R.G.; Matsushita, S.
1975-01-01
An estimate of the global-scale joule heating rates in the thermosphere is made based on derived global equivalent overhead electric current systems in the dynamo region during geomagnetically quiet and disturbed periods. The equivalent total electric field distribution is calculated from Ohm's law. The global-scale joule heating rates are calculated for various monthly average periods in 1965. The calculated joule heating rates maximize at high latitudes in the early evening and postmidnight sectors. During geomagnetically quiet times the daytime joule heating rates are considerably lower than heating by solar EUV radiation. However, during geomagnetically disturbed periods the estimated joule heating rates increase by an order of magnitude and can locally exceed the solar EUV heating rates. The results show that joule heating is an important and at times the dominant energy source at high latitudes. However, the global mean joule heating rates calculated near solar minimum are generally small compared to the global mean solar EUV heating rates. (auth)
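The local Joule heating rate per unit volume follows from Ohm's law as q = σ_P E², with σ_P the Pedersen conductivity and E the electric field; a one-line illustration with assumed values (not the paper's):

```python
# Joule heating rate per unit volume, q = sigma_P * E^2.
# Both values below are illustrative order-of-magnitude choices.
sigma_P = 1e-4        # Pedersen conductivity, S/m (assumed)
E = 50e-3             # equivalent electric field, V/m (assumed)
q = sigma_P * E**2    # volumetric heating rate, W/m^3
```

An order-of-magnitude increase in E during disturbed periods raises q by two orders of magnitude, which is why disturbed-time Joule heating can locally exceed solar EUV heating.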
International Nuclear Information System (INIS)
Ballou, J.K.; Gray, W.H.
1976-01-01
In the design of the cryogenic system and superconducting magnets for the poloidal field system in a tokamak, it is important to have an accurate estimate of the heat produced in superconducting magnets as a result of rapidly changing magnetic fields. A computer code, PLASS (Pulsed Losses in Axisymmetric Superconducting Solenoids), was written to estimate the contributions to the heat production from superconductor hysteresis losses, superconductor coupling losses, stabilizing material eddy current losses, and structural material eddy current losses. Recently, it has been shown that thermoelastic dissipation in superconducting composites can contribute as much to heat production as the other loss mechanisms mentioned above. A modification of PLASS which takes into consideration thermoelastic dissipation in superconducting composites is discussed. A comparison between superconductor thermoelastic dissipation and the other superconductor loss mechanisms is presented in terms of the poloidal coil system of the ORNL Experimental Power Reactor design
Estimation of the Heat Flow Variation in the Chad Basin Nigeria ...
African Journals Online (AJOL)
Wireline logs from 14 oil wells from the Nigerian sector of the Chad Basin were analyzed and interpreted to estimate the heat flow trend in the basin. Geothermal gradients were computed from corrected bottom hole temperatures while the bulk effective thermal conductivity for the different stratigraphic units encountered in ...
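The underlying relation is Fourier's law, q = k · dT/dz, applied with geothermal gradients from corrected bottom-hole temperatures and the bulk effective conductivity of the stratigraphic column; a sketch with illustrative values, not data from the wells in this study:

```python
# Heat flow from a corrected bottom-hole temperature: q = k * dT/dz.
# All numbers are illustrative, not measurements from the Chad Basin wells.
T_bottom, T_surface = 95.0, 30.0   # corrected BHT and surface temperature, deg C
depth = 2500.0                     # well depth, m
k_eff = 2.1                        # bulk effective thermal conductivity, W/(m K)

gradient = (T_bottom - T_surface) / depth    # geothermal gradient, deg C/m
q_mw_m2 = k_eff * gradient * 1000            # heat flow in mW/m^2
```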
Estimation of eddy diffusivity coefficient of heat in the upper layers of equatorial Arabian Sea
Digital Repository Service at National Institute of Oceanography (India)
Zavialov, P.O.; Murty, V.S.N.
in the Central Equatorial Arabian Sea (CEAS). A comparison of the model-computed K_h values with those estimated from the heat balance of the upper layer (50 m) of the sea shows good agreement in the region of weak winds (CEAS) or a low turbulent mixing regime...
Condition monitoring of steam generator by estimating the overall heat transfer coefficient
International Nuclear Information System (INIS)
Furusawa, Hiroaki; Gofuku, Akio
2013-01-01
This study develops a technique for on-line monitoring of the state of the steam generator of the fast-breeder reactor (FBR) “Monju”. Because the FBR uses liquid sodium as coolant, it is necessary to handle the liquid sodium with caution due to its chemical characteristics. The steam generator generates steam using the heat of the secondary sodium coolant. A sodium-water reaction may happen if a pinhole or crack occurs in the thin metal tube wall that separates the secondary sodium coolant from the water/steam. Therefore, it is very important to detect an anomaly of the wall of the heat transfer tubes at an early stage. This study aims at developing an on-line condition monitoring technique for the steam generator by estimating the overall heat transfer coefficient from process signals. This paper describes simplified mathematical models of the superheater and evaporator used to estimate the overall heat transfer coefficient, and a technique to diagnose the state of the steam generator. The applicability of the technique is confirmed by several estimations using simulated process signals with artificial noise. The results of the estimations show that the developed technique can detect the occurrence of an anomaly. (author)
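One textbook way to estimate an overall heat transfer coefficient from process signals is via the log-mean temperature difference, U = Q / (A · LMTD); a sketch with assumed values (the paper's superheater/evaporator models are more detailed, but the monitoring idea is the same: a drop in U over time flags degradation):

```python
import math

# Overall heat transfer coefficient from the log-mean temperature difference.
# All values are assumed, illustrative heat-exchanger figures.
q = 5.0e6                # heat duty, W
area = 120.0             # heat transfer tube area, m^2
dt1, dt2 = 60.0, 25.0    # terminal temperature differences, K

lmtd = (dt1 - dt2) / math.log(dt1 / dt2)   # log-mean temperature difference, K
U = q / (area * lmtd)                      # overall coefficient, W/(m^2 K)
```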
Heat flux exchange estimation by using ATSR SST data in TOGA area
Xue, Yong; Lawrence, Sean P.; Llewellyn-Jones, David T.
1995-12-01
The study of phenomena such as ENSO requires consideration of the dynamics and thermodynamics of the coupled ocean-atmosphere system. The dynamic and thermal properties of the atmosphere and ocean are directly affected by air-sea transfers of fluxes of momentum, heat and moisture. In this paper, we present results of turbulent heat fluxes calculated using two years (1992 and 1993) of monthly average TOGA data and ATSR SST data in the TOGA area. A comparison with published results indicates good qualitative agreement. We also compared the results of heat flux exchange obtained using ATSR SST data with those obtained using the TOGA bucket SST data. The ATSR SST data set has been shown to be useful in helping to estimate large-space-scale heat flux exchange.
Goodge, John W.
2018-02-01
Terrestrial heat flow is a critical first-order factor governing the thermal condition and, therefore, mechanical stability of Antarctic ice sheets, yet heat flow across Antarctica is poorly known. Previous estimates of terrestrial heat flow in East Antarctica come from inversion of seismic and magnetic geophysical data, by modeling temperature profiles in ice boreholes, and by calculation from heat production values reported for exposed bedrock. Although accurate estimates of surface heat flow are important as an input parameter for ice-sheet growth and stability models, there are no direct measurements of terrestrial heat flow in East Antarctica coupled to either subglacial sediment or bedrock. As has been done with bedrock exposed along coastal margins and in rare inland outcrops, valuable estimates of heat flow in central East Antarctica can be extrapolated from heat production determined by the geochemical composition of glacial rock clasts eroded from the continental interior. In this study, U, Th, and K concentrations in a suite of Proterozoic (1.2-2.0 Ga) granitoids sourced within the Byrd and Nimrod glacial drainages of central East Antarctica indicate average upper crustal heat production (Ho) of about 2.6 ± 1.9 µW m-3. Assuming typical mantle and lower crustal heat flux for stable continental shields, and a length scale for the distribution of heat production in the upper crust, the heat production values determined for individual samples yield estimates of surface heat flow (qo) ranging from 33 to 84 mW m-2 and an average of 48.0 ± 13.6 mW m-2. Estimates of heat production obtained for this suite of glacially sourced granitoids therefore indicate that the interior of the East Antarctic ice sheet is underlain in part by Proterozoic continental lithosphere with an average surface heat flow, providing constraints on both geodynamic history and ice-sheet stability. The ages and geothermal characteristics of the granites indicate that crust in central
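The extrapolation uses the standard linear heat-flow relation q_o = q_r + D · H_o. With typical stable-shield values for the reduced heat flow q_r and the length scale D (assumed here, not taken from the paper), the reported average heat production of 2.6 µW m-3 lands close to the reported 48 mW m-2 average surface heat flow:

```python
# Surface heat flow from upper-crustal heat production: q_o = q_r + D * H_o.
# q_r and D are assumed typical shield values; H_o is the abstract's average.
q_r = 25e-3    # reduced (mantle + lower crust) heat flow, W/m^2 (assumed)
D = 10e3       # length scale of upper-crustal heat production, m (assumed)
H_o = 2.6e-6   # average heat production of the granitoids, W/m^3

q_o_mw = (q_r + D * H_o) * 1000   # surface heat flow, mW/m^2 (~51)
```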
Steerability of Hermite Kernel
Czech Academy of Sciences Publication Activity Database
Yang, Bo; Flusser, Jan; Suk, Tomáš
2013-01-01
Roč. 27, č. 4 (2013), 1354006-1-1354006-25 ISSN 0218-0014 R&D Projects: GA ČR GAP103/11/1552 Institutional support: RVO:67985556 Keywords : Hermite polynomials * Hermite kernel * steerability * adaptive filtering Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.558, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/yang-0394387.pdf
Jakkareddy, Pradeep S.; Balaji, C.
2016-09-01
This paper employs the Bayesian-based Metropolis-Hastings Markov chain Monte Carlo (MH-MCMC) algorithm to solve the inverse heat transfer problem of determining the spatially varying heat transfer coefficient of a flat plate with flush-mounted discrete heat sources, from measured temperatures at the bottom of the plate. The Nusselt number is assumed to be of the form Nu = a Re^b (x/l)^c. To input reasonable values of 'a' and 'b' into the inverse problem, limited two-dimensional conjugate convection simulations were first done with COMSOL. Guided by these, different values of 'a' and 'b' are input to a computationally less complex problem of conjugate conduction in the flat plate (15 mm thickness), and temperature distributions at the bottom of the plate, which is a more convenient location for measuring temperatures without disturbing the flow, were obtained. Since the goal of this work is to demonstrate the efficacy of the Bayesian approach to accurately retrieve 'a' and 'b', numerically generated temperatures with known values of 'a' and 'b' are treated as 'surrogate' experimental data. The inverse problem is then solved by repeatedly using the forward solutions together with the MH-MCMC approach. To speed up the estimation, the forward model is replaced by an artificial neural network. The mean, maximum a posteriori, and standard deviation of the estimated parameters 'a' and 'b' are reported. The robustness of the proposed method is examined by synthetically adding noise to the temperatures.
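The MH-MCMC ingredient can be sketched compactly: retrieve a single parameter 'a' in Nu = a Re^b (x/l)^c from noisy surrogate data, with b and c held fixed. The paper estimates 'a' and 'b' jointly and accelerates the forward model with a neural network; everything below (parameter values, proposal width, noise level) is assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
re, b, c, xl = 5000.0, 0.8, 0.1, 0.5        # fixed correlation inputs (assumed)
a_true = 0.023

def forward(a):
    # Forward model: the assumed Nusselt correlation Nu = a * Re^b * (x/l)^c.
    return a * re**b * xl**c

data = forward(a_true) + 0.05 * rng.standard_normal(50)   # surrogate data

def log_post(a, sigma=0.05):
    # Gaussian likelihood with a flat positivity prior on 'a'.
    if a <= 0:
        return -np.inf
    return -0.5 * np.sum((data - forward(a)) ** 2) / sigma**2

a, samples = 0.01, []                        # deliberately poor starting value
for _ in range(5000):
    prop = a + 0.001 * rng.standard_normal() # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(a):
        a = prop                             # accept
    samples.append(a)

a_mean = float(np.mean(samples[1000:]))      # posterior mean after burn-in
```

The chain climbs from the poor starting value to the neighborhood of a_true and then mixes there; the posterior mean recovers the parameter.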
DEFF Research Database (Denmark)
Arenas-Garcia, J.; Petersen, K.; Camps-Valls, G.
2013-01-01
... canonical correlation analysis (CCA), and orthonormalized PLS (OPLS), as well as their nonlinear extensions derived by means of the theory of reproducing kernel Hilbert spaces (RKHSs). We also review their connections to other methods for classification and statistical dependence estimation and introduce some recent...
Kernel Machine SNP-set Testing under Multiple Candidate Kernels
Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.
2013-01-01
Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
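A minimal sketch of the composite-kernel construction mentioned in the abstract: normalize each candidate kernel matrix and average them, so that inference does not hinge on a single (possibly poor) kernel choice. The second candidate below is a toy genotype similarity, a stand-in for a real IBS kernel, and the data are synthetic:

```python
import numpy as np

def linear_k(G):
    # Linear kernel: pairwise inner products of genotype vectors.
    return G @ G.T

def ibs_like_k(G):
    # Toy similarity in genotype space (assumed stand-in for an IBS kernel).
    d = np.abs(G[:, None, :] - G[None, :, :]).sum(-1)
    return 1 - d / d.max()

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(8, 5)).astype(float)   # 8 subjects, 5 SNPs (0/1/2)

Ks = [linear_k(G), ibs_like_k(G)]
# Normalize each candidate to unit trace, then average into one composite kernel.
composite = sum(K / np.trace(K) for K in Ks) / len(Ks)
```

The composite matrix is again symmetric with unit trace, so it can be plugged into the KM test statistic in place of any single candidate.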
Correlation Lengths for Estimating the Large-Scale Carbon and Heat Content of the Southern Ocean
Mazloff, M. R.; Cornuelle, B. D.; Gille, S. T.; Verdy, A.
2018-02-01
The spatial correlation scales of oceanic dissolved inorganic carbon, heat content, and carbon and heat exchanges with the atmosphere are estimated from a realistic numerical simulation of the Southern Ocean. Biases in the model are assessed by comparing the simulated sea surface height and temperature scales to those derived from optimally interpolated satellite measurements. While these products do not resolve all ocean scales, they are representative of the climate scale variability we aim to estimate. Results show that constraining the carbon and heat inventory between 35°S and 70°S on time-scales longer than 90 days requires approximately 100 optimally spaced measurement platforms: approximately one platform every 20° longitude by 6° latitude. Carbon flux has slightly longer zonal scales, and requires a coverage of approximately 30° by 6°. Heat flux has much longer scales, and thus a platform distribution of approximately 90° by 10° would be sufficient. Fluxes, however, have significant subseasonal variability. For all fields, and especially fluxes, sustained measurements in time are required to prevent aliasing of the eddy signals into the longer climate scale signals. Our results imply a minimum of 100 biogeochemical-Argo floats are required to monitor the Southern Ocean carbon and heat content and air-sea exchanges on time-scales longer than 90 days. However, an estimate of formal mapping error using the current Argo array implies that in practice even an array of 600 floats (a nominal float density of about 1 every 7° longitude by 3° latitude) will result in nonnegligible uncertainty in estimating climate signals.
Estimation of the heat/Na flux using lidar data recorded at ALO, Cerro Pachon, Chile
Vargas, F.; Gardner, C. S.; Liu, A. Z.; Swenson, G. R.
2013-12-01
In this poster, lidar night-time data are used to estimate the vertical fluxes of heat and Na in the mesopause region due to dissipating gravity waves with periods from 5 min to 8 h and vertical wavelengths > 2 km. About 60 hours of good-quality data were recorded near the equinox during two observation campaigns held in March 2012 and April 2013 at the Andes Lidar Observatory (30.3S, 70.7W). These first measurements of the heat/Na flux in the southern hemisphere will be discussed and compared with those from the northern hemisphere stations obtained at the Starfire Optical Range, NM, and Maui, HI.
Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.
Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit
2018-02-13
Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels.
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the co-adaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement-learning brain-machine interfaces. PMID:25866504
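A toy sketch of the kernel temporal-difference idea, here KTD(0) on a five-state chain: the value function is a growing kernel expansion over visited states, and each new coefficient is set from the TD error. This simplifies KTD(λ), omitting eligibility traces and any sparsification of the expansion; the task and all parameters are invented for illustration:

```python
import numpy as np

def k(s1, s2, sigma=0.5):
    # Gaussian kernel over (scalar) states.
    return np.exp(-(s1 - s2) ** 2 / (2 * sigma ** 2))

centers, alphas = [], []   # the growing kernel expansion

def V(s):
    # Value estimate: a kernel expansion over visited states.
    return sum(a * k(s, c) for a, c in zip(alphas, centers))

gamma, eta = 0.9, 0.3
rng = np.random.default_rng(0)
for _ in range(200):
    s = int(rng.integers(0, 5))            # random state on a 5-state chain
    s_next = min(s + 1, 4)                 # deterministic move toward the goal
    r = 1.0 if s_next == 4 else 0.0        # reward for reaching/staying at the end
    delta = r + gamma * V(s_next) - V(s)   # TD error
    centers.append(s)                      # grow the expansion at the visited state
    alphas.append(eta * delta)             # coefficient proportional to the TD error
```

After training, values increase toward the rewarded end of the chain, as policy evaluation requires.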
A new method to estimate heat source parameters in gas metal arc welding simulation process
International Nuclear Information System (INIS)
Jia, Xiaolei; Xu, Jie; Liu, Zhaoheng; Huang, Shaojie; Fan, Yu; Sun, Zhi
2014-01-01
Highlights: •A new method for accurate estimation of heat source parameters is presented. •The partial least-squares regression analysis is recommended in the method. •Welding experiment results verify the accuracy of the proposed method. -- Abstract: Heat source parameters are usually chosen by experience in the welding simulation process, which induces error in the simulation results (e.g. temperature distribution and residual stress). In this paper, a new method was developed to accurately estimate heat source parameters in welding simulation. In order to reduce the simulation complexity, a sensitivity analysis of the heat source parameters was carried out. The relationships between heat source parameters and weld pool characteristics (fusion width W, penetration depth D, and peak temperature T_p) were obtained with both multiple regression analysis (MRA) and partial least-squares regression analysis (PLSRA), employing different regression models in each method, and the two methods were compared. A welding experiment was carried out to verify the method. The results showed that both MRA and PLSRA are feasible and accurate for prediction of heat source parameters in welding simulation. However, PLSRA is recommended for its advantage of requiring less simulation data.
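The regression step can be sketched with plain least squares: fit the weld-pool responses (W, D, T_p) as a function of the heat source parameters on simulated data, then invert for the parameters that best match a "measured" pool. The linear map, parameter ranges and target values below are all synthetic stand-ins (the paper compares multiple regression with partial least squares, not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)
# 30 simulated runs with 2 heat source parameters (e.g. efficiency, radius).
params = rng.uniform(1.0, 3.0, size=(30, 2))
# Assumed "true" linear map from parameters to (W, D, Tp) responses.
true_map = np.array([[2.0, 0.5], [1.0, 1.5], [3.0, 0.2]])
pool = params @ true_map.T + 0.01 * rng.standard_normal((30, 3))

# Forward regression: pool characteristics as a function of parameters.
B, *_ = np.linalg.lstsq(params, pool, rcond=None)        # B is (2, 3)

# Inverse step: parameters that best reproduce a "measured" pool.
target_pool = np.array([4.0, 5.0, 6.5])                  # hypothetical W, D, Tp
est, *_ = np.linalg.lstsq(B.T, target_pool, rcond=None)  # estimated parameters
```

The fitted coefficient matrix recovers the assumed map to within the injected noise.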
International Nuclear Information System (INIS)
Demeter, C.P.; Gray, E.E.; Carwile, C.
1991-01-01
This paper reports the results of a preliminary evaluation of the potential domestic market for solar thermal energy supply technologies matched to industrial process heat applications. The study estimates current industrial process heat demand and projects future demand to the year 2030, by two-digit standard industrial classification code for the manufacturing sector, and discusses the potential to displace conventional fossil fuel sources such as natural gas with alternative sources of supply. The PC Industrial Model, used by DOE's Energy Information Administration in support of the National Energy Strategy (NES), is used to forecast industrial energy demand. Demand is disaggregated by census region to account for geographic variations in solar insolation, and by heat medium and temperature to facilitate end-use matching with appropriate solar energy supply technologies. Levelized energy costs (LEC) are calculated for flat-plate collectors for low-temperature preheat applications, parabolic troughs for intermediate-temperature process steam and direct heat, and parabolic dish technologies for high-temperature, direct heat applications. An LEC is also developed for a conventional natural gas-fueled industrial process heat (IPH) supply source, assuming natural gas price escalation consistent with NES forecasts, to develop a relative figure of merit used in a market penetration model.
Microclimate Variation and Estimated Heat Stress of Runners in the 2020 Tokyo Olympic Marathon
Directory of Open Access Journals (Sweden)
Eichi Kosaka
2018-05-01
Full Text Available The Tokyo 2020 Olympic Games will be held in July and August. As these are the hottest months in Tokyo, the risk of heat stress to athletes and spectators in outdoor sporting events is a serious concern. This study focuses on the marathon races, which are held outdoors for a prolonged time, and evaluates the potential heat stress of marathon runners using the COMFA (COMfort FormulA) Human Heat Balance (HBB) Model. The study applies a four-step procedure: (a) measure the thermal environment along the marathon course; (b) estimate heat stress on runners by applying COMFA; (c) identify locations where runners may be exposed to extreme heat stress; and (d) discuss measures to mitigate the heat stress on runners. On clear sunny days, the entire course is rated as ‘dangerous’ or ‘extremely dangerous’, and within the latter half of the course there is a 10-km portion where values continuously exceed the extremely dangerous level. Findings illustrate which stretches have the highest need for mitigation measures, such as starting the race one hour earlier, allowing runners to run in the shade of buildings or making use of urban greenery, including expanding the tree canopy.
Estimating population heat exposure and impacts on working people in conjunction with climate change
Kjellstrom, Tord; Freyberg, Chris; Lemke, Bruno; Otto, Matthias; Briggs, David
2018-03-01
Increased environmental heat levels as a result of climate change present a major challenge to the health, wellbeing and sustainability of human communities in already hot parts of this planet. This challenge has many facets, from direct clinical health effects of daily heat exposure to indirect effects related to poor air quality, poor access to safe drinking water, poor access to nutritious and safe food, and inadequate protection from disease vectors and environmental toxic chemicals. The increasing environmental heat is a threat to environmental sustainability. In addition, social conditions can be undermined by the negative effects of increased heat on daily work and life activities and on local cultural practices. The methodology we describe can be used to produce quantitative estimates of the impacts of climate change on work activities in countries and local communities. We show in maps the increasing heat exposures in the shade, expressed as the occupational heat stress index Wet Bulb Globe Temperature. Some tropical and sub-tropical areas already experience serious heat stress, and continued heating will substantially reduce work capacity and labour productivity in widening parts of the world. Southern parts of Europe and the USA will also be affected. Even the lowest target for climate change (average global temperature change of 1.5 °C, at representative concentration pathway RCP2.6) will increase the loss of daylight work hour output due to heat in many tropical areas from less than 2% now up to more than 6% at the end of the century. A global temperature change of 2.7 °C (at RCP6.0) will double this annual heat impact on work in such areas. Calculations of this type of heat impact at country level show that in the USA, the loss of work capacity in moderate-level work in the shade will increase from 0.17% now to more than 1.3% at the end of the century, based on the 2.7 °C temperature change. The impact is naturally mainly occurring in the southern
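The Wet Bulb Globe Temperature index used above combines natural wet-bulb, globe and air temperatures with fixed weights (per ISO 7243). A minimal sketch, with illustrative mid-summer input values:

```python
def wbgt_outdoor(t_nwb, t_globe, t_air):
    """Outdoor (solar-exposed) WBGT in degrees C, ISO 7243 weighting."""
    return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_air

def wbgt_shade(t_nwb, t_globe):
    """WBGT without direct solar load (in the shade)."""
    return 0.7 * t_nwb + 0.3 * t_globe

# Illustrative values (degrees C): natural wet-bulb 25, globe 45, air 33.
print(round(wbgt_outdoor(25.0, 45.0, 33.0), 1))  # -> 29.8
print(round(wbgt_shade(25.0, 45.0), 1))          # -> 31.0
```

The heavy weight on the wet-bulb term reflects the dominant role of evaporative cooling capacity in human heat strain.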
Inverse estimation of heat flux and temperature on nozzle throat-insert inner contour
Energy Technology Data Exchange (ETDEWEB)
Chen, Tsung-Chien [Department of Power Vehicle and Systems Engineering, Chung Cheng Institute of Technology, National Defense University, Ta-Hsi, Tao-Yuan 33509 (China); Liu, Chiun-Chien [Chung Shan Institute of Science and Technology, Lung-Tan, Tao-Yuan 32526 (China)
2008-07-01
During missile flight, a high-temperature jet flow is produced by the burning propellant. The enormous heat flux conducted from the nozzle throat-insert inner contour into the nozzle shell degrades the material strength of the nozzle shell and reduces the nozzle thrust efficiency. In this paper, an on-line inverse method based on the input estimation method combined with a finite-element scheme is proposed to estimate the unknown heat flux on the nozzle throat-insert inner contour and the inner wall temperature from temperature measurements of the nozzle throat-insert. The finite-element scheme can easily handle the irregularly shaped boundary. The capability of the proposed method is demonstrated in two major time-varying estimation cases. The computational results show that the proposed method has good estimation performance and facilitates practical implementation. It offers an effective analytical method to increase operational reliability and to support thermal-resistance layer design in the solid rocket motor. (author)
Smolka, Gert
1994-01-01
Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...
Directory of Open Access Journals (Sweden)
M. J. Savage
2010-01-01
Full Text Available The relatively recently developed scintillometry method, with a focus on the dual-beam surface layer scintillometer (SLS), allows boundary layer atmospheric turbulence, surface sensible heat and momentum flux to be estimated in real-time. Much of the previous research using the scintillometer method has involved the large aperture scintillometer method, with only a few studies using the SLS method. The SLS method has been mainly used by agrometeorologists, hydrologists and micrometeorologists for atmospheric stability and surface energy balance studies to obtain estimates of sensible heat, from which evaporation estimates representing areas of one hectare or larger are possible. Other applications include the use of the SLS method in obtaining crucial input parameters for atmospheric dispersion and turbulence models. The SLS method relies upon optical scintillation of a horizontal laser beam between transmitter and receiver, for a separation distance typically between 50 and 250 m, caused by refractive index inhomogeneities in the atmosphere that arise from turbulence fluctuations in air temperature and, to a much lesser extent, fluctuations in water vapour pressure. Measurements of SLS beam transmission allow turbulence of the atmosphere to be determined, from which sub-hourly, real-time and in situ path-weighted fluxes of sensible heat and momentum may be calculated by application of the Monin-Obukhov similarity theory. Unlike the eddy covariance (EC) method, for which corrections for flow distortion and coordinate rotation are applied, no corrections to the SLS measurements, apart from a correction for water vapour pressure, are applied. Also, path-weighted SLS estimates over the propagation path are obtained. The SLS method also offers high temporal measurement resolution and usually greater spatial coverage compared to EC, Bowen ratio energy balance, surface renewal and other sensible heat measurement methods. Applying the shortened surface
2010-01-01
... 7 Agriculture 8 ... Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...
7 CFR 981.408 - Inedible kernel.
2010-01-01
... 7 Agriculture 8 ... Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...
7 CFR 981.8 - Inedible kernel.
2010-01-01
... 7 Agriculture 8 ... Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...
International Nuclear Information System (INIS)
Hsi, C.-L.; Kuo, J.-T.
2008-01-01
Estimating the gross burning rate and heating value of solid residue burning in a power plant furnace is essential for adequate control to optimize energy conversion and plant performance. A model based on conservation equations of mass and thermal energy is established in this work to calculate the instantaneous gross burning rate and lower heating value of solid residue fired in a combustion chamber. Comparing the model with incineration plant control-room data indicates that satisfactory predictions of fuel burning rates and heating values can be obtained by assuming a moisture-to-carbon atomic ratio (f/a) within the typical range of 1.2 to 1.8. Agreement between the mass and thermal analysis and the bed-chemistry model is acceptable. The model would be useful for furnace fuel and air control strategy programming to achieve optimum performance in energy conversion and pollutant emission reduction
Estimation and harvesting of human heat power for wearable electronic devices
International Nuclear Information System (INIS)
Dziurdzia, P; Brzozowski, I; Bratek, P; Gelmuda, W; Kos, A
2016-01-01
The paper deals with the issue of self-powered wearable electronic devices that are capable of harvesting free available energy dissipated by the user in the form of human body heat. The free energy source is intended to be used as a secondary power source supporting the primary battery in a sensor bracelet. The main scope of the article is the presentation of a concept for a measuring setup used for quantitative estimation of heat power sources at different locations over the human body. A crucial role in the measurement of human heat is played by a thermoelectric module working in open-circuit mode. The results obtained during practical tests are confronted with the requirements of the dedicated thermoelectric generator. A prototype design of a human-warmth energy harvester with an ultra-low-power DC-DC converter based on the LTC3108 circuit is analysed
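The open-circuit measurement described above gives a thermoelectric module's no-load voltage; the maximum electrical power extractable from such a source into a matched load follows P = Voc^2 / (4R). A minimal sketch, with invented wrist-worn TEG figures (not the paper's measurements):

```python
def max_harvestable_power(v_oc, r_internal):
    """Matched-load power from a thermoelectric generator, in watts.

    v_oc: open-circuit voltage (V), r_internal: TEG internal resistance
    (ohm). Maximum power transfer occurs when the load equals r_internal,
    giving P = Voc**2 / (4 * R).
    """
    return v_oc**2 / (4.0 * r_internal)

# Illustrative reading: 50 mV open-circuit, 2.5 ohm internal resistance.
print(round(max_harvestable_power(0.05, 2.5) * 1e6, 1))  # -> 250.0 (microwatts)
```

Powers at this scale explain why the abstract pairs the harvester with an ultra-low-power DC-DC converter such as the LTC3108.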
Cost estimation of hydrogen and DME produced by nuclear heat utilization system II
International Nuclear Information System (INIS)
Shiina, Yasuaki; Nishihara, Tetsuo
2004-09-01
Utilization and production of hydrogen has been studied in order to spread the use of hydrogen energy by 2020 or 2030. It will, however, take many years for hydrogen energy to be used as easily as gasoline, diesel oil and city gas are used in the world today. During this period, liquid fuels with low CO2 release would be used together with hydrogen. Recently, dimethyl ether (DME) has attracted attention as one of the substitute liquid fuels for petroleum. Such liquid fuels can be produced from a mixed gas of hydrogen and carbon monoxide, which is produced from natural gas by steam reforming. Therefore, the system could become one of the candidate future systems for nuclear heat utilization. Following the study in 2002, we performed an economic evaluation of hydrogen and DME production by a nuclear heat utilization plant in which the heat generated by the HTGR is completely consumed for the production. The results show that the hydrogen price produced by nuclear heat was about 17% cheaper than the commercial price, owing to an increased recovery rate of high-purity hydrogen in the PSA process. The price of DME produced by the indirect method with nuclear heat was also about 17% cheaper than the commercial price, by producing high-purity hydrogen in the DME production process. As for DME, since the price of DME produced near oil fields in petroleum-exporting countries is lower than that of production in Japan, production of DME by nuclear heat in Japan is economically disadvantageous at this time. A trial study to estimate the price of DME produced by the direct method was also performed. From the present estimation, utilization of nuclear heat for hydrogen production would be more effective when the reduction of CO2 release is also taken into account. (author)
M K, Harsha Kumar; P S, Vishweshwara; N, Gnanasekaran; C, Balaji
2018-05-01
The major objectives in the design of thermal systems are obtaining information about thermophysical, transport and boundary properties. The main purpose of this paper is to estimate the unknown heat flux at the surface of a solid body. A constant-area mild steel fin is considered and the base is subjected to constant heat flux. During heating, natural convection heat transfer occurs from the fin to the ambient. The direct solution, which is the forward problem, is developed as a conjugate heat transfer problem from the fin, and the steady-state temperature distribution is recorded for any assumed heat flux. In order to model the natural convection heat transfer from the fin, an extended domain is created near the fin geometry, air is specified as the fluid medium, and the Navier-Stokes equations are solved incorporating the Boussinesq approximation. The computational time involved in executing the forward model is then reduced by developing a neural network (NN) between heat flux values and temperatures, based on the back-propagation algorithm. The conjugate heat transfer NN model is then coupled with a genetic algorithm (GA) for the solution of the inverse problem. Initially, GA is applied to the pure surrogate data; the results are then used as input to the Levenberg-Marquardt method, and such hybridization is shown to result in accurate estimation of the unknown heat flux. The hybrid method is then applied to the experimental temperatures to estimate the unknown heat flux. A satisfactory agreement between the estimated and actual heat flux is achieved by incorporating the hybrid method.
Estimating thermal diffusivity and specific heat from needle probe thermal conductivity data
Waite, W.F.; Gilbert, L.Y.; Winters, W.J.; Mason, D.H.
2006-01-01
Thermal diffusivity and specific heat can be estimated from thermal conductivity measurements made using a standard needle probe and a suitably high data acquisition rate. Thermal properties are calculated from the measured temperature change in a sample subjected to heating by a needle probe. Accurate thermal conductivity measurements are obtained from a linear fit to many tens or hundreds of temperature-change data points. In contrast, thermal diffusivity calculations require a nonlinear fit to the measured temperature change occurring in the first few tenths of a second of the measurement, resulting in a lower accuracy than that obtained for thermal conductivity. Specific heat is calculated from the ratio of thermal conductivity to diffusivity, and thus can have an uncertainty no better than that of the diffusivity estimate. Our thermal conductivity measurements of ice Ih and of tetrahydrofuran (THF) hydrate, made using a 1.6 mm outer diameter needle probe and a data acquisition rate of 18.2 points/s, agree with published results. Our thermal diffusivity and specific heat results reproduce published results within 25% for ice Ih and 3% for THF hydrate. © 2006 American Institute of Physics.
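The ratio relation above can be sketched directly: conductivity over diffusivity gives volumetric heat capacity, and dividing by density gives specific heat. The numbers below are rough literature-style values for ice Ih, used only for illustration:

```python
def specific_heat(k, kappa, rho):
    """Specific heat c_p = k / (rho * kappa), in J/(kg K).

    k: thermal conductivity, W/(m K); kappa: thermal diffusivity, m^2/s;
    rho: density, kg/m^3. Relative uncertainty in c_p is therefore at
    least that of kappa, as the abstract notes.
    """
    return k / (rho * kappa)

# Illustrative, approximate values for ice Ih near -10 C:
k = 2.3        # W/(m K)
kappa = 1.2e-6  # m^2/s
rho = 917.0    # kg/m^3
print(round(specific_heat(k, kappa, rho)))  # ~2090 J/(kg K)
```

Because the diffusivity fit uses only the first few tenths of a second of data, its (larger) relative error propagates directly into this quotient.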
2016-10-12
Validation of Temperature Histories for Structural Steel Welds Using Estimated Heat-Affected-Zone Edges. S.G. Lambrakos, Naval Research Laboratory, Washington, DC 20375-5320. Report NRL/MR/6394--16-9690.
Estimation of surface absorptivity in laser surface heating process with experimental data
International Nuclear Information System (INIS)
Chen, H-T; Wu, X-Y
2006-01-01
This study applies a hybrid technique of the Laplace transform and finite-difference methods, in conjunction with the least-squares method and experimental temperature data inside the test material, to predict the unknown surface temperature, heat flux and absorptivity for various surface conditions in the laser surface heating process. In this study, the functional form of the surface temperature is not known a priori and is assumed to be a function of time before performing the inverse calculation. In addition, the whole time domain is divided into several analysis sub-time intervals, on which these unknown estimates can then be predicted. In order to show the accuracy of the present inverse method, comparisons are made among the present estimates, direct results and previous results, showing that the present estimates agree with the direct results for the simulated problem. However, the present estimates of the surface absorptivity deviate slightly from previously estimated results obtained under the assumption of constant thermal properties. The effect of the surface conditions on the surface absorptivity and temperature is not negligible
Heat flux estimation in an infrared experimental furnace using an inverse method
International Nuclear Information System (INIS)
Le Bideau, P.; Ploteau, J.P.; Glouannec, P.
2009-01-01
Infrared emitters are widely used in industrial furnaces for thermal treatment. In these processes, knowledge of the incident heat flux on the surface of the product is a primary step towards optimising the control of the emitters and planning maintenance. For these reasons, it is necessary to develop autonomous flux meters that can meet these requirements. These sensors must give an in-line distribution of infrared irradiation in the tunnel furnace and must be able to measure high heat fluxes in severe thermal environments. In this paper we present a method for in-line assessment by solving an inverse heat conduction problem. A metallic mass is instrumented with thermocouples, and an inverse method allows the incident heat flux to be estimated. In the first part, attention is focused on a new design tool, a numerical code, for the evaluation of potential options during sensor design. In the second part we present the realization and testing of this 'indirect' flux meter and its associated inverse problem. 'Direct' detectors based on thermoelectric devices are compared with this new flux meter under the same conditions in the same furnace. Results prove that this technique is a reliable method, appropriate for high-temperature environments. The technique can be applied to furnaces where the heat flux is inaccessible to 'direct' measurements.
Real-Time Personalized Monitoring to Estimate Occupational Heat Stress in Ambient Assisted Working
Directory of Open Access Journals (Sweden)
Pablo Pancardo
2015-07-01
Full Text Available Ambient Assisted Working (AAW) is a discipline aiming to provide comfort and safety in the workplace through customization and technology. Workers’ comfort may be compromised in many labor situations, including those depending on environmental conditions, such as extremely hot weather that is conducive to heat stress. Occupational heat stress (OHS) occurs when a worker performs uninterrupted physical activity in a hot environment. OHS can produce strain on the body, which leads to discomfort and eventually to heat illness and even death. Related ISO standards contain methods to estimate OHS and to ensure the safety and health of workers, but they are subjective, impersonal, performed a posteriori and even invasive. This paper focuses on the design and development of real-time personalized monitoring for a more effective and objective estimation of OHS, taking into account the individual user profile and fusing data from environmental and unobtrusive body sensors. The formulas employed in this work were taken from different domains and joined in the method that we propose. It is based on calculations that enable continuous surveillance of physical activity performance in a comfortable and healthy manner. In this proposal, we found that OHS can be estimated in a way that satisfies the following criteria: objective, personalized, in situ, in real time, just in time and unobtrusive. This enables timely notice for workers to make decisions based on objective information to control OHS.
Reconciling estimates of the ratio of heat and salt fluxes at the ice-ocean interface
Keitzl, T.; Mellado, J. P.; Notz, D.
2016-12-01
The heat exchange between floating ice and the underlying ocean is determined by the interplay of diffusive fluxes directly at the ice-ocean interface and turbulent fluxes away from it. In this study, we examine this interplay through direct numerical simulations of free convection. Our results show that an estimation of the interface flux ratio based on direct measurements of the turbulent fluxes can be difficult because the flux ratio varies with depth. As an alternative, we present a consistent evaluation of the flux ratio based on the total heat and salt fluxes across the boundary layer. This approach allows us to reconcile previous estimates of the ice-ocean interface conditions. We find that the ratio of heat and salt fluxes directly at the interface is 83-100 rather than 33 as determined by previous turbulence measurements in the outer layer. This can cause errors in the estimated ice-ablation rate from field measurements of up to 40% if they are based on the three-equation formulation.
Cid, Jaime A; von Davier, Alina A
2015-05-01
Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
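The continuization step discussed above amounts to kernel smoothing of a discrete score distribution. A minimal sketch contrasting the Gaussian kernel with the compact-support Epanechnikov kernel on synthetic skewed scores (the bandwidths and the binomial stand-in for beta-binomial data are invented for illustration):

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def epanechnikov_kernel(u):
    # Compact support [-1, 1] limits mass pushed past the score range,
    # which is why it can reduce boundary bias relative to the Gaussian.
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def kde(x_grid, samples, h, kernel):
    """Continuized density: average of kernels centered at each score."""
    u = (x_grid[:, None] - samples[None, :]) / h
    return kernel(u).mean(axis=1) / h

rng = np.random.default_rng(1)
scores = rng.binomial(40, 0.7, size=500).astype(float)  # skewed, discrete
grid = np.linspace(0.0, 40.0, 401)
f_gauss = kde(grid, scores, h=1.5, kernel=gaussian_kernel)
f_epan = kde(grid, scores, h=3.0, kernel=epanechnikov_kernel)

dx = grid[1] - grid[0]
print(round(float(f_gauss.sum() * dx), 2),
      round(float(f_epan.sum() * dx), 2))  # both close to 1.0
```

An adaptive kernel, as proposed in Study II, would additionally let h vary with the local density of scores.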
Directory of Open Access Journals (Sweden)
Freire F. B.
2004-01-01
Full Text Available This work is concerned with the coupled estimation of the heat generated by the reaction (Qr) and the overall heat transfer parameter (UA) during the terpolymerization of styrene, butyl acrylate and methyl methacrylate, from temperature measurements and the reactor heat balance. By making specific assumptions about the dynamics of the evolution of UA and Qr, we propose a cascade of observers to successively estimate these two parameters without the need for additional measurements of on-line samples. A further aspect of our approach is that only the energy balance around the reactor is employed, which means that the flow rate of the cooling-jacket fluid is not required.
Estimating heats of detonation and detonation velocities of aromatic energetic compounds
Energy Technology Data Exchange (ETDEWEB)
Keshavarz, Mohammad Hossein [Department of Chemistry, Malek-ashtar University of Technology, Shahin-shahr, P. O. Box 83145/115 (Iran)
2008-12-15
A new method is introduced to provide reliable estimates of the heats of detonation of aromatic energetic compounds. In a first step, this procedure assumes that the heat of detonation of an explosive compound of composition CaHbNcOd can be approximated as the difference between the heat of formation of all H2O-CO2 arbitrary (H2O, CO2, N2) detonation products and that of the explosive, divided by the formula weight of the explosive. The overestimated results based on the (H2O-CO2 arbitrary) assumption are then corrected in a second step. Predicted heats of detonation of pure energetic compounds, with the product H2O in the liquid state, for 31 aromatic energetic compounds have root mean square (rms) deviations from experiment of 2.08 and 0.34 kJ/g for the (H2O-CO2 arbitrary) assumption and the new method, respectively. The new method also gives good results compared to a second set of decomposition products, which considers H2, N2, H2O, CO, and CO2 as the major gaseous products. It is shown how the heats of detonation predicted by the new method can be used to obtain reliable estimates of the detonation velocity over a wide range of loading densities. (Abstract Copyright [2008], Wiley Periodicals, Inc.)
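The first-step approximation above can be sketched numerically. The product heats of formation are standard values; the TNT composition, heat of formation and formula weight are approximate literature figures used only to exercise the formula, and the result illustrates the overestimation that the paper's second step corrects:

```python
# Standard heats of formation of the assumed products (kJ/mol).
DHF_H2O_LIQ = -285.8
DHF_CO2 = -393.5

def heat_of_detonation(a, b, c, d, dhf_explosive, formula_weight):
    """First-step (H2O-CO2 arbitrary) estimate for CaHbNcOd, in kJ/g.

    Products are taken as b/2 H2O, (d - b/2)/2 CO2 and c/2 N2 (N2 has
    zero heat of formation); oxygen is assumed sufficient to follow
    this product hierarchy.
    """
    n_h2o = b / 2.0
    n_co2 = (d - n_h2o) / 2.0
    dhf_products = n_h2o * DHF_H2O_LIQ + n_co2 * DHF_CO2
    return -(dhf_products - dhf_explosive) / formula_weight

# TNT, C7H5N3O6: formula weight ~227.13 g/mol, dHf ~ -63.2 kJ/mol
# (approximate figures, for illustration only).
print(round(heat_of_detonation(7, 5, 3, 6, -63.2, 227.13), 2))  # ~5.9 kJ/g
```

The value exceeds typical experimental heats of detonation for TNT, consistent with the abstract's statement that the first step overestimates.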
Input Space Regularization Stabilizes Pre-images for Kernel PCA De-noising
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2009-01-01
Solution of the pre-image problem is key to efficient nonlinear de-noising using kernel Principal Component Analysis. Pre-image estimation is inherently ill-posed for typical kernels used in applications and consequently the most widely used estimation schemes lack stability. For de...
Fuqua-Haviland, H.; Panovska, S.; Mallik, A.; Bremner, P. M.; McDonough, W. F.
2017-12-01
Constraining the heat producing element (HPE) concentrations of the Moon is important for understanding the thermal state of the interior. The lunar HPE budget is debated to be suprachondritic [1] to chondritic [2]. The Moon is differentiated, thus, each reservoir has a distinct HPE signature complicating this effort. The thermal profile of the lunar interior has been constructed using HPE concentrations of an ordinary chondrite (U = 0.0068 ppm; Th = 0.025 ppm; K = 17 ppm) which yields a conservative low estimate [2, 3, 4]. A later study estimated the bulk lunar mantle HPE concentrations (U = 0.039 ppm; Th = 0.15 ppm; K = 212 ppm) based on measurements of Apollo pyroclastic glasses [5] assuming that these glasses represent the least fractionated, near-primary lunar mantle melts, hence, are the best proxies for capturing mantle composition. In this study, we independently validate the revised estimate by using HPE concentrations [5] to construct a conductive lunar thermal profile, or selenotherm. We compare our conductive profile to the range of valid temperatures. We demonstrate the HPE concentrations reported by [5], when used in a simple 1D spherical thermal conduction equation, yield an impossibly hot mantle with temperatures in excess of 4,000 K (Fig 1). This confirms their revised estimate is not representative of the bulk lunar mantle, and perhaps only representative of a locally enriched mantle domain. We believe that their Low-Ti avg. source estimate (Th = 0.055 ppm, Th/U=4; K/U=1700), with the least KREEP assimilation is the closest representation of the bulk lunar mantle, producing 3E-12 W/kg of heat. This estimate is close to that of the Earth (5E-12 W/kg), indicating that the bulk Earth and lunar mantles are similar in their HPE constituents. We have used the lunar mantle heat production, in conjunction with HPE estimates of the Fe-Ti-rich cumulates (high Ti-source estimate from [5]) and measurements of crustal ferroan anorthite [6], to capture the
Estimating the Condition of the Heat Resistant Lining in an Electrical Reduction Furnace
Directory of Open Access Journals (Sweden)
Jan G. Waalmann
1988-01-01
Full Text Available This paper presents a system for estimating the condition of the heat-resistant lining in an electrical reduction furnace for ferrosilicon. The system uses temperatures measured with thermocouples placed on the outside of the furnace pot. These measurements are used, together with a mathematical model of the temperature distribution in the lining, in a recursive least squares algorithm to estimate the position of 'the transformation front'. The system is part of a monitoring system which is being developed in the AIP project 'Condition monitoring of strongly exposed process equipment in the ferroalloy industry'. The estimator runs on-line, and results are presented in colour graphics on a display unit. The goal is to locate the transformation front with an accuracy of ±5 cm.
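The recursive least squares step mentioned above can be sketched on a toy problem; the linear wall-temperature model and all numbers below are invented for illustration, not taken from the furnace system:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least-squares update with forgetting factor lam.

    theta: parameter estimate (n,); P: covariance (n, n);
    phi: regressor (n,); y: new scalar measurement.
    """
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + (phi.T @ P @ phi).item())    # gain vector
    err = y - (phi.T @ theta.reshape(-1, 1)).item()   # prediction error
    theta = theta + k.ravel() * err
    P = (P - k @ phi.T @ P) / lam
    return theta, P

# Toy steady-state model: temperature is linear in distance x from the
# hot face, T(x) = T0 - g*x; estimate (T0, g) from noisy sensor readings.
rng = np.random.default_rng(2)
theta = np.zeros(2)
P = 1e3 * np.eye(2)
for _ in range(200):
    x = rng.uniform(0.0, 0.5)                      # sensor position, m
    y = 1400.0 - 2000.0 * x + rng.normal(0, 5.0)   # noisy reading, C
    theta, P = rls_step(theta, P, np.array([1.0, -x]), y)
print(theta.round(1))  # approaches [1400., 2000.]
```

The forgetting factor lets the estimate track a slowly moving front by discounting old measurements.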
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
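A minimal sketch of the quantization idea described above: inputs closer than a threshold to an existing center update that center's coefficient instead of growing the network. The learning rate, threshold, kernel width and the toy regression task are illustrative choices, not the paper's settings:

```python
import numpy as np

def gauss_k(x, centers, sigma):
    """Gaussian kernel between one input and an array of centers."""
    return np.exp(-np.sum((x - centers) ** 2, axis=-1) / (2 * sigma**2))

class QKLMS:
    def __init__(self, eta=0.5, eps=0.2, sigma=0.4):
        self.eta, self.eps, self.sigma = eta, eps, sigma
        self.centers, self.alphas = [], []

    def predict(self, x):
        if not self.centers:
            return 0.0
        C = np.asarray(self.centers)
        return float(np.asarray(self.alphas) @ gauss_k(x, C, self.sigma))

    def update(self, x, y):
        e = y - self.predict(x)
        if self.centers:
            C = np.asarray(self.centers)
            dists = np.linalg.norm(C - x, axis=1)
            j = int(np.argmin(dists))
            if dists[j] <= self.eps:
                # Quantize: reuse the closest center, no network growth.
                self.alphas[j] += self.eta * e
                return
        self.centers.append(x)
        self.alphas.append(self.eta * e)

# Static function estimation on a 1-D toy problem.
rng = np.random.default_rng(3)
f = QKLMS()
for _ in range(2000):
    x = rng.uniform(-2.0, 2.0, size=1)
    f.update(x, np.sin(2 * x[0]) + rng.normal(0, 0.05))
print(len(f.centers))  # far fewer centers than the 2000 samples seen
```

With quantization size eps = 0.2 on [-2, 2], the center count is bounded by the packing of the input space rather than by the sample count.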
Kernel-based tests for joint independence
DEFF Research Database (Denmark)
Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard
2018-01-01
We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two-variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test...
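For the two-variable case (d = 2), the HSIC building block and a permutation test can be sketched as follows; the Gaussian kernel width, sample size and permutation count are arbitrary illustrative choices:

```python
import numpy as np

def gram(x, sigma=1.0):
    """Gaussian kernel Gram matrix for a 1-D sample."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma**2))

def hsic(K, L):
    """Biased empirical HSIC from two Gram matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / n**2

rng = np.random.default_rng(4)
n = 100
x = rng.normal(size=n)
y = x + 0.3 * rng.normal(size=n)          # strongly dependent pair
K, L = gram(x), gram(y)
stat = hsic(K, L)

# Permutation test: shuffling y breaks any dependence, giving a null
# sample of the statistic's distribution under independence.
perms = [hsic(K, gram(y[rng.permutation(n)])) for _ in range(200)]
p_value = float(np.mean([s >= stat for s in perms]))
print(p_value < 0.05)  # True: dependence detected
```

dHSIC generalizes this by centering against the product of all $d$ marginal embeddings rather than just two.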
Kernel-based noise filtering of neutron detector signals
International Nuclear Information System (INIS)
Park, Moon Ghu; Shin, Ho Cheol; Lee, Eun Ki
2007-01-01
This paper describes recently developed techniques for effective filtering of neutron detector signal noise. In this paper, three kinds of noise filters are proposed and their performance is demonstrated for the estimation of reactivity. The tested filters are based on the unilateral kernel filter, unilateral kernel filter with adaptive bandwidth and bilateral filter to show their effectiveness in edge preservation. Filtering performance is compared with conventional low-pass and wavelet filters. The bilateral filter shows a remarkable improvement compared with unilateral kernel and wavelet filters. The effectiveness and simplicity of the unilateral kernel filter with adaptive bandwidth is also demonstrated by applying it to the reactivity measurement performed during reactor start-up physics tests
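The edge preservation attributed to the bilateral filter above comes from weighting neighbors by both temporal distance and value distance. A 1-D sketch on a synthetic noisy step (all parameters are illustrative, not tuned for detector signals):

```python
import numpy as np

def bilateral_1d(signal, sigma_s=3.0, sigma_r=0.5, radius=8):
    """Bilateral filter for a 1-D signal.

    Weights combine closeness in time (sigma_s) with closeness in value
    (sigma_r), so sharp level changes -- e.g. a reactivity step -- are
    smoothed far less than ordinary noise.
    """
    out = np.empty_like(signal)
    idx = np.arange(-radius, radius + 1)
    w_s = np.exp(-idx**2 / (2 * sigma_s**2))          # spatial weights
    padded = np.pad(signal, radius, mode="edge")
    for i in range(len(signal)):
        window = padded[i:i + 2 * radius + 1]
        w_r = np.exp(-(window - signal[i]) ** 2 / (2 * sigma_r**2))
        w = w_s * w_r                                  # range weights
        out[i] = np.sum(w * window) / np.sum(w)
    return out

# Noisy step, like a detector current during a power change.
rng = np.random.default_rng(5)
x = np.concatenate([np.zeros(100), np.ones(100)]) + rng.normal(0, 0.1, 200)
y = bilateral_1d(x)
print(round(float(y[50]), 1), round(float(y[150]), 1))  # levels near 0 and 1 preserved
```

A purely unilateral (spatial-only) kernel would blur the step over roughly the filter radius; the range term suppresses averaging across the jump.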
Omnibus risk assessment via accelerated failure time kernel machine modeling.
Sinnott, Jennifer A; Cai, Tianxi
2013-12-01
Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.
International Nuclear Information System (INIS)
Koh, Jae Hwa; Yoon, Duck Joo
2009-01-01
As a part of the project 'Development of hydrogen production technologies by high temperature electrolysis using a very high temperature reactor', we have developed an electrolyzer model for the high temperature steam electrolysis (HTSE) system and carried out preliminary estimations of the effects of heat recovery on the HTSE hydrogen production system. To produce hydrogen massively using nuclear energy, the HTSE process is one of the promising technologies, together with the sulfur-iodine and hybrid sulfur processes. The HTSE produces hydrogen through an electrochemical reaction within the solid oxide electrolysis cell (SOEC), which is the reverse reaction of a solid oxide fuel cell (SOFC). The HTSE system generally operates in the temperature range of 700-900 °C. Advantages of HTSE hydrogen production are (a) clean hydrogen production from water without carbon oxide emission, (b) a synergy effect from using current SOFC technology and (c) higher thermal efficiency when the system is coupled to a nuclear reactor. Since the HTSE system operates above 700 °C, the use of heat recovery is an important consideration for higher efficiency. In this paper, four different heat recovery configurations for the HTSE system have been investigated and estimated.
Structural observability analysis and EKF based parameter estimation of building heating models
Directory of Open Access Journals (Sweden)
D.W.U. Perera
2016-07-01
Research on enhanced energy-efficient buildings has received much recognition in recent years owing to their high energy consumption. Increasing energy needs can be precisely controlled by employing advanced controllers for building Heating, Ventilation, and Air-Conditioning (HVAC) systems. Advanced controllers require a mathematical building heating model to operate, and these models need to be accurate and computationally efficient. One main concern associated with such models is the accurate estimation of the unknown model parameters. This paper presents the feasibility of implementing a simplified building heating model and the computation of physical parameters using an off-line approach. Structural observability analysis is conducted using graph-theoretic techniques to analyze the observability of the developed system model. Then the Extended Kalman Filter (EKF) algorithm is utilized for parameter estimation using real measurements of a single-zone building. The simulation-based results confirm that even with a simple model, the EKF follows the state variables accurately. The predicted parameters vary depending on the inputs and disturbances.
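As a sketch of the joint state-parameter idea used in this abstract (not the paper's actual building model), consider a single-zone energy balance dT/dt = -a(T - T_out) + bQ with the loss coefficient a unknown; the EKF augments the state with a and estimates both from a noisy temperature sensor. All numerical values below are invented for illustration.

```python
import numpy as np

# Single-zone balance: dT/dt = -a*(T - T_out) + b*Q, loss coefficient a unknown.
# Augmented EKF state x = [T, a]; all numbers are illustrative.
dt, T_out, Q, b = 60.0, 5.0, 1000.0, 1e-4
a_true = 2e-3

def step(T, a):
    return T + dt * (-a * (T - T_out) + b * Q)

rng = np.random.default_rng(0)
x = np.array([20.0, 5e-4])               # initial guess: a is 4x too small
P = np.diag([1.0, 1e-6])
Qn = np.diag([1e-4, 1e-10])              # process noise covariance
Rn = 0.05 ** 2                           # sensor noise variance
H = np.array([[1.0, 0.0]])               # only temperature is measured

T_sim = 20.0
for _ in range(2000):
    T_sim = step(T_sim, a_true)          # "true" plant
    z = T_sim + 0.05 * rng.normal()      # noisy temperature reading

    T, a = x                             # predict with current estimates
    x = np.array([step(T, a), a])
    F = np.array([[1 - dt * a, -dt * (T - T_out)], [0.0, 1.0]])
    P = F @ P @ F.T + Qn

    S = H @ P @ H.T + Rn                 # innovation covariance and update
    K = P @ H.T / S
    x = x + (K * (z - x[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
```

The cross terms that the Jacobian F introduces into P are what make the unmeasured parameter a identifiable from temperature data alone.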
Bruemmer, David J [Idaho Falls, ID
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels, with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.
A single-probe heat pulse method for estimating sap velocity in trees.
López-Bernal, Álvaro; Testi, Luca; Villalobos, Francisco J
2017-10-01
Available sap flow methods are still far from being simple, cheap and reliable enough to be used beyond very specific research purposes. This study presents and tests a new single-probe heat pulse (SPHP) method for monitoring sap velocity in trees using a single-probe sensor, rather than the multi-probe arrangements used up to now. Based on the fundamental conduction-convection principles of heat transport in sapwood, convective velocity (Vh) is estimated from the temperature increase in the heater after the application of a heat pulse (ΔT). The method was validated against measurements performed with the compensation heat pulse (CHP) technique in field trees of six different species. To do so, a dedicated three-probe sensor capable of simultaneously applying both methods was produced and used. Experimental measurements in the six species showed an excellent agreement between SPHP and CHP outputs for moderate to high flow rates, confirming the applicability of the method. In relation to other sap flow methods, SPHP presents several significant advantages: it requires low power inputs, it uses technically simpler and potentially cheaper instrumentation, the physical damage to the tree is minimal and artefacts caused by incorrect probe spacing and alignment are removed. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
An improved routine for the fast estimate of ion cyclotron heating efficiency in tokamak plasmas
International Nuclear Information System (INIS)
Brambilla, M.
1992-02-01
The subroutine ICEVAL for the rapid simulation of Ion Cyclotron Heating in tokamak plasmas is based on analytic estimates of the wave behaviour near resonances, and on drastic but reasonable simplifications of the real geometry. The subroutine has been rewritten to improve the model and to facilitate its use as input in transport codes. In the new version, the influence of quasilinear minority heating on the damping efficiency is taken into account using the well-known Stix analytic approximation. Among other improvements are: a) the possibility of considering plasmas with more than two ion species; b) inclusion of Landau, transit-time and collisional damping on the electrons not localised at resonances; c) better models for the antenna spectrum and for the construction of the power deposition profiles. The results of ICEVAL are compared in detail with those of the full-wave code FELICE for the case of Hydrogen minority heating in a Deuterium plasma; except for details which depend on the excitation of global eigenmodes, agreement is excellent. ICEVAL is also used to investigate the enhancement of the absorption efficiency due to quasilinear heating of the minority ions. The effect is a strongly non-linear function of the available power, and decreases rapidly with increasing concentration. For parameters typical of Asdex Upgrade plasmas, about 4 MW are required to produce a significant increase of the single-pass absorption at concentrations between 10 and 20%. (orig.)
Estimation of catchment averaged sensible heat fluxes using a large aperture scintillometer
Directory of Open Access Journals (Sweden)
Samain Bruno
2012-05-01
Evapotranspiration rates at the catchment scale are very difficult to quantify. One possible way to continuously observe this variable is to estimate sensible heat fluxes (H) across large distances (on the order of kilometers) using a large aperture scintillometer (LAS), and to invert these observations into evapotranspiration rates, under the assumption that the LAS observations are representative of the entire catchment. The objective of this paper is to assess whether measured sensible heat fluxes from a LAS over a long distance (9.5 km) can be assumed to be valid for a 102.3 km2 heterogeneous catchment. Therefore, a fully process-based water and energy balance model with a spatial resolution of 50 m has been thoroughly calibrated and validated for the Bellebeek catchment in Belgium, and a footprint analysis has been performed. In general, the sensible heat fluxes from the LAS compared well with the modeled sensible heat fluxes within the footprint. Moreover, as the modeled H within the footprint has been found to be almost equal to the modeled catchment-averaged H, it can be concluded that the scintillometer measurements over a distance of 9.5 km and at an effective height of 68 m are representative for the entire catchment.
Two-wavelength Method Estimates Heat fluxes over Heterogeneous Surface in North-China
Zhang, G.; Zheng, N.; Zhang, J.
2017-12-01
Heat flux is a key process in hydrological and heat transfer studies of the soil-plant-atmosphere continuum (SPAC), and it is becoming an important topic in meteorology, hydrology, ecology and other related research areas. Because the temporal and spatial variation of fluxes at the regional scale is very complicated, it is still difficult to measure fluxes at the kilometer scale over a heterogeneous surface. A technique called the "two-wavelength method", which combines an optical scintillometer with a microwave scintillometer, is able to measure both sensible and latent heat fluxes over large spatial scales at the same time. The main purpose of this study is to investigate the fluxes over non-uniform terrain in North China. Estimation of heat fluxes was carried out with the optical-microwave scintillometer and an eddy covariance (EC) system over a heterogeneous surface in the Tai Hang Mountains, China, with the EC method set as the benchmark. Structure parameters obtained from the scintillometers showed typical Cn2 values around 10^-13 m^-2/3 for the microwave scintillometer and around 10^-15 m^-2/3 for the optical scintillometer. The sensible heat fluxes (H) derived from the scintillometer and the EC system correlated with a slope of 1.05 and R2 = 0.75, while the latent heat fluxes (LE) correlated with a slope of 1.29 and R2 = 0.67. It was also found that heat fluxes derived from the two systems showed good agreement (R2 = 0.9 for LE, R2 = 0.97 for H) when the Bowen ratio (β) was 1.03, while discrepancies were significant when β = 0.75, with an RMSD of 139.22 W/m2 in H and 230.85 W/m2 in LE. The experimental results show that the two-wavelength method gives larger heat fluxes over the study area, and a deeper study should be conducted. We expect that our analysis can promote the application of the scintillometry method in regional evapotranspiration measurements and relevant disciplines.
Regularization and error estimates for asymmetric backward nonhomogeneous heat equations in a ball
Directory of Open Access Journals (Sweden)
Le Minh Triet
2016-09-01
The backward heat problem (BHP) has been researched by many authors in the last five decades; it consists in recovering the initial distribution from the final temperature data. There are some articles [1,2,3] on the axi-symmetric BHP in a disk, but studies in spherical coordinates are rare. Therefore, we study a backward problem for the nonhomogeneous heat equation associated with asymmetric final data in a ball. In this article, we modify the quasi-boundary value method to construct a stable approximate solution for this problem. As a result, we obtain a regularized solution and sharp estimates for its error. Finally, a numerical experiment is provided to illustrate our method.
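In spectral terms the instability of this abstract's problem, and its quasi-boundary value remedy, are easy to see. The following sketch uses the 1-D heat equation on (0, π) with Dirichlet conditions rather than the paper's spherical setting, and all parameter values are illustrative.

```python
import numpy as np

# 1-D model problem u_t = u_xx on (0, pi), Dirichlet BCs; eigenfunctions
# sin(nx), eigenvalues n^2. Forward evolution damps the n-th Fourier
# coefficient by exp(-n^2*T); naive inversion amplifies data noise by
# exp(+n^2*T), and the quasi-boundary value filter tames that growth.
N, T, eps = 20, 0.5, 1e-3
n = np.arange(1, N + 1)

f_coef = np.zeros(N)
f_coef[0], f_coef[2] = 1.0, 0.3            # true initial data: sin x + 0.3 sin 3x
g_coef = f_coef * np.exp(-n ** 2 * T)      # exact final data at time T

rng = np.random.default_rng(0)
g_noisy = g_coef + 1e-4 * rng.normal(size=N)

naive = g_noisy * np.exp(n ** 2 * T)       # unstable exact inversion
regularized = g_noisy * np.exp(n ** 2 * T) / (1 + eps * np.exp(n ** 2 * T))
```

The filter caps the amplification of mode n at 1/eps, which is exactly the stabilization the quasi-boundary value method provides.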
Hermans, Thomas; Daoudi, Moubarak; Vandenbohede, Alexander; Robert, Tanguy; Caterina, David; Nguyen, Frédéric
2012-01-01
Groundwater resources are increasingly used around the world as geothermal systems. Understanding physical processes and quantification of parameters determining heat transport in porous media is therefore important. Geophysical methods may be useful in order to yield additional information with greater coverage than conventional wells. We report a heat transport study during a shallow heat injection and storage field test. Heated water (about 50°C) was injected for 6 days at the rate of 80 l...
Estimation of combustion flue gas acid dew point during heat recovery and efficiency gain
Energy Technology Data Exchange (ETDEWEB)
Bahadori, A. [Curtin University of Technology, Perth, WA (Australia)
2011-06-15
When cooling combustion flue gas for heat recovery and efficiency gain, the temperature must not be allowed to drop below the sulfur trioxide dew point. Below the SO3 dew point, very corrosive sulfuric acid forms and leads to operational hazards on metal surfaces. In the present work, a simple-to-use predictive tool, which is easier and less complicated than existing approaches and requires fewer computations, is formulated to give an appropriate estimation of the acid dew point during combustion flue gas cooling as a function of fuel type, sulfur content in the fuel, and excess air level. The resulting information can then be applied to estimate the acid dew point for sulfur contents in various fuels up to a 0.10 volume fraction in gas (0.10 mass fraction in liquid), excess air fractions up to 0.25, and elemental concentrations of carbon up to 3. The proposed predictive tool shows very good agreement with the reported data, with an average absolute deviation of around 3.18%. This approach can be of immense practical value for engineers and scientists for a quick estimation of the acid dew point during combustion flue gas cooling for heat recovery and efficiency gain over a wide range of operating conditions, without the necessity of any pilot plant setup and tedious experimental trials. In particular, process and combustion engineers would find the tool user friendly, involving transparent calculations with no complex expressions.
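The paper's own predictive tool is not reproduced here, but the classic Verhoff-Banchero correlation illustrates the kind of closed-form dew-point estimate involved (partial pressures in mmHg, temperature in kelvin; the example flue-gas composition is invented).

```python
import math

def acid_dew_point_K(p_h2o_mmHg, p_so3_mmHg):
    # Verhoff-Banchero correlation (illustrative stand-in, not the paper's tool):
    # 1000/T = 2.276 - 0.0294 ln(pH2O) - 0.0858 ln(pSO3)
    #          + 0.0062 ln(pH2O) ln(pSO3), pressures in mmHg, T in kelvin.
    lw, ls = math.log(p_h2o_mmHg), math.log(p_so3_mmHg)
    return 1000.0 / (2.276 - 0.0294 * lw - 0.0858 * ls + 0.0062 * lw * ls)

t_dew = acid_dew_point_K(76.0, 7.6e-3)       # ~10% H2O, ~10 ppm SO3 at 1 atm
t_dew_rich = acid_dew_point_K(76.0, 7.6e-2)  # more SO3 raises the dew point
```

Values around 130-150 °C for typical sulfur-bearing fuels are why flue-gas exit temperatures are kept well above the metal-surface dew point.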
Energy Technology Data Exchange (ETDEWEB)
Kaliatka, Tadas; Kaliatka, Algirdas; Uspuras, Eudenijus; Vaisnoras, Mindaugas [Lithuanian Energy Institute, Kaunas (Lithuania); Mochizuki, Hiroyasu; Rooijen, W.F.G. van [Fukui Univ. (Japan). Research Inst. of Nuclear Engineering
2017-05-15
Because of the uncertainties associated with the definition of Critical Heat Flux (CHF), a best estimate approach should be used. In this paper the application of the best-estimate approach to the analysis of the CHF phenomenon in boiling water reactors is presented. First, nodalizations of RBMK-1500, BWR-5 and ABWR fuel assemblies were developed using the RELAP5 code. Using the developed models, the CHF and Critical Heat Flux Ratio (CHFR) for the different reactor types were evaluated. The CHF calculation results were compared with well-known experimental data for light water reactors. An uncertainty and sensitivity analysis of the ABWR 8 x 8 fuel assembly CHFR calculation was performed using the GRS (Germany) methodology with the SUSA tool. Finally, the values of the Minimum Critical Power Ratio (MCPR) were calculated for the RBMK-1500, BWR-5 and ABWR fuel assemblies. The paper demonstrates how the results of the sensitivity analysis can be used to obtain MCPR values that cover all uncertainties while remaining best estimate.
Preparation and characterization of active carbon using palm kernel ...
African Journals Online (AJOL)
Activated carbons were prepared from palm kernel shells. The carbonization temperature was 600 °C, at a residence time of 5 min for each process. Chemical activation was done by heating a mixture of the carbonized material and the activating agents at a temperature of 70 °C to form a paste, followed by subsequent cooling and ...
Guo, Zhouchao; Lu, Tao; Liu, Bo
2017-04-01
Turbulent penetration can occur when hot and cold fluids mix in a horizontal T-junction pipe at nuclear plants. Caused by the unstable turbulent penetration, temperature fluctuations with large amplitude and high frequency can lead to time-varying wall thermal stress and even thermal fatigue on the inner wall. Numerous cases, however, exist where inner wall temperatures cannot be measured and only outer wall temperature measurements are feasible. Therefore, it is one of the popular research areas in nuclear science and engineering to estimate temperature fluctuations on the inner wall from measurements of outer wall temperatures without damaging the structure of the pipe. In this study, both the one-dimensional (1D) and the two-dimensional (2D) inverse heat conduction problem (IHCP) were solved to estimate the temperature fluctuations on the inner wall. First, numerical models of both the 1D and the 2D direct heat conduction problem (DHCP) were structured in MATLAB, based on the finite difference method with an implicit scheme. Second, both the 1D IHCP and the 2D IHCP were solved by the steepest descent method (SDM), and the DHCP results of temperatures on the outer wall were used to estimate the temperature fluctuations on the inner wall. Third, we compared the temperature fluctuations on the inner wall estimated by the 1D IHCP with those estimated by the 2D IHCP in four cases: (1) when the maximum disturbance of temperature of fluid inside the pipe was 3°C, (2) when the maximum disturbance of temperature of fluid inside the pipe was 30°C, (3) when the maximum disturbance of temperature of fluid inside the pipe was 160°C, and (4) when the fluid temperatures inside the pipe were random from 50°C to 210°C.
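A minimal version of the approach in this abstract can be sketched for a 1-D slab: an explicit finite-difference solver plays the role of the DHCP, and the inner-wall temperature history is recovered from noisy outer-wall data by steepest descent on the least-squares misfit. For brevity the gradient is computed by finite differences rather than an adjoint formulation, and all values are illustrative.

```python
import numpy as np

M, K, r = 3, 50, 0.25            # space nodes, time steps, grid Fourier number

def forward(q):
    # DHCP: inner wall held at q[k] (Dirichlet), outer wall insulated
    T = np.zeros(M + 1)
    out = np.zeros(K)
    for k in range(K):
        T[0] = q[k]
        T[1:M] = T[1:M] + r * (T[2:] - 2 * T[1:M] + T[:M - 1])
        T[M] = T[M - 1]          # insulated outer boundary
        out[k] = T[M]            # simulated outer-wall measurement
    return out

def misfit(q, data):
    return np.mean((forward(q) - data) ** 2)

q_true = 50 + 30 * np.sin(np.linspace(0, 2 * np.pi, K))
rng = np.random.default_rng(0)
data = forward(q_true) + 0.1 * rng.normal(size=K)

q0 = np.full(K, 50.0)            # initial guess: constant inner-wall temperature
q = q0.copy()
for _ in range(300):
    base = misfit(q, data)
    g = np.zeros(K)
    for j in range(K):           # crude finite-difference gradient
        dq = q.copy()
        dq[j] += 1e-3
        g[j] = (misfit(dq, data) - base) / 1e-3
    q -= 10.0 * g                # steepest-descent update
```

The slab attenuates and delays the inner-wall fluctuation, which is why the inverse problem needs an iterative fit rather than a direct read-off.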
Mixture Density Mercer Kernels: A Method to Learn Kernels
National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...
Stem heat balance method to estimate transpiration of young orange and mango plants
Vellame,Lucas M.; Coelho Filho,Maurício A.; Paz,Vital P. S.; Coelho,Eugênio F.
2010-01-01
The present study had as its main objective the evaluation of the heat balance method in young orange and mango plants under protected environment. The work was carried out at Embrapa Cassava and Tropical Fruits, Cruz das Almas, BA. Later on, estimates of sap flow were conducted for two mango plants cultivated in pots of 15 and 50 L installed on weighting platforms of 45 and 140 kg; sap flow was determined in three orange plants, two of which were also installed on weighing platforms. The val...
Residual heat estimation by using Cherenkov radiation in Tehran Research Reactor
Energy Technology Data Exchange (ETDEWEB)
Arkani, M. [Department of Nuclear Engineering, Azad University, Tehran (Iran, Islamic Republic of); Gharib, M. [Tehran Research Reactor, Nuclear Science and Technology Research Institute (NSTRI), Tehran 14395-836 (Iran, Islamic Republic of)], E-mail: mgharib@aeoi.org.ir
2008-11-11
An experiment was set up in the Tehran 5 MW research reactor to observe the Cherenkov radiation response during post-shutdown periods. An ordinary PC camera was used for this purpose. The theoretical estimate of the total power, including decay heat and neutronic power, was checked against the detector response. A general agreement suggests that a similar setup based on the present experience could serve as an independent channel for similar purposes in other reactors, especially with the aim of fuel surveillance and monitoring.
Residual heat estimation by using Cherenkov radiation in Tehran Research Reactor
International Nuclear Information System (INIS)
Arkani, M.; Gharib, M.
2008-01-01
An experiment was set up in the Tehran 5 MW research reactor to observe the Cherenkov radiation response during post-shutdown periods. An ordinary PC camera was used for this purpose. The theoretical estimate of the total power, including decay heat and neutronic power, was checked against the detector response. A general agreement suggests that a similar setup based on the present experience could serve as an independent channel for similar purposes in other reactors, especially with the aim of fuel surveillance and monitoring.
Lee, C. M.; Addy, H. E.; Bond, T. H.; Chun, K. S.; Lu, C. Y.
1987-01-01
The main objective of this report was to derive equations to estimate heat transfer coefficients in both the combustion chamber and the coolant passage of a rotary engine. This was accomplished by making detailed temperature and pressure measurements in a direct-injection stratified-charge rotary engine under a range of conditions. For each specific measurement point, the local physical properties of the fluids were calculated. Then an empirical correlation of the coefficients was derived by using a multiple regression program. This correlation expresses the Nusselt number as a function of the Prandtl number and Reynolds number.
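The regression step described above amounts to an ordinary least-squares fit in log space. The synthetic data below use Dittus-Boelter-like exponents purely as ground truth, not the report's measured values.

```python
import numpy as np

# Fit Nu = C * Re^a * Pr^b by linear least squares on log Nu.
rng = np.random.default_rng(0)
Re = rng.uniform(5e3, 5e4, size=40)
Pr = rng.uniform(0.7, 5.0, size=40)
# Synthetic "measurements" with 2% multiplicative scatter
Nu = 0.023 * Re ** 0.8 * Pr ** 0.4 * np.exp(0.02 * rng.normal(size=40))

A = np.column_stack([np.ones(40), np.log(Re), np.log(Pr)])
coef, *_ = np.linalg.lstsq(A, np.log(Nu), rcond=None)
C, a, b = np.exp(coef[0]), coef[1], coef[2]
```

Taking logarithms turns the power-law correlation into a linear model, so standard multiple regression recovers the constant and both exponents directly.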
Schaeck, S.; Karspeck, T.; Ott, C.; Weckler, M.; Stoermer, A. O.
2011-03-01
In March 2007, the BMW Group launched the micro-hybrid functions brake energy regeneration (BER) and automatic start and stop function (ASSF). Valve-regulated lead-acid (VRLA) batteries in absorbent glass mat (AGM) technology are applied in vehicles with a micro-hybrid power system (MHPS). In both part I and part II of this publication, vehicles with the MHPS and AGM batteries are subject to a field operational test (FOT). Test vehicles with a conventional power system (CPS) and flooded batteries were used as a reference. In the FOT, sample batteries were dismounted several times and tested electrically in the laboratory in between. Vehicle- and battery-related diagnosis data were read out for each test run and matched with the laboratory data in a database. The FOT data were analyzed by the use of two-dimensional, nonparametric kernel estimation for clear data presentation. The data show that capacity loss in the MHPS is comparable to that in the CPS. However, the influence of mileage performance, which cannot be separated, suggests that battery stress is enhanced in the MHPS although a battery refresh function is applied. In any case, the FOT demonstrates the unsuitability of flooded batteries for the MHPS because of high early capacity loss due to acid stratification and vanishing cranking performance due to increasing internal resistance. Furthermore, the lack of the dynamic charge acceptance needed for high energy regeneration efficiency is illustrated. Under the presented FOT conditions, the charge acceptance of lead-acid (LA) batteries decreases to less than one third of the new-battery value for about half of the sample batteries. In part II of this publication, the FOT data are presented by multiple regression analysis (Schaeck et al., submitted for publication [1]).
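Two-dimensional nonparametric kernel estimation of the sort used for the data presentation above amounts to a Gaussian product-kernel average over the scattered observations. This sketch uses Scott's rule bandwidths and synthetic data in place of the FOT measurements.

```python
import numpy as np

def kde2d(x, y, gx, gy):
    # 2-D Gaussian product-kernel density on the grid gx x gy,
    # bandwidths from Scott's rule (h = sigma * n^(-1/(d+4)), d = 2).
    n = len(x)
    hx = x.std(ddof=1) * n ** (-1 / 6)
    hy = y.std(ddof=1) * n ** (-1 / 6)
    u = (gx[:, None] - x[None, :]) / hx
    v = (gy[:, None] - y[None, :]) / hy
    kx = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    ky = np.exp(-0.5 * v ** 2) / np.sqrt(2 * np.pi)
    return kx @ ky.T / (n * hx * hy)

# Synthetic correlated sample standing in for (mileage, capacity)-type data
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 500)
y = 0.5 * x + rng.normal(0, 0.5, 500)
gx = np.linspace(-3, 3, 61)
gy = np.linspace(-3, 3, 61)
dens = kde2d(x, y, gx, gy)
```

Unlike a scatter plot, the smoothed density surface makes trends visible even where thousands of FOT points overlap.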
Ershadi, Ali; McCabe, Matthew; Evans, Jason P.; Mariethoz, Gregoire; Kavetski, Dmitri
2013-01-01
The influence of uncertainty in land surface temperature, air temperature, and wind speed on the estimation of sensible heat flux is analyzed using a Bayesian inference technique applied to the Surface Energy Balance System (SEBS) model.
2010-01-01
7 CFR § 981.9 (2010), Agricultural Marketing Service (Marketing Agreements), Regulating Handling, Definitions: Kernel weight. Kernel weight means the weight of kernels, including...
2010-01-01
7 CFR § 51.2295 (2010), United States Standards for Shelled English Walnuts (Juglans Regia), Definitions: Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off.
A new discrete dipole kernel for quantitative susceptibility mapping.
Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian
2018-09-01
Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of this approach on forward model calculation and susceptibility inversion was evaluated, in contrast to the continuous formulation, with both synthetic phantoms and in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI, a topic for future investigations. The proposed dipole kernel has a straightforward implementation in existing QSM routines. Copyright © 2018 Elsevier Inc. All rights reserved.
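The contrast between the two formulations can be sketched in k-space. The discrete variant below uses the central-difference Laplacian spectrum as an illustration of the general idea, not the paper's exact operator.

```python
import numpy as np

# Continuous vs. discrete dipole kernel in k-space (unit voxels, B0 along z).
# Continuous: D(k) = 1/3 - kz^2 / |k|^2. The discrete variant replaces each
# k_i^2 by the spectrum of the [1, -2, 1] finite-difference stencil.
N = 32
k = np.fft.fftfreq(N)                        # frequencies in cycles/voxel
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")

k2 = kx ** 2 + ky ** 2 + kz ** 2
with np.errstate(invalid="ignore", divide="ignore"):
    D_cont = 1.0 / 3.0 - kz ** 2 / k2
D_cont[0, 0, 0] = 0.0                        # conventional choice at the origin

# Discrete Laplacian eigenvalues: 4*sin^2(pi*k) per axis
sx, sy, sz = (4 * np.sin(np.pi * a) ** 2 for a in (kx, ky, kz))
s2 = sx + sy + sz
with np.errstate(invalid="ignore", divide="ignore"):
    D_disc = 1.0 / 3.0 - sz / s2
D_disc[0, 0, 0] = 0.0
```

Since 4 sin^2(pi k) approaches (2 pi k)^2 at low frequency, the two kernels agree there and differ only near the Nyquist frequency, which is where the continuous formulation aliases.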
A kernel version of spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2009-01-01
Schölkopf et al. introduce kernel PCA; Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general; Bishop and Press et al. describe kernel methods among many other subjects; and Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into a high- (even infinite-) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we apply kernel versions of PCA and maximum autocorrelation factor (MAF) analysis...
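A minimal kernel PCA can be sketched with a Gaussian kernel on a synthetic two-ring data set, the standard illustration of what the implicit feature-space mapping buys; the bandwidth and data are illustrative only.

```python
import numpy as np

def kernel_pca(X, n_components=2, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma ** 2))
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                          # double centring in feature space
    vals, vecs = np.linalg.eigh(Kc)         # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas                      # projections of the training data

# Two concentric rings: not linearly separable, but separated by kernel PC 1
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
r = np.repeat([1.0, 3.0], 100)
X = np.column_stack([r * np.cos(t), r * np.sin(t)]) + 0.05 * rng.normal(size=(200, 2))
Z = kernel_pca(X)
```

Linear PCA of the same data would only rotate the plane; the kernel version exposes the radial structure that no linear projection can.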
kernel oil by lipolytic organisms
African Journals Online (AJOL)
2010-08-02
Rancidity of extracted cashew oil was observed with cashew kernels stored at 70, 80 and 90% ... method of the American Oil Chemists' Society (AOCS, 1978) using glacial ... changes occur and volatile products are formed that are...
THE ASSESSMENT OF GEOTHERMAL POTENTIAL OF TURKEY BY MEANS OF HEAT FLOW ESTIMATION
Directory of Open Access Journals (Sweden)
UĞUR AKIN
2014-12-01
In this study, the heat flow distribution of Turkey was investigated in the interest of exploring new geothermal fields in addition to known ones. For this purpose, the geothermal gradient was estimated from the Curie point depth map obtained from airborne magnetic data by means of the power spectrum method. By multiplying the geothermal gradient with thermal conductivity values, the heat flow map of Turkey was obtained. The average value in the heat flow map of Turkey was determined as 74 mW/m2. It points to the existence of geothermal energy resources larger than the world average. In terms of geothermal potential, the most significant region of Turkey is Aydın and its surroundings, with a value exceeding 200 mW/m2. On the contrary, the value decreases below 30 mW/m2 in the region bordered by Aksaray, Niğde, Karaman and Konya. The necessity of conducting detailed additional studies for the East Black Sea, East and Southeast Anatolia regions is also revealed.
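The gradient-times-conductivity step behind such a map is elementary. The sketch below assumes the Curie temperature of magnetite (about 580 C) and a representative crustal conductivity of 2 W/m/K, both round illustrative values rather than the study's calibrated ones.

```python
# Heat flow from Curie point depth: the crust is assumed to reach the Curie
# temperature at depth z_c, so the mean geothermal gradient is
# (T_curie - T_surface) / z_c and the heat flow is q = k * gradient.
def heat_flow_mW_m2(z_curie_km, k_W_mK=2.0, t_curie_C=580.0, t_surf_C=15.0):
    grad_K_per_m = (t_curie_C - t_surf_C) / (z_curie_km * 1000.0)
    return k_W_mK * grad_K_per_m * 1000.0     # convert W/m^2 to mW/m^2

shallow = heat_flow_mW_m2(8.0)    # shallow Curie depth -> high heat flow
deep = heat_flow_mW_m2(35.0)      # deep Curie depth -> low heat flow
```

A Curie depth of about 8 km gives roughly 140 mW/m2 and 35 km gives roughly 32 mW/m2, bracketing the range of values quoted in the abstract.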
Bröde, Peter; Fiala, Dusan; Lemke, Bruno; Kjellstrom, Tord
2018-03-01
With a view to occupational effects of climate change, we performed a simulation study on the influence of different heat stress assessment metrics on estimated workability (WA) of labour in warm outdoor environments. Whole-day shifts with varying workloads were simulated using as input meteorological records for the hottest month from four cities with prevailing hot (Dallas, New Delhi) or warm-humid conditions (Managua, Osaka), respectively. In addition, we considered the effects of adaptive strategies like shielding against solar radiation and different work-rest schedules assuming an acclimated person wearing light work clothes (0.6 clo). We assessed WA according to Wet Bulb Globe Temperature (WBGT) by means of an empirical relation of worker performance from field studies (Hothaps), and as allowed work hours using safety threshold limits proposed by the corresponding standards. Using the physiological models Predicted Heat Strain (PHS) and Universal Thermal Climate Index (UTCI)-Fiala, we calculated WA as the percentage of working hours with body core temperature and cumulated sweat loss below standard limits (38 °C and 7.5% of body weight, respectively) recommended by ISO 7933 and below conservative (38 °C; 3%) and liberal (38.2 °C; 7.5%) limits in comparison. ANOVA results showed that the different metrics, workload, time of day and climate type determined the largest part of WA variance. WBGT-based metrics were highly correlated and indicated slightly more constrained WA for moderate workload, but were less restrictive with high workload and for afternoon work hours compared to PHS and UTCI-Fiala. Though PHS showed unrealistic dynamic responses to rest from work compared to UTCI-Fiala, differences in WA assessed by the physiological models largely depended on the applied limit criteria. In conclusion, our study showed that the choice of the heat stress assessment metric impacts notably on the estimated WA. Whereas PHS and UTCI-Fiala can account for
Energy Technology Data Exchange (ETDEWEB)
Rao, Bala Bhaskara [Dept. of Mechanical Engineering, SISTAM College, JNTU, Kakinada (India); Raju, V. Ramachandra [Dept. of Mechanical Engineering, JNTU, Kakinada (India); Deepak, B. B V. L. [Dept. of Industrial Design, National Institute of Technology, Rourkela (India)
2017-01-15
Most thermal/chemical industries are equipped with heat exchangers to enhance thermal efficiency. The performance of heat exchangers depends strongly on design modifications on the tube side, such as the cross-sectional area, orientation, and baffle cut of the tube. However, these parameters do not exhibit a specific relation for determining the optimum design condition of shell-and-tube heat exchangers with a maximum heat transfer rate and reduced pressure drops. Accordingly, experimental and numerical simulations are performed for a heat exchanger with varying tube geometries. The heat exchanger considered in this investigation is a single-shell, multiple-pass device. A generalized regression neural network (GRNN) is applied to generate a relation among the input and output process parameters for the experimental data sets. Then, an artificial immune system (AIS) is used with the GRNN to obtain optimized input parameters. Lastly, results are presented for the developed hybrid GRNN-AIS approach.
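The GRNN stage of the hybrid approach is, at its core, Nadaraya-Watson kernel regression: each training point casts a Gaussian-weighted vote and the prediction is the normalised weighted average of the targets. A minimal sketch follows; the toy data, the smoothing parameter sigma, and the function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Generalized regression neural network in its Nadaraya-Watson form:
    prediction = sum_i w_i * y_i / sum_i w_i with Gaussian weights w_i."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
        preds.append(np.dot(w, y_train) / np.sum(w))  # weighted average
    return np.array(preds)

# Toy data on the line y = 2x, queried between training samples.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0.0, 2.0, 4.0, 6.0]
pred_mid = grnn_predict(X, y, [[1.5]], sigma=0.3)
```

In the hybrid scheme described above, an optimizer such as the AIS would then search this fitted response surface for input settings that maximize the predicted heat transfer rate.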
Bootstrapping Kernel-Based Semiparametric Estimators
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Jansson, Michael
by accommodating a non-negligible bias. A noteworthy feature of the assumptions under which the result is obtained is that reliance on a commonly employed stochastic equicontinuity condition is avoided. The second main result shows that the bootstrap provides an automatic method of correcting for the bias even...... when it is non-negligible....
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole E.
The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model......, and relates these to results of von Kármán and Horwath. But first it is shown that the gamma kernel is interpretable as a Green’s function....
Lin, L.; Luo, X.; Qin, F.; Yang, J.
2018-03-01
As one of the combustion products of hydrocarbon fuels in a combustion-heated wind tunnel, water vapor may condense during the rapid expansion process, which will lead to a complex two-phase flow inside the wind tunnel and even change the design flow conditions at the nozzle exit. The coupling of the phase transition and the compressible flow makes the estimation of the condensation effects in such wind tunnels very difficult and time-consuming. In this work, a reduced theoretical model is developed to approximately compute the nozzle-exit conditions of a flow including real-gas and homogeneous condensation effects. Specifically, the conservation equations of the axisymmetric flow are first approximated in the quasi-one-dimensional way. Then, the complex process is split into two steps, i.e., a real-gas nozzle flow but excluding condensation, resulting in supersaturated nozzle-exit conditions, and a discontinuous jump at the end of the nozzle from the supersaturated state to a saturated state. Compared with two-dimensional numerical simulations implemented with a detailed condensation model, the reduced model predicts the flow parameters with good accuracy except for some deviations caused by the two-dimensional effect. Therefore, this reduced theoretical model can provide a fast, simple but also accurate estimation of the condensation effect in combustion-heated hypersonic tunnels.
Results from ORNL Characterization of Nominal 350 µm NUCO Kernels from the BWXT 59344 batch
International Nuclear Information System (INIS)
Hunn, John D.; Kercher, Andrew K.; Menchhofer, Paul A.; Price, Jeffery R.
2005-01-01
This document is a compilation of characterization data obtained on nominal 350 µm natural-enrichment uranium oxide/uranium carbide (NUCO) kernels produced by BWXT for the Advanced Gas Reactor Fuel Development and Qualification Program. These kernels were produced as part of a development effort at BWXT to address issues involving forming and heat treatment, and were shipped to ORNL for additional characterization and for coating tests. The kernels were identified as G73N-NU-59344; 250 grams were shipped to ORNL. Size, shape, and microstructural analyses were performed. These kernels were preceded by G73B-NU-69300 and G73B-NU-69301, which were kernels produced and delivered to ORNL earlier in the development phase. Characterization of the kernels from G73B-NU-69300 was summarized in ORNL/CF-04/07, 'Results from ORNL Characterization of Nominal 350 µm NUCO Kernels from the BWXT 69300 composite'.
State and parameter estimation of the heat shock response system using Kalman and particle filters.
Liu, Xin; Niranjan, Mahesan
2012-06-01
Traditional models of systems biology describe dynamic biological phenomena as solutions to ordinary differential equations, which, when parameters in them are set to correct values, faithfully mimic observations. Often parameter values are tweaked by hand until desired results are achieved, or computed from biochemical experiments carried out in vitro. Of interest in this article, is the use of probabilistic modelling tools with which parameters and unobserved variables, modelled as hidden states, can be estimated from limited noisy observations of parts of a dynamical system. Here we focus on sequential filtering methods and take a detailed look at the capabilities of three members of this family: (i) extended Kalman filter (EKF), (ii) unscented Kalman filter (UKF) and (iii) the particle filter, in estimating parameters and unobserved states of cellular response to sudden temperature elevation of the bacterium Escherichia coli. While previous literature has studied this system with the EKF, we show that parameter estimation is only possible with this method when the initial guesses are sufficiently close to the true values. The same turns out to be true for the UKF. In this thorough empirical exploration, we show that the non-parametric method of particle filtering is able to reliably estimate parameters and states, converging from initial distributions relatively far away from the underlying true values. Software implementation of the three filters on this problem can be freely downloaded from http://users.ecs.soton.ac.uk/mn/HeatShock
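The particle filter the authors favour can be illustrated with a minimal bootstrap filter on a toy linear-Gaussian random-walk model; the model, noise levels, and particle count here are illustrative assumptions, not the heat shock system of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(ys, n_particles=1000, q=0.1, r=0.5):
    """Bootstrap particle filter for the toy model
         x_t = x_{t-1} + N(0, q^2)   (hidden state, random walk)
         y_t = x_t + N(0, r^2)       (noisy observation)
    Returns the sequence of filtered posterior means."""
    particles = rng.normal(0.0, 1.0, n_particles)   # diffuse initial prior
    means = []
    for y in ys:
        particles = particles + rng.normal(0.0, q, n_particles)  # propagate
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)            # likelihood
        w /= w.sum()
        means.append(np.dot(w, particles))
        # Multinomial resampling combats weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(means)

# Noisy observations of a constant hidden state x = 2.
ys = 2.0 + rng.normal(0.0, 0.5, 40)
est = bootstrap_particle_filter(ys)
```

For joint state and parameter estimation, as in the abstract, the unknown parameters are typically appended to the state vector and propagated with small artificial noise.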
International Nuclear Information System (INIS)
Das, Parichay K.
2012-01-01
Highlights: ► This method for estimating ΔT_ad(t) against time in a semi-batch reactor is distinctively pioneering and novel. ► It uniquely establishes a direct correspondence between the evolution of ΔT_ad(t) in the RC and C_A(t) in a semi-batch reactor. ► Through a unique reaction scheme, the independent effects of heat of mixing and of reaction on ΔT_ad(t) are demonstrated quantitatively. ► This work will help to build a thermally safe corridor for a thermally hazardous reaction. ► This manuscript, the author believes, will open a new vista for further research in adiabatic calorimetry. - Abstract: A novel method for estimating the transient profile of the adiabatic temperature rise has been developed from heat-flow data for exothermic chemical reactions conducted in a reaction calorimeter (RC). It is also demonstrated mathematically that there exists a direct qualitative equivalence between the temporal evolution of the adiabatic temperature rise and the concentration of the limiting reactant for an exothermic chemical reaction carried out in semi-batch mode. The proposed procedure shows that the adiabatic temperature rise will always be less than that of the reaction executed in batch mode, thereby affording a thermally safe corridor. Moreover, a unique reaction scheme has been designed to establish the independent heat effects of dissolution and reaction quantitatively. It is hoped that the record of the transient adiabatic temperature rise that can be prepared by the proposed method may provide ample scope for further research.
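The basic calorimetric identity behind such a reconstruction, ΔT_ad(t) = (1/(m·c_p)) ∫₀ᵗ q(τ) dτ, can be sketched in a few lines; the heat-flow curve, mass, and heat capacity below are illustrative assumptions, and the sketch omits the paper's specific semi-batch treatment.

```python
import numpy as np

def adiabatic_rise_profile(time_s, q_w, mass_kg, cp_j_per_kg_k):
    """Transient adiabatic temperature rise from a measured heat-flow
    signal q(t): deltaT_ad(t) = cumulative heat / (m * c_p).
    The integral is evaluated with the trapezoid rule."""
    heat_j = np.concatenate(([0.0], np.cumsum(
        0.5 * (q_w[1:] + q_w[:-1]) * np.diff(time_s))))
    return heat_j / (mass_kg * cp_j_per_kg_k)

t = np.linspace(0.0, 3600.0, 361)     # 1 h semi-batch dosing period, s
q = 50.0 * np.exp(-t / 1200.0)        # decaying exothermic heat flow, W
dT = adiabatic_rise_profile(t, q, mass_kg=2.0, cp_j_per_kg_k=1800.0)
```

The final value of `dT` corresponds to the total adiabatic temperature rise used in thermal-hazard screening.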
Kernel and divergence techniques in high energy physics separations
Bouř, Petr; Kůs, Václav; Franc, Jiří
2017-10-01
Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We present the great potential of adaptive kernel density estimation as the nested separation method of a supervised binary divergence decision tree. We also provide a proof of an alternative computing approach for kernel estimates utilizing the Fourier transform. Further, we apply our method to a Monte Carlo data set from the particle accelerator Tevatron at the DØ experiment in Fermilab and provide final top-antitop signal separation results. We have achieved up to 82% AUC while using the restricted feature selection entering the signal separation procedure.
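The Fourier-transform route to kernel estimates rests on binning the sample onto a regular grid, after which the kernel sum becomes a discrete convolution that the FFT evaluates in O(n log n). A minimal sketch, assuming a fixed-bandwidth Gaussian kernel (the adaptive estimator of the paper is not reproduced):

```python
import numpy as np

def kde_fft(samples, lo, hi, n_grid=512, h=0.3):
    """Gaussian kernel density estimate on a regular grid via FFT."""
    edges = np.linspace(lo, hi, n_grid + 1)
    counts, _ = np.histogram(samples, bins=edges)
    dx = edges[1] - edges[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Kernel sampled at grid offsets, normalised as a density over n samples.
    offsets = (np.arange(n_grid) - n_grid // 2) * dx
    g = np.exp(-0.5 * (offsets / h) ** 2) / (len(samples) * h * np.sqrt(2.0 * np.pi))
    # Circular convolution of bin counts with the (origin-shifted) kernel.
    dens = np.real(np.fft.ifft(np.fft.fft(counts) * np.fft.fft(np.fft.ifftshift(g))))
    return centers, dens

rng = np.random.default_rng(42)
x, f = kde_fft(rng.normal(0.0, 1.0, 2000), -5.0, 5.0)
```

The grid must extend well beyond the data so that the circular (wrap-around) convolution does not leak mass between the two ends of the support.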
Zapata, N.; Martínez-Cob, A.
2001-12-01
This paper reports a study undertaken to evaluate the feasibility of the surface renewal method to accurately estimate long-term evaporation from the playa and margins of an endorheic salty lagoon (Gallocanta lagoon, Spain) under semiarid conditions. High-frequency temperature readings were taken for two time lags (r) and three measurement heights (z) in order to obtain surface renewal sensible heat flux (H_SR) values. These values were compared against eddy covariance sensible heat flux (H_EC) values for a calibration period (25-30 July 2000). Error analysis statistics (index of agreement, IA; root mean square error, RMSE; and systematic mean square error, MSE_s) showed that the agreement between H_SR and H_EC improved as measurement height decreased and time lag increased. Calibration factors α were obtained for all analyzed cases. The best results were obtained for the z=0.9 m (r=0.75 s) case, for which α=1.0 was observed. In this case, uncertainty was about 10% in terms of relative error (RE). Latent heat flux values were obtained by solving the energy balance equation for both the surface renewal (LE_SR) and the eddy covariance (LE_EC) methods, using H_SR and H_EC, respectively, and measurements of net radiation and soil heat flux. For the calibration period, error analysis statistics for LE_SR were quite similar to those for H_SR, although errors were mostly at random. LE_SR uncertainty was less than 9%. Calibration factors were applied for a validation data subset (30 July-4 August 2000) for which meteorological conditions were somewhat different (higher temperatures and wind speed and lower solar and net radiation). Error analysis statistics for both H_SR and LE_SR were quite good for all cases, showing the goodness of the calibration factors. Nevertheless, the results obtained for the z=0.9 m (r=0.75 s) case were still the best ones.
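The calibration step, fitting a factor α so that α·H_SR matches H_EC and then scoring agreement with Willmott's index of agreement (IA), can be sketched generically; the synthetic flux values below are illustrative assumptions, not the Gallocanta data.

```python
import numpy as np

def calibrate_alpha(h_sr, h_ec):
    """Least-squares calibration factor (regression through the origin)
    so that alpha * H_SR best matches H_EC."""
    h_sr, h_ec = np.asarray(h_sr, float), np.asarray(h_ec, float)
    return np.dot(h_sr, h_ec) / np.dot(h_sr, h_sr)

def index_of_agreement(pred, obs):
    """Willmott's index of agreement, IA in [0, 1] (1 = perfect)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    om = obs.mean()
    denom = np.sum((np.abs(pred - om) + np.abs(obs - om)) ** 2)
    return 1.0 - np.sum((pred - obs) ** 2) / denom

rng = np.random.default_rng(7)
h_ec = rng.uniform(50.0, 300.0, 48)              # 'observed' fluxes, W/m2
h_sr = h_ec + rng.normal(0.0, 10.0, 48)          # raw surface-renewal fluxes
alpha = calibrate_alpha(h_sr, h_ec)
ia = index_of_agreement(alpha * h_sr, h_ec)
```

With α in hand, latent heat flux follows from the energy balance residual, LE = Rn − G − α·H_SR, as in the abstract.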
Estimating the health benefits from natural gas use in transport and heating in Santiago, Chile.
Mena-Carrasco, Marcelo; Oliva, Estefania; Saide, Pablo; Spak, Scott N; de la Maza, Cristóbal; Osses, Mauricio; Tolvett, Sebastián; Campbell, J Elliott; Tsao, Tsao Es Chi-Chung; Molina, Luisa T
2012-07-01
Chilean law requires the assessment of air pollution control strategies for their costs and benefits. Here we employ an online weather and chemical transport model, WRF-Chem, and a gridded population density map, LANDSCAN, to estimate changes in fine particle pollution exposure, health benefits, and economic valuation for two emission reduction strategies based on increasing the use of compressed natural gas (CNG) in Santiago, Chile. The first scenario, switching to a CNG public transportation system, would reduce urban PM2.5 emissions by 229 t/year. The second scenario would reduce wood burning emissions by 671 t/year, with unique hourly emission reductions distributed from daily heating demand. The CNG bus scenario reduces annual PM2.5 by 0.33 μg/m³ and up to 2 μg/m³ during winter months, while the residential heating scenario reduces annual PM2.5 by 2.07 μg/m³, with peaks exceeding 8 μg/m³ during strong air pollution episodes in winter months. These ambient pollution reductions lead to 36 avoided premature mortalities for the CNG bus scenario, and 229 for the CNG heating scenario. Both policies are shown to be cost-effective ways of reducing air pollution, as they target high-emitting area pollution sources and reduce concentrations over densely populated urban areas as well as less dense areas outside the city limits. Unlike the concentration rollback methods commonly used in public policy analyses, which assume homogeneous reductions across a whole city (including homogeneous population densities), and without accounting for the seasonality of certain emissions, this approach accounts for both seasonality and diurnal emission profiles for both the transportation and residential heating sectors. Copyright © 2012 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Lopez, C.; Koski, J.A.; Razani, A.
2000-01-01
A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with dimensions similar to a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that were then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in the error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively.
Directory of Open Access Journals (Sweden)
Jonathon Taylor
2018-05-01
Mortality rates rise during hot weather in England, and projected future increases in heatwave frequency and intensity require the development of heat protection measures such as the adaptation of housing to reduce indoor overheating. We apply a combined building physics and health model to dwellings in the West Midlands, UK, using an English Housing Survey (EHS)-derived stock model. Regional temperature exposures, heat-related mortality risk, and space heating energy consumption were estimated for 2030s, 2050s, and 2080s medium-emissions climates prior to and following heat-mitigating, energy-efficiency, and occupant-behaviour adaptations. Risk variation across adaptations, dwellings, and occupant types was assessed. Indoor temperatures were greatest in converted flats, while heat mortality rates were highest in bungalows due to the occupant age profiles. Full energy-efficiency retrofit reduced regional domestic space heating energy use by 26% but increased summertime heat mortality 3–4%, while reduced façade absorptance decreased heat mortality 12–15% but increased energy consumption by 4%. External shutters provided the largest reduction in heat mortality (37–43%), while closed windows caused a large increase in risk (29–64%). Ensuring adequate post-retrofit ventilation, targeted installation of shutters, and operable windows in dwellings with heat-vulnerable occupants may save energy and significantly reduce heat-related mortality.
Heat and power demands in babassu palm oil extraction industry in Brazil
International Nuclear Information System (INIS)
Teixeira, Marcos A.
2005-01-01
The objective of this paper is to analyze the energy use profile of the babassu (Orbignya ssp-Palmae) oil extraction industry in Brazil in order to establish the basis for a cogeneration study of this important part of the Brazilian Northeast region economy, which is still ignored by energetic biomass studies. The work used information from new equipment suppliers that was analyzed against field information from operating units. The data were used to establish a basis for the thermal and mechanical energy consumption of the two main basic unit profiles in the sector: a simple one with just oil extraction, and another, more vertically integrated, with other secondary by-products. For the energy demand of the oil-extraction-only unit profile, the minimum pressure for the steam process was estimated at 1.4 MPa, the electric demand at 5.79 kW/ton of processed kernel and the heat consumption at 2071 MJ/ton of processed kernel (829 kg steam/ton of processed kernel). For the vertically integrated unit profile, the following values were found: minimum steam process pressure 1.4 MPa, electric demand 6.22 kW/ton of processed kernel and heat consumption 21,503 MJ/ton of processed kernel (7600 kg steam/ton of processed kernel).
Directory of Open Access Journals (Sweden)
Yongmin Yang
2017-01-01
The partitioning of available energy between sensible heat and latent heat is important for precise water resources planning and management in the context of global climate change. Land surface temperature (LST) is a key variable in the energy balance process, and remotely sensed LST is widely used for estimating surface heat fluxes at regional scale. However, the inequality between LST and the aerodynamic surface temperature (T_aero) poses a great challenge for regional heat flux estimation in one-source energy balance models. To address this issue, we proposed a One-Source Model for Land (OSML) to estimate regional surface heat fluxes without requirements for an empirical extra resistance, roughness parameterization or wind velocity. The proposed OSML employs both the conceptual VFC/LST trapezoid model and the electrical analog formula of sensible heat flux (H) to analytically estimate the radiometric-convective resistance (r_ae) via a quartic equation. To evaluate the performance of OSML, the model was applied to the Soil Moisture-Atmosphere Coupling Experiment (SMACEX) in the United States and the Multi-Scale Observation Experiment on Evapotranspiration (MUSOEXE) in China, using remotely sensed retrievals as auxiliary data sets at regional scale. Validated against tower-based surface flux observations, the root mean square deviations (RMSD) of H and latent heat flux (LE) from OSML are 34.5 W/m2 and 46.5 W/m2 at the SMACEX site and 50.1 W/m2 and 67.0 W/m2 at the MUSOEXE site. The performance of OSML is very comparable to other published studies. In addition, the proposed OSML model demonstrates similar skill in predicting surface heat fluxes in comparison to SEBS (Surface Energy Balance System). Since OSML does not require specification of aerodynamic surface characteristics, roughness parameterization or meteorological conditions with high spatial variation such as wind speed, the proposed method shows high potential for routine acquisition of latent heat flux estimation
Ensemble-based forecasting at Horns Rev: Ensemble conversion and kernel dressing
DEFF Research Database (Denmark)
Pinson, Pierre; Madsen, Henrik
. The obtained ensemble forecasts of wind power are then converted into predictive distributions with an original adaptive kernel dressing method. The shape of the kernels is driven by a mean-variance model, the parameters of which are recursively estimated in order to maximize the overall skill of obtained...
Kernel versions of some orthogonal transformations
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced...... by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel...... function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...
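The dual (Q-mode) formulation described above can be sketched in a few lines: build the Gram matrix with a kernel function, double-centre it so the mapped data have zero mean in feature space, and eigendecompose. The Gaussian kernel and its scale parameter here are illustrative choices.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with the Gaussian kernel k(x, y) = exp(-gamma*||x-y||^2).
    Returns the projections of the training points onto the leading axes."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    # Gram matrix of pairwise kernel evaluations (the 'kernel trick').
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)
    # Double-centre K so feature-space data have zero mean.
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)            # eigenvalues ascending
    vals, vecs = vals[::-1], vecs[:, ::-1]     # largest first
    return vecs[:, :n_components] * np.sqrt(np.maximum(vals[:n_components], 0.0))

# Two tight, well-separated clusters become separable along component 1.
rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(0.0, 0.1, (10, 2)), rng.normal(5.0, 0.1, (10, 2))])
scores = kernel_pca(pts, n_components=1, gamma=0.5)
```

Note that only inner products (kernel evaluations) enter the computation; the nonlinear mapping itself is never formed.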
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
Model Selection in Kernel Ridge Regression
DEFF Research Database (Denmark)
Exterkate, Peter
Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated with all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely...
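A compact sketch of Gaussian-kernel ridge regression tuned over a small grid, in the spirit of the abstract's recommendation; the grid values and toy data are assumptions, and a single held-out split stands in for full cross-validation.

```python
import numpy as np

def krr_fit_predict(X_tr, y_tr, X_te, gamma=1.0, alpha=0.1):
    """Kernel ridge regression: solve (K + alpha*I) c = y on the training
    Gram matrix, then predict with the cross Gram matrix k(X_te, X_tr)."""
    def gram(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * sq)
    K = gram(X_tr, X_tr)
    c = np.linalg.solve(K + alpha * np.eye(len(X_tr)), y_tr)
    return gram(X_te, X_tr) @ c

rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, (80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)

# Small-grid selection of (gamma, alpha) on a held-out split.
grid = [(g, a) for g in (0.1, 1.0, 10.0) for a in (1e-3, 1e-1)]
best = min(grid, key=lambda p: np.mean(
    (krr_fit_predict(X[:60], y[:60], X[60:], *p) - y[60:]) ** 2))
```

Here gamma plays the role of the kernel's smoothness parameter and alpha that of the signal-to-noise ratio, mirroring the interpretation given in the abstract.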
Directory of Open Access Journals (Sweden)
Boris Kauhl
2016-11-01
Background: The provision of general practitioners (GPs) in Germany still relies mainly on the ratio of inhabitants to GPs at relatively large scales and barely accounts for an increased prevalence of chronic diseases among the elderly and socially underprivileged populations. Type 2 Diabetes Mellitus (T2DM) is one of the major cost-intensive diseases with high rates of potentially preventable complications. Provision of healthcare and access to preventive measures are necessary to reduce the burden of T2DM. However, current studies on the spatial variation of T2DM in Germany are mostly based on survey data, which not only underestimate the true prevalence of T2DM, but are also only available at large spatial scales. The aim of this study is therefore to analyse the spatial distribution of T2DM at fine geographic scales and to assess location-specific risk factors based on data of the AOK health insurance. Methods: To display the spatial heterogeneity of T2DM, a bivariate, adaptive kernel density estimation (KDE) was applied. The spatial scan statistic (SaTScan) was used to detect areas of high risk. Global and local spatial regression models were then constructed to analyze socio-demographic risk factors of T2DM. Results: T2DM is especially concentrated in rural areas surrounding Berlin. The risk factors for T2DM consist of the proportions of 65–79 year olds and 80+ year olds, the unemployment rate among 55–65 year olds, the proportion of employees covered by mandatory social security insurance, mean income tax, and the proportion of non-married couples. However, the strength of the association between T2DM and the examined socio-demographic variables displayed strong regional variations. Conclusion: The prevalence of T2DM varies at the very local level. Analyzing point data on T2DM of northeastern Germany’s largest health insurance provider thus allows very detailed, location-specific knowledge about increased medical needs. Risk factors
Simple equation for estimating actual evapotranspiration using heat units for wheat in arid regions
Directory of Open Access Journals (Sweden)
M.A. Salama
2015-07-01
Application of treatment (B) resulted in a highly significant increase in the yield production of Gemmeza10 and Misr2 as compared to treatment (A). Grain yield of the different wheat varieties grown under treatment (B) could be ranked in the following descending order: Misr2 > Gemmeza10 > Sids12, while under treatment (A) it could be arranged in the following descending order: Misr2 > Sids12 > Gemmeza10. On the other hand, the overall means indicated a non-significant difference between all wheat varieties. The highest values of water and irrigation use efficiency as well as heat use efficiency were obtained with treatment (B). The equation used in the present study can be used to estimate ETa under an arid climate with a drip irrigation system.
Analysis of Advanced Fuel Kernel Technology
International Nuclear Information System (INIS)
Oh, Seung Chul; Jeong, Kyung Chai; Kim, Yeon Ku; Kim, Young Min; Kim, Woong Ki; Lee, Young Woo; Cho, Moon Sung
2010-03-01
The reference fuel for prismatic reactor concepts is based on use of an LEU UCO TRISO fissile particle. This fuel form was selected in the early 1980s for large high-temperature gas-cooled reactor (HTGR) concepts using LEU, and the selection was reconfirmed for modular designs in the mid-1980s. Limited existing irradiation data on LEU UCO TRISO fuel indicate the need for a substantial improvement in performance with regard to in-pile gaseous fission product release. Existing accident-testing data on LEU UCO TRISO fuel are extremely limited, but it is generally expected that performance would be similar to that of LEU UO2 TRISO fuel if performance under irradiation were successfully improved. Initial HTGR fuel technology was based on carbide fuel forms. In the early 1980s, as HTGR technology was transitioning from high-enriched uranium (HEU) fuel to LEU fuel, an initial effort focused on LEU prismatic designs for large HTGRs resulted in the selection of UCO kernels for the fissile particles and thorium oxide (ThO2) for the fertile particles. The primary reason for selection of the UCO kernel over UO2 was reduced CO pressure, allowing higher burnup for equivalent coating thicknesses and reduced potential for kernel migration, an important failure mechanism in earlier fuels. A subsequent assessment in the mid-1980s considering modular HTGR concepts again reached agreement on UCO for the fissile particle in a prismatic design. In the early 1990s, plant cost-reduction studies led to a decision to change the fertile material from thorium to natural uranium, primarily because of the lower long-term decay heat level of the natural uranium fissile particles. Ongoing economic optimization in combination with the anticipated capabilities of the UCO particles resulted in a peak fissile particle burnup projection of 26% FIMA in steam cycle and gas turbine concepts.
Estimation of solar collector area for water heating in buildings of Malaysia
Manoj Kumar, Nallapaneni; Sudhakar, K.; Samykano, M.
2018-04-01
Solar thermal energy (STE) utilization for water heating has become popular at various sectoral levels and is still growing, especially for buildings in residential areas. This paper aims to identify, in an efficient manner, the solar collector area needed based on the user requirements. A step-by-step mathematical approach is followed to estimate the area in square metres. Four cases, each with a different hot water temperature (45°C, 50°C, 55°C, and 60°C) delivered by the solar water heating system (SWHS) for a typical residential application in Kuala Lumpur, Malaysia, are analysed for the share of the hot and cold water mix. As the hot water temperature level increases, the share of the cold water mix increases to satisfy the user requirement temperature, i.e. 40°C. It is also observed that as the share of the hot water mix is reduced, the collector area can be reduced. Following this methodology at the installation stage would help both users and installers in the effective use of the solar resource.
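The sizing arithmetic reduces to dividing the daily water-heating load, m·c_p·ΔT, by the usable solar gain per square metre of collector. A hypothetical worked example follows; the draw-off volume, irradiation, and efficiency figures are illustrative assumptions, not values from the study.

```python
def collector_area(volume_l, t_cold_c, t_hot_c, irradiation_mj_m2_day, efficiency):
    """Collector area (m2) needed to heat volume_l litres of water per day
    from t_cold_c to t_hot_c, given the daily irradiation on the collector
    plane (MJ/m2/day) and an overall collector efficiency."""
    c_p = 4.186e-3                                        # MJ per kg per K
    daily_load_mj = volume_l * c_p * (t_hot_c - t_cold_c)  # 1 L of water = 1 kg
    return daily_load_mj / (irradiation_mj_m2_day * efficiency)

# 200 L/day heated from 27 °C to 60 °C, ~17 MJ/m2/day, 50% efficiency.
area_60 = collector_area(200.0, 27.0, 60.0, 17.0, 0.5)
area_45 = collector_area(200.0, 27.0, 45.0, 17.0, 0.5)
```

Comparing the two areas reproduces the abstract's observation: a lower delivered hot-water temperature (with more cold-water mixing at the tap) allows a smaller collector.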
Integral equations with contrasting kernels
Directory of Open Access Journals (Sweden)
Theodore Burton
2008-01-01
In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well behaved function.
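The comparison between the two kernels C* and D* is easy to reproduce numerically with a trapezoidal-rule discretisation of the Volterra equation x(t) = a(t) − ∫₀ᵗ C(t,s)x(s) ds; the forcing a(t) = e^(−t), horizon, and step count below are illustrative choices.

```python
import numpy as np

def solve_volterra(a, C, T=5.0, n=500):
    """Trapezoidal-rule solution of x(t) = a(t) - int_0^t C(t,s) x(s) ds."""
    t = np.linspace(0.0, T, n + 1)
    h = t[1] - t[0]
    x = np.empty(n + 1)
    x[0] = a(t[0])
    for i in range(1, n + 1):
        s = t[:i]
        # Trapezoid weights: h/2 at the endpoints, h in between; the
        # x[i] endpoint term is moved to the left-hand side and solved for.
        integral = h * (0.5 * C(t[i], t[0]) * x[0]
                        + np.dot(C(t[i], s[1:]), x[1:i]))
        x[i] = (a(t[i]) - integral) / (1.0 + 0.5 * h * C(t[i], t[i]))
    return t, x

# The two contrasting kernels from the abstract, with a(t) in L^2[0, inf).
Cstar = lambda t, s: np.log(np.e + (t - s))
Dstar = lambda t, s: 1.0 / (1.0 + (t - s))
a = lambda t: np.exp(-t)
t, x1 = solve_volterra(a, Cstar)
t, x2 = solve_volterra(a, Dstar)
```

For this small, square-integrable forcing the two numerical solutions remain bounded and end up close together, consistent with the paper's "largely indistinguishable" claim.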
Kernel learning algorithms for face recognition
Li, Jun-Bao; Pan, Jeng-Shyang
2013-01-01
Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. It also focuses on the theoretical derivation, the system framework and experiments involving kernel-based face recognition. Included within are algorithms of kernel-based face recognition, and also the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning areas with advanced face recognition methods and its new
Model selection for Gaussian kernel PCA denoising
DEFF Research Database (Denmark)
Jørgensen, Kasper Winther; Hansen, Lars Kai
2012-01-01
We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also...... tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...
Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...
African Journals Online (AJOL)
Kernel estimators for smooth curves require modifications when estimating near end points of the support, for both practical and asymptotic reasons. The construction of such boundary kernels as solutions of a variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...
Lahiri, B. B.; Ranoo, Surojit; Philip, John
2017-11-01
Magnetic fluid hyperthermia (MFH) is becoming a viable cancer treatment methodology where the alternating magnetic field induced heating of magnetic fluid is utilized for ablating the cancerous cells or making them more susceptible to the conventional treatments. The heating efficiency in MFH is quantified in terms of specific absorption rate (SAR), which is defined as the heating power generated per unit mass. In the majority of the experimental studies, SAR is evaluated from temperature rise curves obtained under non-adiabatic experimental conditions, which are prone to various thermodynamic uncertainties. A proper understanding of the experimental uncertainties and their remedies is a prerequisite for obtaining accurate and reproducible SAR. Here, we study the thermodynamic uncertainties associated with peripheral heating, delayed heating, heat loss from the sample and spatial variation in the temperature profile within the sample. Using first order approximations, an adiabatic reconstruction protocol for the measured temperature rise curves is developed for SAR estimation, which is found to be in good agreement with those obtained from the computationally intense slope corrected method. Our experimental findings clearly show that the peripheral and delayed heating are due to radiation heat transfer from the heating coils and the slower response time of the sensor, respectively. Our results suggest that the peripheral heating is linearly proportional to the sample area to volume ratio and coil temperature. It is also observed that peripheral heating decreases in the presence of a non-magnetic insulating shielding. The delayed heating is found to contribute up to ~25% uncertainties in SAR values. As the SAR values are very sensitive to the initial slope determination method, explicit mention of the range of linear regression analysis is appropriate to reproduce the results. The effect of sample volume to area ratio on linear heat loss rate is systematically studied and the
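The initial-slope step described above can be sketched as follows; the synthetic rise curve, material constants and fitting window are illustrative assumptions, not the authors' values:

```python
import numpy as np

def sar_initial_slope(t, T, cp, m_sample, m_mag, fit_window):
    """SAR from the initial slope of a temperature-rise curve; the regression
    window (in seconds) is passed explicitly so it can be reported, as the
    abstract recommends."""
    mask = (t >= fit_window[0]) & (t <= fit_window[1])
    slope, _ = np.polyfit(t[mask], T[mask], 1)    # K/s
    return cp * m_sample * slope / m_mag          # W per kg of magnetic material

# synthetic non-adiabatic rise curve (illustrative only)
t = np.linspace(0.0, 60.0, 601)
T = 25.0 + 2.0 * (1.0 - np.exp(-t / 100.0))      # slow saturating rise
sar = sar_initial_slope(t, T, cp=4186.0, m_sample=0.005, m_mag=5e-5,
                        fit_window=(0.0, 5.0))
```

Shifting `fit_window` even slightly changes the fitted slope on such a saturating curve, which is exactly why the abstract insists the regression range be stated explicitly.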
International Nuclear Information System (INIS)
Lahiri, B B; Ranoo, Surojit; Philip, John
2017-01-01
Magnetic fluid hyperthermia (MFH) is becoming a viable cancer treatment methodology where the alternating magnetic field induced heating of magnetic fluid is utilized for ablating the cancerous cells or making them more susceptible to the conventional treatments. The heating efficiency in MFH is quantified in terms of specific absorption rate (SAR), which is defined as the heating power generated per unit mass. In the majority of the experimental studies, SAR is evaluated from temperature rise curves obtained under non-adiabatic experimental conditions, which are prone to various thermodynamic uncertainties. A proper understanding of the experimental uncertainties and their remedies is a prerequisite for obtaining accurate and reproducible SAR. Here, we study the thermodynamic uncertainties associated with peripheral heating, delayed heating, heat loss from the sample and spatial variation in the temperature profile within the sample. Using first order approximations, an adiabatic reconstruction protocol for the measured temperature rise curves is developed for SAR estimation, which is found to be in good agreement with those obtained from the computationally intense slope corrected method. Our experimental findings clearly show that the peripheral and delayed heating are due to radiation heat transfer from the heating coils and the slower response time of the sensor, respectively. Our results suggest that the peripheral heating is linearly proportional to the sample area to volume ratio and coil temperature. It is also observed that peripheral heating decreases in the presence of a non-magnetic insulating shielding. The delayed heating is found to contribute up to ∼25% uncertainties in SAR values. As the SAR values are very sensitive to the initial slope determination method, explicit mention of the range of linear regression analysis is appropriate to reproduce the results. The effect of sample volume to area ratio on linear heat loss rate is systematically studied and
Bayesian Frequency Domain Identification of LTI Systems with OBFs kernels
Darwish, M.A.H.; Lataire, J.P.G.; Tóth, R.
2017-01-01
Regularised Frequency Response Function (FRF) estimation based on Gaussian process regression formulated directly in the frequency domain has been introduced recently. The underlying approach largely depends on the utilised kernel function, which encodes the relevant prior knowledge on the system
Wang, Xiaowei; Li, Huiping; Li, Zhichao
2018-04-01
The interfacial heat transfer coefficient (IHTC) is one of the most important thermophysical parameters, with significant effects on the calculation accuracy of physical fields in numerical simulation. In this study, the artificial fish swarm algorithm (AFSA), a global optimization method, was used to evaluate the IHTC between a heated sample and the quenchant in a one-dimensional heat conduction problem. To speed up convergence, a hybrid method combining AFSA with a normal distribution method (ZAFSA) was presented. The IHTC values evaluated by ZAFSA were compared with those obtained by AFSA and by the advanced-retreat and golden-section methods. The results show that ZAFSA yields a reasonable IHTC and that the hybrid method converges well. The algorithm based on ZAFSA can not only accelerate convergence but also reduce numerical oscillation in the evaluation of the IHTC.
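For comparison, the golden-section approach mentioned above can be sketched on a toy problem; the lumped-capacitance quench model and all property values are assumptions for illustration, not the paper's finite-difference formulation:

```python
import math

def cool_curve(h, t, T0=850.0, Tq=25.0, rho=7800.0, cp=600.0, V_over_A=0.005):
    """Lumped-capacitance quench model: dT/dt = -h A (T - Tq) / (rho cp V)."""
    tau = rho * cp * V_over_A / h          # thermal time constant, s
    return Tq + (T0 - Tq) * math.exp(-t / tau)

def golden_section_ihtc(measured, times, lo=100.0, hi=5000.0, tol=1e-3):
    """Recover a constant IHTC by golden-section search on the data misfit.
    (Recomputes both probe points each iteration for brevity.)"""
    g = (math.sqrt(5) - 1) / 2
    def cost(h):
        return sum((cool_curve(h, t) - m) ** 2 for t, m in zip(times, measured))
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if cost(c) < cost(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

times = [i * 2.0 for i in range(30)]
truth = 1200.0                              # W/(m^2 K), the "true" IHTC
measured = [cool_curve(truth, t) for t in times]
h_est = golden_section_ihtc(measured, times)
```

Because the misfit is unimodal in a constant IHTC, the bracketing search recovers it accurately; the appeal of AFSA-type global methods is precisely the cases where the misfit is not so well behaved.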
DEFF Research Database (Denmark)
Walder, Christian; Henao, Ricardo; Mørup, Morten
We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within class variances similar to Fisher discriminant analysis. The second, LSKPCA, is a hybrid of least squares regression and kernel PCA. The final, LR-KPCA, is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets.
Mixed kernel function support vector regression for global sensitivity analysis
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the many sensitivity analysis techniques in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
International Nuclear Information System (INIS)
Li, Yanhao; Wang, Guangjun; Chen, Hong
2015-01-01
The predictive control theory is utilized for the simultaneous estimation of the heat fluxes through the upper, side and lower surfaces of a steel slab in a walking beam type rolling steel reheating furnace. An inverse algorithm based on dynamic matrix control (DMC) is established. That is, each surface heat flux of a slab is simultaneously estimated through rolling optimization on the basis of temperature measurements in selected points of its interior, utilizing step response functions as the predictive model of the slab's temperature. The reliability of the DMC results is enhanced without assuming specific functional forms of the heat fluxes over a period of future time. The inverse algorithm applies separate regularizations to effectively improve the stability of the estimated results, accounting for the obvious strength differences between the upper, lower and side surface heat fluxes of the slab. - Highlights: • The predictive control theory is adopted. • An inversion scheme based on DMC is established. • Upper, side and lower surface heat fluxes of the slab are estimated based on DMC. • Separate regularizations are proposed to improve the stability of results
International Nuclear Information System (INIS)
Parwani, Ajit K.; Talukdar, Prabal; Subbarao, P.M.V.
2013-01-01
An inverse heat transfer problem is discussed to estimate simultaneously the unknown position and timewise varying strength of a heat source by utilizing a differential evolution approach. A two-dimensional enclosure with isothermal and black boundaries containing a non-scattering, absorbing and emitting gray medium is considered. Both radiation and conduction heat transfer are included. No prior information is used for the functional form of the timewise varying strength of the heat source. The finite volume method is used to solve the radiative transfer equation and the energy equation. In this work, instead of measured data, some temperature data required in the solution of the inverse problem are taken from the solution of the direct problem. The effect of measurement errors on the accuracy of estimation is examined by introducing errors into the temperature data of the direct problem. The prediction of source strength and its position by the differential evolution (DE) algorithm is found to be quite reasonable. -- Highlights: •Simultaneous estimation of strength and position of a heat source. •A conducting and radiatively participating medium is considered. •Implementation of differential evolution algorithm for such kind of problems. •Profiles with discontinuities can be estimated accurately. •No limitation in the determination of source strength at the final time
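A minimal DE/rand/1/bin loop illustrates the estimation idea on a toy steady 1-D source problem; the forward model and all constants are invented for illustration and are unrelated to the paper's radiation-conduction formulation:

```python
import numpy as np

def differential_evolution(cost, bounds, pop=30, gens=200, F=0.7, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin optimiser (an illustration, not the authors' code)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    X = rng.uniform(lo, hi, (pop, len(bounds)))
    f = np.array([cost(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i],
                                   3, replace=False)]
            # binomial crossover of the mutant a + F(b - c) with the target
            trial = np.where(rng.random(len(bounds)) < CR, a + F * (b - c), X[i])
            trial = np.clip(trial, lo, hi)
            ft = cost(trial)
            if ft < f[i]:                # greedy selection
                X[i], f[i] = trial, ft
    return X[f.argmin()], f.min()

# toy inverse problem: recover position x0 and strength q of a steady source
# from "measured" temperatures produced by an assumed Gaussian forward model
xs = np.linspace(0.0, 1.0, 21)
def forward(p):
    x0, q = p
    return q * np.exp(-(xs - x0) ** 2 / 0.02)

T_meas = forward([0.4, 5.0])
best, err = differential_evolution(
    lambda p: np.sum((forward(p) - T_meas) ** 2), [(0.0, 1.0), (0.0, 10.0)])
```

Because DE needs no gradients or functional form for the unknowns, the same loop applies when the source strength is a timewise profile discretized into many parameters, as in the paper.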
Model selection in kernel ridge regression
DEFF Research Database (Denmark)
Exterkate, Peter
2013-01-01
Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated with all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study
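Selecting the tuning parameters from a small grid by cross-validation, as recommended above, can be sketched like this (Gaussian kernel only; the grid values and data are illustrative):

```python
import numpy as np

def gauss_K(X, Y, h):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h ** 2))

def krr_cv(X, y, bandwidths, lambdas, folds=5, seed=0):
    """Pick (bandwidth, ridge penalty) for kernel ridge regression from a
    small grid by k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    best, best_err = None, np.inf
    for h in bandwidths:
        for lam in lambdas:
            err = 0.0
            for k in range(folds):
                te = idx[k::folds]                     # held-out fold
                tr = np.setdiff1d(idx, te)             # training fold
                alpha = np.linalg.solve(gauss_K(X[tr], X[tr], h)
                                        + lam * np.eye(len(tr)), y[tr])
                pred = gauss_K(X[te], X[tr], h) @ alpha
                err += np.mean((pred - y[te]) ** 2)
            if err < best_err:
                best, best_err = (h, lam), err
    return best

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, (120, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(120)
h, lam = krr_cv(X, y, bandwidths=[0.05, 0.3, 1.0], lambdas=[1e-4, 1e-1])
```

The paper's interpretation of the bandwidth as a smoothness measure and of the penalty as a signal-to-noise quantity is what justifies keeping the grids this small.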
International Nuclear Information System (INIS)
Rana, Rajib; Kusy, Brano; Wall, Josh; Hu, Wen
2015-01-01
Reductions in HVAC (heating, ventilation and air conditioning) energy consumption can be achieved by limiting heating in the winter or cooling in the summer. However, the resulting low thermal comfort of building occupants may lead to an override of the HVAC control, which revokes its original purpose. This has led to an increased interest in modeling and real-time tracking of location, activity, and thermal comfort of building occupants for HVAC energy management. While thermal comfort is well understood, it is difficult to measure in real-time environments where user context changes dynamically. Encouragingly, the plethora of sensors available on smartphones opens the opportunity to measure user context in real time. An important piece of contextual information for measuring thermal comfort is the metabolic rate, which changes with current physical activity. To measure physical activity, we develop an activity classifier, which achieves 10% higher accuracy compared to Support Vector Machine and k-Nearest Neighbor classifiers. Office occupancy is another piece of contextual information for energy-efficient HVAC control. Most phone-based occupancy estimation techniques fail to determine occupancy when phones are left at a desk while the occupant is sitting nearby or attending meetings. We propose a novel sensor fusion method to detect whether a user is near the phone, which achieves more than 90% accuracy. By determining activity and occupancy, our proposed algorithms can help maintain thermal comfort while reducing HVAC energy consumption. - Highlights: • We propose activity and occupancy detection for efficient HVAC control. • Activity classifier achieves 10% higher accuracy than SVM and kNN. • For occupancy detection we propose a novel sensor fusion method. • Using Weighted Majority Voting we fuse microphone and accelerometer data on the phone. • We achieve more than 90% accuracy in detecting occupancy.
Multiple Kernel Learning with Data Augmentation
2016-11-22
JMLR: Workshop and Conference Proceedings 63:49–64, 2016 ACML 2016 Multiple Kernel Learning with Data Augmentation Khanh Nguyen nkhanh@deakin.edu.au...University, Australia Editors: Robert J. Durrant and Kee-Eung Kim Abstract The motivations of the multiple kernel learning (MKL) approach are to increase kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to
A kernel version of multivariate alteration detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2013-01-01
Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.
Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.
2012-01-01
The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an
International Nuclear Information System (INIS)
Burnett, R.C.; Hankart, L.J.; Horsley, G.W.
1965-05-01
The development of methods of producing spheroidal sintered porous kernels of hyperstoichiometric thorium/uranium dicarbide solid solution from thorium/uranium monocarbide/carbon and thoria/urania/carbon powder mixes is described. The work has involved study of (i) Methods of preparing green kernels from UC/Th/C powder mixes using the rotary sieve technique. (ii) Methods of producing green kernels from UO2/ThO2/C powder mixes using the planetary mill technique. (iii) The conversion by appropriate heat treatment of green kernels produced by both routes to sintered porous kernels of thorium/uranium carbide. (iv) The efficiency of the processes. (author)
Complex use of cottonseed kernels
Energy Technology Data Exchange (ETDEWEB)
Glushenkova, A I
1977-01-01
A review with 41 references is made on the manufacture of oil, protein, and other products from cottonseed, the effects of gossypol on protein yield and quality, and the technology of gossypol removal. A process eliminating thermal treatment of the kernels and permitting the production of oil, proteins, phytin, gossypol, sugar, sterols, phosphatides, tocopherols, and residual shells and bagasse is described.
GRIM: Leveraging GPUs for Kernel integrity monitoring
Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris
2016-01-01
Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious
Paramecium: An Extensible Object-Based Kernel
van Doorn, L.; Homburg, P.; Tanenbaum, A.S.
1995-01-01
In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection
Local Observed-Score Kernel Equating
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
Veto-Consensus Multiple Kernel Learning
Zhou, Y.; Hu, N.; Spanos, C.J.
2016-01-01
We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The
International Nuclear Information System (INIS)
Ouagued, Malika; Khellaf, Abdallah; Loukarfi, Larbi
2013-01-01
Highlights: • Estimation of direct solar radiation for different tracking systems at six typical locations in Algeria. • PTC thermal model uses energy balances from the HTF to the atmosphere. • The model depends on the collector type, nature of HTF, optical properties, and ambient conditions. • Estimation of temperature, heat gain and energy cost of thermal oils used in the model. • Comparison between monthly mean heat gain of the various thermal oils for six Algerian locations. - Abstract: Algeria is blessed with a very important renewable, and more particularly solar, energy potential. This potential opens real opportunities for Algeria to cope with the increasing energy demand and the growing environmental problems linked to the use of fossil fuels. In order to develop and promote concrete actions in the areas of renewable energy and energy efficiency, Algeria has introduced an ambitious national program for the period 2011–2030. In this program solar energy, and more particularly solar thermal energy, plays an important role. In this paper, the potential of direct solar irradiance in Algeria and the performance of a solar parabolic trough collector (PTC) are estimated under the climate conditions of the country. These two factors are treated as they play an important role in the design of solar thermal plants. In order to determine the most promising solar sites in Algeria, monthly mean daily direct solar radiation has been estimated and compared for different locations corresponding to different climatic regions. Different tilted and tracking collectors are considered so as to determine the most efficient system for the PTC. In order to evaluate the performance of a tracking solar parabolic trough collector, a heat transfer model is developed. The receiver, or heat collector element (HCE), is divided into several segments and a heat balance is applied in each segment over a section of the solar receiver. Different oils are considered to determine the thermal
Initial estimates of anthropogenic heat emissions for the City of Durban
CSIR Research Space (South Africa)
Padayachi, Yerdashin R
2018-03-01
Cities in South Africa are key hotspots for regional emissions and climate change impacts including the urban heat island effect. Anthropogenic Heat (AH) emission is an important driver of warming in urban areas. The implementation of mitigation...
Optimal kernel shape and bandwidth for atomistic support of continuum stress
International Nuclear Information System (INIS)
Ulz, Manfred H; Moran, Sean J
2013-01-01
The treatment of atomistic scale interactions via molecular dynamics simulations has recently found favour for multiscale modelling within engineering. The estimation of stress at a continuum point on the atomistic scale requires a pre-defined kernel function. This kernel function derives the stress at a continuum point by averaging the contribution from atoms within a region surrounding the continuum point. This averaging volume, and therefore the associated stress at a continuum point, is highly dependent on the bandwidth and shape of the kernel. In this paper we propose an effective and entirely data-driven strategy for simultaneously computing the optimal shape and bandwidth for the kernel. We thoroughly evaluate our proposed approach on copper using three classical elasticity problems. Our evaluation yields three key findings: firstly, our technique can provide a physically meaningful estimation of kernel bandwidth; secondly, we show that a uniform kernel is preferred, thereby justifying the default selection of this kernel shape in future work; and thirdly, we can reliably estimate both of these attributes in a data-driven manner, obtaining values that lead to an accurate estimation of the stress at a continuum point. (paper)
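The uniform-kernel averaging that the paper ends up favouring can be sketched as follows; the per-atom virial values and the spherical averaging volume are illustrative assumptions, not the paper's copper data:

```python
import numpy as np

def uniform_kernel_stress(positions, virials, point, bandwidth):
    """Continuum stress at `point` as a uniform-kernel average: sum the
    per-atom virial contributions inside the bandwidth sphere and divide
    by its volume."""
    r = np.linalg.norm(positions - point, axis=1)
    inside = r < bandwidth
    volume = 4.0 / 3.0 * np.pi * bandwidth ** 3
    return virials[inside].sum(axis=0) / volume

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, (4000, 3))          # synthetic atom positions
# assume an identical diagonal per-atom virial; the averaged stress should
# then be (number density) * (per-atom virial)
v0 = np.diag([1.0, 1.0, 1.0])
vir = np.broadcast_to(v0, (4000, 3, 3)).copy()
sigma = uniform_kernel_stress(pos, vir, point=np.array([5.0, 5.0, 5.0]),
                              bandwidth=2.0)
```

The statistical scatter of `sigma` as the bandwidth shrinks is precisely the effect that makes a data-driven bandwidth choice necessary.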
Estimation of the economical and ecological efficiency of the solar heat supply in Russia
International Nuclear Information System (INIS)
Marchenko, O.V.; Solomin, S.V.
2001-01-01
A numerical study was carried out of the efficiency of solar heat supply systems under the climatic conditions of Russia, with regard to their economic competitiveness with conventional fossil-fuel heat sources and their role in the reduction of greenhouse gas emissions. The regions were identified where, under certain conditions, the application of solar energy to generate low-grade heat may be reasonable.
Luciani, S.; LeNiliot, C.
2008-11-01
Two-phase and boiling flow instabilities are complex, due to phase change and the existence of several interfaces. To fully understand the high heat transfer potential of boiling flows in microscale geometries, it is vital to quantify these transfers. To perform this task, an experimental device has been designed to observe flow patterns. The analysis is carried out using an inverse method which allows us to estimate the local heat transfers while boiling occurs inside a microchannel. In our configuration, direct measurement would impair the accuracy of the sought heat transfer coefficient, because thermocouples implanted on the minichannel surface would disturb the established flow. In this communication, we solve a 3D inverse heat conduction problem (IHCP) which consists of estimating, from experimental temperature measurements, the surface temperature and the surface heat flux in a minichannel during convective boiling under several gravity levels (g, 1g, 1.8g). The considered IHCP is formulated as a mathematical optimization problem and solved using the boundary element method (BEM).
Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.
2006-01-01
Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). In Part I of this series, improvements of the TMI algorithm that are required to introduce latent heating as an additional algorithm product are described. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5 deg. -resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly 2.5 -resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain-rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous 0.5 deg. -resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons with collocated
Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.
2004-01-01
Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). Part I of this study describes improvements in the TMI algorithm that are required to introduce cloud latent heating and drying as additional algorithm products. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5 deg.-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over forerunning algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm, and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly, 2.5 deg.-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with: (a) additional contextual information brought to the estimation problem, and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous, 0.5 deg.-resolution rain rate estimates appears to be consistent with the levels of error determined from TMI comparisons to collocated radar
Directory of Open Access Journals (Sweden)
Senyue Zhang
2016-01-01
According to the characteristic that the kernel function of an extreme learning machine (ELM) and its performance are strongly correlated, a novel extreme learning machine based on a generalized triangle Hermitian kernel function was proposed in this paper. First, the generalized triangle Hermitian kernel function was constructed as the product of a triangular kernel and a generalized Hermite Dirichlet kernel, and the proposed kernel function was proved to be a valid kernel function for extreme learning machines. Then, the learning methodology of the extreme learning machine based on the proposed kernel function was presented. The biggest advantage of the proposed kernel is that its kernel parameter takes values only in the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the sample data structure information. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experiment results demonstrate that the proposed method outperforms other extreme learning machines with different kernels in robustness and generalization performance. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
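The kernel form of ELM training (output weights from a single regularized linear solve) can be sketched as below; a plain Gaussian kernel stands in for the paper's generalized triangle Hermitian kernel, whose exact formula the abstract does not give:

```python
import numpy as np

def kernel_elm_fit(X, T, kernel, C=100.0):
    """Kernel ELM training: output weights beta = (I/C + K)^{-1} T,
    a single linear solve with regularization parameter C."""
    K = kernel(X, X)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kernel_elm_predict(Xnew, X, beta, kernel):
    return kernel(Xnew, X) @ beta

# Gaussian kernel as a stand-in for the paper's proposed kernel
rbf = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (100, 1))
T = np.sign(X[:, 0])                     # two-class toy problem
beta = kernel_elm_fit(X, T, rbf)
pred = np.sign(kernel_elm_predict(X, X, beta, rbf))
```

Because training is one linear solve rather than an iterative optimization, the learning-speed advantage over SVM solvers claimed in the abstract is plausible on its face; the paper's contribution is the integer-valued kernel parameter that shrinks the tuning grid.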
Directory of Open Access Journals (Sweden)
Milčić Dragan S.
2012-01-01
Friction stir welding is a solid-state welding technique that utilizes the thermomechanical influence of the rotating welding tool on the parent material, resulting in a monolithic joint - the weld. At the contact between the welding tool and the parent material, significant stirring and deformation of the parent material occurs, and during this process mechanical energy is partially transformed into heat. The generated heat affects the temperature of the welding tool and the parent material, so the proposed analytical model for estimating the amount of generated heat can be verified by temperature: the analytically determined heat is used for a numerical estimation of the temperature of the parent material, and this temperature is compared to the experimentally determined temperature. The numerical solution is obtained using the finite difference method - an explicit scheme with adaptive grid, considering the influence of temperature on the material's conductivity, contact conditions between the welding tool and the parent material, material flow around the welding tool, etc. The analytical model shows that 60-100% of the mechanical power delivered to the welding tool is transformed into heat, while the comparison of results shows a maximal relative difference between the analytical and experimental temperatures of about 10%.
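The explicit finite-difference scheme mentioned above can be sketched in one dimension; the material properties, grid and hot-spot placement are assumed values for illustration, not the paper's adaptive-grid model:

```python
import numpy as np

def explicit_heat_step(T, dx, dt, k, rho, cp, q=0.0):
    """One explicit finite-difference step of 1-D heat conduction with an
    optional volumetric source q; stable when dt <= dx**2 * rho * cp / (2k).
    Boundary nodes are held fixed (Dirichlet)."""
    alpha = k / (rho * cp)
    Tn = T.copy()
    Tn[1:-1] = (T[1:-1]
                + alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
                + q * dt / (rho * cp))
    return Tn

# aluminium-like properties (assumed values)
k, rho, cp = 160.0, 2700.0, 900.0
dx, dt = 1e-3, 0.001
assert dt <= dx ** 2 * rho * cp / (2 * k)     # explicit-scheme stability bound
T = np.full(101, 25.0)
T[50] = 500.0                                  # hot spot under the tool shoulder
for _ in range(1000):                          # simulate 1 s of diffusion
    T = explicit_heat_step(T, dx, dt, k, rho, cp)
```

The stability bound on `dt` is what the paper's adaptive grid has to respect locally; a temperature-dependent conductivity would simply make `alpha` a per-node array.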
Delimiting areas of endemism through kernel interpolation.
Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J
2015-01-01
We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges, and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
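The centroid-plus-influence-radius construction can be sketched as follows; smoothing each centroid by a Gaussian whose width is that species' influence radius is our reading of the method, and the occurrence data are synthetic:

```python
import numpy as np

def gie_surface(species_occurrences, grid, bandwidth_scale=1.0):
    """Endemism surface: for each species, place a Gaussian kernel at the
    centroid of its occurrences, with bandwidth set by the influence radius
    (centroid-to-farthest-occurrence distance), and sum over species."""
    surface = np.zeros(len(grid))
    for occ in species_occurrences:
        occ = np.asarray(occ, float)
        c = occ.mean(axis=0)                                 # range centroid
        radius = max(np.linalg.norm(occ - c, axis=1).max(), 1e-6)
        h = bandwidth_scale * radius
        d2 = ((grid - c) ** 2).sum(axis=1)
        surface += np.exp(-d2 / (2 * h ** 2))
    return surface

# two synthetic "species" sharing a small range near the origin,
# plus one widespread species far away
species = [[(0, 0), (1, 0), (0, 1)],
           [(0.5, 0.5), (-0.5, 0)],
           [(8, 8), (12, 12)]]
gx, gy = np.meshgrid(np.linspace(-2, 14, 33), np.linspace(-2, 14, 33))
grid = np.column_stack([gx.ravel(), gy.ravel()])
endemism = gie_surface(species, grid)
```

The surface peaks where several small-ranged (synendemic) species overlap, which is how GIE delimits an area of endemism without committing to any grid-cell size; the grid here is only for plotting the surface.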
Watts, C.J.; Chehbouni, A.; Rodriguez, J.C.; Kerr, Y.H.; Hartogensis, O.K.; Bruin, de H.A.R.
2000-01-01
The problems associated with the validation of satellite-derived estimates of the surface fluxes are discussed and the possibility of using the large aperture scintillometer is investigated. Simple models are described to derive surface temperature and sensible heat flux from the advanced very high
Wigner functions defined with Laplace transform kernels.
Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George
2011-10-24
We propose a new Wigner-type phase-space function using Laplace transform kernels: the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America
Digestibility of solvent-treated Jatropha curcas kernel by broiler chickens in Senegal.
Nesseim, Thierry Daniel Tamsir; Dieng, Abdoulaye; Mergeai, Guy; Ndiaye, Saliou; Hornick, Jean-Luc
2015-12-01
Jatropha curcas is a drought-resistant shrub belonging to the Euphorbiaceae family. The kernel contains approximately 60 % lipid in dry matter, and the meal obtained after oil extraction could be an exceptional source of protein for family poultry farming, in the absence of curcin and, especially, of certain partially lipophilic diterpene derivatives, the phorbol esters. The nutrient digestibility of J. curcas kernel meal (JKM), obtained after partial physicochemical deoiling, was thus evaluated in broiler chickens. Twenty broiler chickens, 6 weeks old, were maintained in individual metabolic cages and divided into four groups of five animals, according to a 4 × 4 Latin square design in which deoiled JKM was incorporated into ground corn at 0, 4, 8, and 12 % levels (diets 0, 4, 8, and 12 J), allowing measurement of nutrient digestibility by the differential method. The dry matter (DM) and organic matter (OM) digestibility of the diets was affected to a low extent by JKM (85 and 86 % in 0 J and 81 % in 12 J, respectively), such that the DM and OM digestibility of JKM was estimated to be close to 50 %. The ether extract (EE) digestibility of JKM remained high, at about 90 %, while crude protein (CP) and crude fiber (CF) digestibility were strongly affected by JKM, with values close to 40 % at the highest levels of incorporation. J. curcas kernel thus presents variable nutrient digestibilities and has adverse effects on the CP and CF digestibility of the diet. The effects of an additional heat or biological treatment on JKM remain to be assessed.
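The differential (substitution) method used here infers the test ingredient's digestibility from the basal and mixed diets. A short arithmetic sketch, using illustrative values close to the reported dry-matter figures:

```python
def digestibility_by_difference(d_basal, d_mix, inclusion):
    """Differential (substitution) method: the digestibility of the test
    ingredient is inferred from a basal diet and a mixed diet in which
    the ingredient replaces a fraction `inclusion` of the basal feed."""
    return (d_mix - (1.0 - inclusion) * d_basal) / inclusion

# illustrative values close to those reported for dry matter:
# basal diet (0 J) 85 % digestible, 12 % JKM diet (12 J) 81 % digestible
d_jkm = digestibility_by_difference(0.85, 0.81, 0.12)   # ~ 0.52, i.e. close to 50 %
```

The result is consistent with the abstract's statement that JKM dry-matter digestibility is close to 50 %.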
DEFF Research Database (Denmark)
Kærn, Martin Ryhl; Modi, Anish; Jensen, Jonas Kjær
2015-01-01
Transport properties of fluids are indispensable for heat exchanger design. The methods for estimating the transport properties of ammonia–water mixtures are not well established in the literature. The few existent methods are developed from none or limited, sometimes inconsistent experimental...... of ammonia–water mixtures. Firstly, the different methods are introduced and compared at various temperatures and pressures. Secondly, their individual influence on the required heat exchanger size (surface area) is investigated. For this purpose, two case studies related to the use of the Kalina cycle...... the interpolative methods in contrast to the corresponding state methods. Nevertheless, all possible mixture transport property combinations used herein resulted in a heat exchanger size within 4.3 % difference for the flue-gas heat recovery boiler, and within 12.3 % difference for the oil-based boiler....
Surface and top-of-atmosphere radiative feedback kernels for CESM-CAM5
Pendergrass, Angeline G.; Conley, Andrew; Vitt, Francis M.
2018-02-01
Radiative kernels at the top of the atmosphere are useful for decomposing changes in atmospheric radiative fluxes due to feedbacks from atmosphere and surface temperature, water vapor, and surface albedo. Here we describe and validate radiative kernels calculated with the large-ensemble version of CAM5, CESM1.1.2, at the top of the atmosphere and the surface. Estimates of the radiative forcing from greenhouse gases and aerosols in RCP8.5 in the CESM large-ensemble simulations are also diagnosed. As an application, feedbacks are calculated for the CESM large ensemble. The kernels are freely available at https://doi.org/10.5065/D6F47MT6, and accompanying software can be downloaded from https://github.com/apendergrass/cam5-kernels.
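The kernel decomposition the abstract describes (multiply a precomputed radiative kernel by a climate anomaly, then normalise by global warming) can be sketched with entirely hypothetical uniform fields; real kernels are four-dimensional and need proper area and vertical weighting.

```python
import numpy as np

def kernel_feedback(kernel, danomaly, dT_global):
    """Radiative-kernel decomposition sketch: multiply a precomputed
    kernel (W m^-2 per unit change of the state variable) by the
    climate-change anomaly of that variable, then normalise by global
    warming to get a feedback in W m^-2 K^-1. Shapes are (lat, lon)."""
    dR = kernel * danomaly             # flux change attributable to this variable
    return dR.mean() / dT_global       # crude mean; real use needs cos(lat) weights

# hypothetical numbers: temperature kernel of -1.8 W m^-2 K^-1 everywhere,
# uniform 3 K warming under a forcing scenario
K = np.full((4, 8), -1.8)
dT = np.full((4, 8), 3.0)
lam = kernel_feedback(K, dT, dT_global=3.0)   # Planck-like feedback, -1.8 W m^-2 K^-1
```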
PERI - auto-tuning memory-intensive kernels for multicore
International Nuclear Information System (INIS)
Williams, S; Carter, J; Oliker, L; Shalf, J; Yelick, K; Bailey, D; Datta, K
2008-01-01
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to sparse matrix vector multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the high-performance computing literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4x improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
PERI - Auto-tuning Memory Intensive Kernels for Multicore
Energy Technology Data Exchange (ETDEWEB)
Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H
2008-06-24
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
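The Roofline model used in both reports bounds attainable performance by the minimum of the compute peak and memory bandwidth times arithmetic intensity. A minimal sketch with illustrative machine numbers (not the figures of any platform in the study):

```python
def roofline(peak_flops, peak_bw, arith_intensity):
    """Roofline performance bound: attainable GFLOP/s is the lesser of
    the compute peak (GFLOP/s) and memory bandwidth (GB/s) times
    arithmetic intensity (FLOPs per byte moved)."""
    return min(peak_flops, peak_bw * arith_intensity)

# illustrative machine: 75 GFLOP/s peak, 21 GB/s memory bandwidth (assumed)
bound_spmv = roofline(75.0, 21.0, 0.17)      # SpMV-like kernels are bandwidth-bound
bound_stencil = roofline(75.0, 21.0, 0.5)    # stencil has somewhat higher intensity
```

Low-intensity kernels like SpMV sit on the bandwidth-limited slope of the roofline, which is why auto-tuning for them focuses on reducing memory traffic rather than raw FLOPs.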
Credit scoring analysis using kernel discriminant
Widiharih, T.; Mukid, M. A.; Mustafid
2018-05-01
A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared with each other using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model; sensitivity and specificity reach 0.5556 and 0.5488, respectively.
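A nonparametric kernel discriminant of the kind described, classifying each applicant to the class whose kernel density estimate is higher, can be sketched in one dimension with a normal kernel and synthetic "good"/"bad" applicant scores. The bandwidth and data are illustrative, not the paper's.

```python
import numpy as np

def kde_score(x, sample, h):
    """Normal-kernel density estimate at points x from a 1-D sample."""
    u = (x[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (sample.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
good = rng.normal(0.0, 1.0, 200)      # synthetic scores of good applicants
bad = rng.normal(2.5, 1.0, 200)       # synthetic scores of defaulters
x = np.concatenate([good, bad])
y = np.concatenate([np.zeros(200), np.ones(200)])

# nonparametric discriminant: assign to the class with the higher kernel density
pred = (kde_score(x, bad, 0.5) > kde_score(x, good, 0.5)).astype(float)
sensitivity = ((pred == 1) & (y == 1)).sum() / (y == 1).sum()
specificity = ((pred == 0) & (y == 0)).sum() / (y == 0).sum()
```

With well-separated synthetic classes both measures are high; on real, overlapping credit data they fall to values like the 0.55 reported in the paper.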
Testing Infrastructure for Operating System Kernel Development
DEFF Research Database (Denmark)
Walter, Maxwell; Karlsson, Sven
2014-01-01
Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge, as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel.
Kernel parameter dependence in spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2010-01-01
kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence of the results on the kernel width. The 2,097 samples, each covering on average 5 km2, are analyzed chemically for the content of 41 elements.
A multi-method and multi-scale approach for estimating city-wide anthropogenic heat fluxes
Chow, Winston T. L.; Salamanca, Francisco; Georgescu, Matei; Mahalov, Alex; Milne, Jeffrey M.; Ruddell, Benjamin L.
2014-12-01
A multi-method approach estimating summer waste heat emissions from anthropogenic activities (QF) was applied to a major subtropical city (Phoenix, AZ). These methods included detailed, quality-controlled inventories of city-wide population density and traffic counts to estimate waste heat emissions from population and vehicular sources, respectively, as well as waste heat simulations derived from urban electrical consumption generated by a coupled building energy - regional climate model (WRF-BEM + BEP). These component QF data were subsequently summed and mapped using Geographic Information Systems techniques to enable analysis over local (i.e. census-tract) and regional (i.e. metropolitan area) scales. Through this approach, local mean daily QF estimates compared reasonably well with (1) observed daily surface energy balance residuals from an eddy covariance tower sited within a residential area and (2) estimates from inventory methods employed in a prior study, with improved sensitivity to temperature and precipitation variations. Regional analysis indicates substantial variations in both mean and maximum daily QF, which varied with urban land use type. Average regional daily QF was ∼13 W m-2 for the summer period. Temporal analyses also indicated notable differences between this approach and previous estimates of QF in Phoenix over different land uses, with much larger peak fluxes, averaging ∼50 W m-2, occurring in commercial or industrial areas during late summer afternoons. The spatio-temporal analysis of QF also suggests that it may influence the form and intensity of the Phoenix urban heat island, specifically through additional early evening heat input and by modifying the urban boundary layer structure through increased turbulence.
Migration of the ThO2 kernels under the influence of a temperature gradient
International Nuclear Information System (INIS)
Smith, C.L.
1977-01-01
Biso-coated ThO2 fertile fuel kernels will migrate up the thermal gradients imposed across coated particles during high-temperature gas-cooled reactor (HTGR) operation. Thorium dioxide kernel migration has been studied as a function of temperature (1290 to 1705 °C; 1563 to 1978 K) and ThO2 kernel burnup (0.9 to 5.8 percent FIMA) in out-of-pile postirradiation thermal gradient heating experiments. The studies were conducted to obtain descriptions of migration rates that will be used in core design studies to evaluate the impact of ThO2 migration on fertile fuel performance in an operating HTGR and to define characteristics needed by any comprehensive model describing ThO2 kernel migration. The kinetics data generated in these postirradiation studies are consistent with in-pile data collected by investigators at Oak Ridge National Laboratory, which supports use of the more precise postirradiation heating results in HTGR core design studies. Observations of intergranular carbon deposits on the cool side of migrating kernels support the assumption that the kinetics of kernel migration are controlled by solid-state diffusion within irradiated ThO2 kernels. The migration is characterized by a period of no migration (incubation period), followed by migration at the equilibrium rate for ThO2. The incubation period decreases with increasing temperature and kernel burnup. The improved understanding of the kinetics of ThO2 kernel migration provided by this work will contribute to an optimization of HTGR core design and an increased confidence in fuel performance predictions.
Migration of ThO2 kernels under the influence of a temperature gradient
International Nuclear Information System (INIS)
Smith, C.L.
1976-11-01
BISO-coated ThO2 fertile fuel kernels will migrate up the thermal gradients imposed across coated particles during HTGR operation. Thorium dioxide kernel migration has been studied as a function of temperature (1300 to 1700 °C) and ThO2 kernel burnup (0.9 to 5.8 percent FIMA) in out-of-pile, postirradiation thermal gradient heating experiments. The studies were conducted to obtain descriptions of migration rates that will be used in core design studies to evaluate the impact of ThO2 migration on fertile fuel performance in an operating HTGR and to define characteristics needed by any comprehensive model describing ThO2 kernel migration. The kinetics data generated in these postirradiation studies are consistent with in-pile data collected by investigators at Oak Ridge National Laboratory, which supports use of the more precise postirradiation heating results in HTGR core design studies. Observations of intergranular carbon deposits on the cool side of migrating kernels support the assumption that the kinetics of kernel migration are controlled by solid-state diffusion within irradiated ThO2 kernels. The migration is characterized by a period of no migration (incubation period) followed by migration at the equilibrium rate for ThO2. The incubation period decreases with increasing temperature and kernel burnup. The improved understanding of the kinetics of ThO2 kernel migration provided by this work will contribute to an optimization of HTGR core design and an increased confidence in fuel performance predictions.
Heating of field-reversed plasma rings estimated with two scaling models
Energy Technology Data Exchange (ETDEWEB)
Shearer, J.W.
1978-05-18
Scaling calculations are presented for the one-temperature heating of a field-reversed plasma ring. Two sharp-boundary models of the ring are considered: the long-thin approximation and a pinch model. Isobaric, adiabatic, and isovolumetric cases are considered, corresponding to various ways of heating the plasma in a real experiment by using neutral beams or by raising the magnetic field. It is found that the shape of the plasma changes markedly with heating. The least sensitive shape change (as a function of temperature) is found for the isovolumetric heating case, which can be achieved by combining neutral beam heating with compression. The complications introduced by this heating problem suggest that it is desirable, if possible, to create a field-reversed ring that is already quite hot, rather than cold.
An examination of the estimation method for the specific heat of TRU dioxides: evaluation with PuO2
International Nuclear Information System (INIS)
Serizawa, H.; Arai, Y.
2000-01-01
This work set out to study an estimation method for the specific heat, Cp, of the dioxides of the transuranic elements. Cp was evaluated as a sum of three terms: the contributions of phonon vibration, Cph, of dilation, Cd, and of the Schottky specific heat, Cs. Cph and Cd were calculated using the Debye temperature and Grueneisen constant obtained by high-temperature X-ray diffractometry. The method was applied to PuO2. The estimated Cp was in good agreement with the value measured using a calorimeter. The error in the estimation was small compared to that arising from the conventional method based on Cp(298) and the melting temperature. (orig.)
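The phonon term Cph is typically obtained from the Debye model. A sketch of the Debye heat capacity by numerical quadrature, with a hypothetical Debye temperature (not the fitted value for PuO2):

```python
import numpy as np

def debye_cv(T, theta_d, n_atoms=3, R=8.314):
    """Phonon (Debye) molar heat capacity, the C_ph term, by simple
    numerical quadrature of the Debye integral; n_atoms is the number
    of atoms per formula unit (3 for a dioxide MO2)."""
    x_d = theta_d / T
    x = np.linspace(1e-6, x_d, 4000)
    integrand = x ** 4 * np.exp(x) / np.expm1(x) ** 2
    return 9.0 * n_atoms * R * (T / theta_d) ** 3 * integrand.sum() * (x[1] - x[0])

# hypothetical Debye temperature of 400 K for an actinide dioxide (assumed)
cv_room = debye_cv(298.0, 400.0)    # below the Dulong-Petit limit
cv_high = debye_cv(1500.0, 400.0)   # approaches 3 * n_atoms * R ~ 74.8 J/(mol K)
```

At high temperature the phonon term saturates at the Dulong-Petit value, which is why the dilation and Schottky terms dominate the remaining temperature dependence of Cp.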
Estimating the workpiece-backingplate heat transfer coefficient in friction stirwelding
DEFF Research Database (Denmark)
Larsen, Anders; Stolpe, Mathias; Hattel, Jesper Henri
2012-01-01
Purpose - The purpose of this paper is to determine the magnitude and spatial distribution of the heat transfer coefficient between the workpiece and the backingplate in a friction stir welding process using inverse modelling. Design/methodology/approach - The magnitude and distribution of the heat...... in an inverse modeling approach to determine the heat transfer coefficient in friction stir welding. © Emerald Group Publishing Limited....
Innovative research reactor core designed. Estimation and analysis of gamma heating distribution
International Nuclear Information System (INIS)
Setiyanto
2014-01-01
The Gamma heating value is an important factor needed for safety analysis of each experiments that will be realized on research reactor core. Gamma heat is internal heat source occurs in each irradiation facilities or any material irradiated in reactor core. This value should be determined correctly because of the safety related problems. The gamma heating value is in general depend on. reactor core characteristics, different one and other, and then each new reactor design should be completed by gamma heating data. The Innovative Research Reactor is one of the new reactor design that should be completed with any safety data, including the gamma heating value. For this reasons, calculation and analysis of gamma heating in the hole of reactor core and irradiation facilities in reflector had been done by using of modified and validated Gamset computer code. The result shown that gamma heating value of 11.75 W/g is the highest value at the center of reactor core, higher than gamma heating value of RSG-GAS. However, placement of all irradiation facilities in reflector show that safety characteristics for irradiation facilities of innovative research reactor more better than RSG-GAS reactor. Regarding the results obtained, and based on placement of irradiation facilities in reflector, can be concluded that innovative research reactor more safe for any irradiation used. (author)
Ershadi, Ali
2013-05-01
The influence of uncertainty in land surface temperature, air temperature, and wind speed on the estimation of sensible heat flux is analyzed using a Bayesian inference technique applied to the Surface Energy Balance System (SEBS) model. The Bayesian approach allows for an explicit quantification of the uncertainties in input variables: a source of error generally ignored in surface heat flux estimation. An application using field measurements from the Soil Moisture Experiment 2002 is presented. The spatial variability of selected input meteorological variables in a multitower site is used to formulate the prior estimates for the sampling uncertainties, and the likelihood function is formulated assuming Gaussian errors in the SEBS model. Land surface temperature, air temperature, and wind speed were estimated by sampling their posterior distribution using a Markov chain Monte Carlo algorithm. Results verify that Bayesian-inferred air temperature and wind speed were generally consistent with those observed at the towers, suggesting that local observations of these variables were spatially representative. Uncertainties in the land surface temperature appear to have the strongest effect on the estimated sensible heat flux, with Bayesian-inferred values differing by up to ±5°C from the observed data. These differences suggest that the footprint of the in situ measured land surface temperature is not representative of the larger-scale variability. As such, these measurements should be used with caution in the calculation of surface heat fluxes and highlight the importance of capturing the spatial variability in the land surface temperature: particularly, for remote sensing retrieval algorithms that use this variable for flux estimation.
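The Markov chain Monte Carlo sampling described can be sketched with a toy bulk-transfer stand-in for SEBS and a Metropolis sampler over the land surface temperature. All numbers, the priors, and the model itself are illustrative assumptions, not those of the study.

```python
import numpy as np

rho_cp, ch = 1200.0, 0.005           # air heat capacity, bulk transfer coeff (assumed)

def model_H(ts, ta, u):
    """Toy bulk-transfer stand-in for SEBS: H = rho*cp*Ch*u*(Ts - Ta)."""
    return rho_cp * ch * u * (ts - ta)

rng = np.random.default_rng(1)
ts_true, ta, u = 305.0, 298.0, 3.0
H_obs = model_H(ts_true, ta, u) + rng.normal(0, 5.0, 50)   # noisy flux "observations"

def log_post(ts):
    """Gaussian prior on Ts plus Gaussian likelihood of the flux data."""
    prior = -0.5 * ((ts - 304.0) / 2.0) ** 2
    like = -0.5 * np.sum((H_obs - model_H(ts, ta, u)) ** 2) / 5.0 ** 2
    return prior + like

# Metropolis sampling of the posterior of the land surface temperature
samples, ts = [], 300.0
for _ in range(5000):
    prop = ts + rng.normal(0, 0.3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(ts):
        ts = prop
    samples.append(ts)
post = np.array(samples[1000:])      # discard burn-in
```

The posterior mean recovers the surface temperature consistent with the flux data, and the posterior spread quantifies how strongly the flux constrains it.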
Validation of Born Traveltime Kernels
Baig, A. M.; Dahlen, F. A.; Hung, S.
2001-12-01
Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective, even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of the traveltime compare to various theoretical predictions in a given regime.
Energy Technology Data Exchange (ETDEWEB)
Jassar, S.; Zhao, L. [Department of Electrical and Computer Engineering, Ryerson University, 350 Victoria Street, Toronto, ON (Canada); Liao, Z. [Department of Architectural Science, Ryerson University (Canada)
2009-08-15
Heating systems are conventionally controlled by open-loop control systems because of the absence of practical methods for estimating average air temperature in the built environment. An inferential sensor model, based on adaptive neuro-fuzzy inference system modeling, is developed for estimating the average air temperature in multi-zone space heating systems. This modeling technique combines the expert knowledge of fuzzy inference systems (FISs) with the learning capability of artificial neural networks (ANNs). A hybrid learning algorithm, which combines the least-squares method and the back-propagation algorithm, is used to identify the parameters of the network. This paper describes an adaptive network based inferential sensor that can be used to design closed-loop control for space heating systems. The research aims to improve the overall performance of heating systems, in terms of energy efficiency and thermal comfort. The average air temperature results estimated using the developed model are in strong agreement with the experimental results. (author)
Energy Technology Data Exchange (ETDEWEB)
Coulter, R. L.; Gao, W.; Lesht, B. M.
2000-04-04
Measurements at the central facility of the Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) are intended to verify, improve, and develop parameterizations in radiative flux models that are subsequently used in General Circulation Models (GCMs). The reliability of this approach depends upon the representativeness of the local measurements at the central facility for the site as a whole, or on how these measurements can be interpreted so as to accurately represent increasingly large scales. The variation of surface energy budget terms over the SGP CART site is extremely large. Surface layer measurements of the sensible heat flux (H) often vary by a factor of 2 or more at the CART site (Coulter et al. 1996). The Planetary Boundary Layer (PBL) effectively integrates the local inputs across large scales; because the mixed layer height (h) is principally driven by H, it can, in principle, be used for estimates of surface heat flux over scales on the order of tens of kilometers. By combining measurements of h from radiosondes or radar wind profilers with a one-dimensional model of mixed layer height, the authors are investigating the possibility of diagnosing large-scale heat fluxes. They have developed a procedure using the model described by Boers et al. (1984) to investigate the effect of changes in surface sensible heat flux on the mixed layer height. The objective of the study is to invert the sense of the model.
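A one-dimensional encroachment model of the kind referred to links mixed-layer growth to the surface sensible heat flux; running it forward from H predicts h, and inverting the same relation diagnoses H from observed h. A hedged sketch (uniform stable gradient, no entrainment term, illustrative values, not the Boers et al. formulation):

```python
def mixed_layer_growth(H_series, dt, gamma, h0, rho_cp=1200.0):
    """Encroachment sketch of mixed-layer growth: dh/dt = H / (rho*cp*gamma*h),
    i.e. the surface sensible heat flux H eats into a linear
    potential-temperature gradient gamma (K/m). Forward Euler in time."""
    h = h0
    for H in H_series:
        h += dt * H / (rho_cp * gamma * h)
    return h

# constant 150 W m^-2 flux for 6 h (360 steps of 60 s) into a 4 K/km gradient,
# starting from a 200 m residual layer; all values illustrative
h_final = mixed_layer_growth([150.0] * 360, 60.0, 0.004, 200.0)   # ~1.2 km
```

Because h depends on the time-integrated H, the model naturally averages over the surface heterogeneity that makes single-tower flux measurements unrepresentative.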
RKRD: Runtime Kernel Rootkit Detection
Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.
In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity, but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability, and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.
Kernel Bayesian ART and ARTMAP.
Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan
2018-02-01
Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model, called ARTMAP, is a powerful tool for classification. Among several improvements, such as Fuzzy- or Gaussian-based models, the state-of-the-art model is the Bayesian-based one, which resolves the drawbacks of the others. However, it is known that the Bayesian approach incurs a high computational cost for high-dimensional data and large numbers of samples, and the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating Kernel Bayes' Rule (KBR) and the Correntropy Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM are able to avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computational capability to KBA and KBAM. Furthermore, the correntropy-based similarity measurement improves the noise reduction ability even in high-dimensional spaces. The simulation experiments show that KBA exhibits better self-organizing capability than BA, and KBAM provides superior classification ability compared to BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Heidrich, P; Wolfersdorf, J v; Schmidt, S; Schnieder, M
2008-01-01
This paper describes a non-invasive, non-destructive, transient inverse measurement technique that allows one to determine internal heat transfer coefficients and rib positions of real gas turbine blades from outer surface temperature measurements after a sudden flow heating. The determination of internal heat transfer coefficients is important during the design process to adjust local heat transfer to spatial thermal load. The detection of rib positions is important during production to fulfill design and quality requirements. For the analysis the one-dimensional transient heat transfer problem inside of the turbine blade's wall was solved. This solution was combined with the Levenberg-Marquardt method to estimate the unknown boundary condition by an inverse technique. The method was tested with artificial data to determine uncertainties with positive results. Then experimental testing with a reference model was carried out. Based on the results, it is concluded that the presented inverse technique could be used to determine internal heat transfer coefficients and to detect rib positions of real turbine blades.
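The inverse procedure pairs a forward wall model with Levenberg-Marquardt parameter estimation. A minimal sketch using a lumped-capacitance wall (a stand-in for the paper's 1-D transient solution, with made-up property values) and a one-parameter damped Gauss-Newton/LM iteration on synthetic data:

```python
import numpy as np

def wall_response(h, t, t_gas=600.0, t0=300.0, c_per_area=4000.0):
    """Lumped-capacitance stand-in for the 1-D wall model: outer-surface
    temperature after a sudden hot-gas flow, for internal heat transfer
    coefficient h (W m^-2 K^-1); c_per_area is wall heat capacity per area."""
    tau = c_per_area / h
    return t_gas + (t0 - t_gas) * np.exp(-t / tau)

# synthetic "measured" outer-surface temperatures for h_true = 120 W m^-2 K^-1
rng = np.random.default_rng(2)
t = np.linspace(0.0, 60.0, 31)
T_meas = wall_response(120.0, t) + rng.normal(0, 1.0, t.size)

# one-parameter damped Gauss-Newton / Levenberg-Marquardt iteration
h, lam = 50.0, 1e-2
for _ in range(50):
    r = T_meas - wall_response(h, t)                       # residuals
    J = (wall_response(h + 1e-3, t) - wall_response(h - 1e-3, t)) / 2e-3
    h += (J @ r) / ((1.0 + lam) * (J @ J))                 # damped normal equation
```

The same loop generalises to the paper's setting by swapping in the transient conduction solution and a vector of unknowns (local h values and rib positions).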
Kinoshita, M.; Kawamura, K.; Lin, W.
2015-12-01
During the Nankai Trough Seismogenic Zone Experiments (NanTroSEIZE) of the Integrated Ocean Drilling Program (IODP), the advanced piston corer temperature (APC-T) tool was used to determine in situ formation temperatures while piston coring down to ~200 m below sea floor. When the corer is fired into the formation, the temperature around the shoe increases abruptly due to frictional heating. The temperature rise due to the frictional heat at the time of penetration is 10 K or larger. We found that the frictional temperature rise (=maximum temperature) increases with increasing depth, and that its intercept at the seafloor appears to be non-zero. Frictional heat energy is proportional to the maximum temperature rise, which is confirmed by an FEM numerical simulation of a 2D cylindrical system. Here we use the result of the numerical simulation to convert the observed temperature rise into frictional heat energy. The frictional heat energy is represented as the product of the shooting length D and the shear stress (τ) between the pipe and the sediment. Assuming a Coulomb slip regime, the shear stress is expressed as: τ = τ0 + μ*(Sv − Pp), where τ0 is the cohesive stress, μ the dynamic frictional coefficient between the pipe and the sediment, Sv the normal stress at the pipe, and Pp the pore pressure. This can explain the non-zero intercept as well as the depth-dependent increase of the frictional heating observed in the APC-T data. Assuming a hydrostatic state and using the downhole bulk density data, we estimated the friction coefficient for each APC-T measurement. For comparison, we used the vane-shear strength measured on core samples to estimate the friction coefficients. The frictional coefficients μ were estimated as ranging from 0.01 to 0.06, anomalously lower than expected for shallow marine sediments. They were lower than those estimated from vane-shear data, which range from 0.05 to 0.2. Still, both estimates exhibit a significant increase in the friction coefficient at
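The Coulomb relation τ = τ0 + μ(Sv − Pp) quoted above can be inverted for μ as sketched below; the bulk-density profile and the values of τ0 and the inferred shear stress are hypothetical, chosen only to illustrate the procedure (hydrostatic pore pressure, effective overburden integrated from bulk density):

```python
import numpy as np

g, rho_w = 9.81, 1030.0                 # gravity [m/s^2], seawater density [kg/m^3]
z = np.linspace(0.0, 200.0, 201)        # depth below seafloor [m]
rho_b = 1600.0 + 1.0 * z                # hypothetical bulk-density profile [kg/m^3]

# Effective vertical stress Sv - Pp under hydrostatic pore pressure:
f = (rho_b - rho_w) * g                 # d(Sv - Pp)/dz [Pa/m]
dz = z[1] - z[0]
sigma_eff = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2.0 * dz)))

# Coulomb model tau = tau0 + mu * (Sv - Pp), inverted for mu at 100 mbsf.
tau0 = 2.0e3      # cohesive stress [Pa], hypothetical
tau_obs = 8.0e3   # shear stress inferred from the frictional temperature rise [Pa], hypothetical
mu = (tau_obs - tau0) / sigma_eff[100]  # z[100] = 100 mbsf
```

With these illustrative numbers μ comes out near 0.01, i.e. at the low end of the range reported in the abstract.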
International Nuclear Information System (INIS)
Jia Yaofeng; Huang Chunchang; Pang Jiangli; Lu Xinwei; Zhang Xu
2008-01-01
Thermal treatment is routinely applied in equivalent-dose estimation in OSL dating, with pre-heat being the main treatment. Because it can give rise to thermal transfer and thermal activation, the thermal treatment and the choice of its conditions are key factors affecting the accuracy of OSL dating. This paper combined the pre-heat and cut-heat temperatures used in routine IRSL and post-IR OSL measurements and then estimated the equivalent dose of several loess samples. The results show that the equivalent dose depends on the heating temperature, and especially on the cut-heat temperature: the equivalent dose increases with the cut-heat temperature. A plateau of equivalent dose appears for cut-heat temperatures of 200-240 degree C within the 200-300 degree C pre-heat range, where the equivalent doses estimated by IRSL and post-IR OSL are close to each other. This results from the similar direction and small magnitude of the sensitivity change of the optically stimulated signals over the measurement cycles at the combined pre-heat and cut-heat temperatures, while incomplete correction of the sensitivity change over the whole measurement cycles causes the variation of the estimated equivalent dose with the cut-heat temperature. (authors)
Directory of Open Access Journals (Sweden)
S. N. Osipov
2016-01-01
In the period from 2006 to 2013, optimization of heat-supply schemes and modernization of heating systems using expensive (200-300 US$ per 1 m) but highly effective pre-insulated pipes saved 2.7 million tons of fuel equivalent. Overall heat-energy losses in the municipal services of Belarus amounted to 17 % in March 2014, compared with 26 % in 2001 and more than 30 % in 1990. Given the multi-stage and multifactorial nature of residential-sector energy saving (electricity, heat and water supply), a reasonable estimate of the energy efficiency of residential-building operation should be expressed in tons of fuel equivalent per unit of time.
Estimation of work capacity of welded mounting joints of pipelines of heat resisting steel
International Nuclear Information System (INIS)
Gorynin, I.V.; Ignatov, V.A.; Timofeev, B.T.; Blyumin, A.A.
1982-01-01
The work capacity of circumferential welds made for connecting the Dsub(y)850 pipeline with high-pressure vessels of heat-resisting steel of the 15Kh1NMFA type has been analyzed on the basis of test results from small samples and real units. The welds were made by manual electric arc welding without subsequent heat treatment. It is shown that residual stresses in such welds do not have an essential effect on the resistance of the weld metal and heat-affected zone to the formation and development of cracks.
Davis, Robert E; Hondula, David M; Patel, Anjali P
2016-06-01
Extreme heat is a leading weather-related cause of mortality in the United States, but little guidance is available regarding how temperature variable selection impacts heat-mortality relationships. We examined how the strength of the relationship between daily heat-related mortality and temperature varies as a function of temperature observation time, lag, and calculation method. Long time series of daily mortality counts and hourly temperature for seven U.S. cities with different climates were examined using a generalized additive model. The temperature effect was modeled separately for each hour of the day (with up to 3-day lags) along with different methods of calculating daily maximum, minimum, and mean temperature. We estimated the temperature effect on mortality for each variable by comparing the 99th versus 85th temperature percentiles, as determined from the annual time series. In three northern cities (Boston, MA; Philadelphia, PA; and Seattle, WA) that appeared to have the greatest sensitivity to heat, hourly estimates were consistent with a diurnal pattern in the heat-mortality response, with strongest associations for afternoon or maximum temperature at lag 0 (day of death) or afternoon and evening of lag 1 (day before death). In warmer, southern cities, stronger associations were found with morning temperatures, but overall the relationships were weaker. The strongest temperature-mortality relationships were associated with maximum temperature, although mean temperature results were comparable. There were systematic and substantial differences in the association between temperature and mortality based on the time and type of temperature observation. Because the strongest hourly temperature-mortality relationships were not always found at times typically associated with daily maximum temperatures, temperature variables should be selected independently for each study location. In general, heat-mortality was more closely coupled to afternoon and maximum
Characterisation and final disposal behaviour of thoria-based fuel kernels in aqueous phases
International Nuclear Information System (INIS)
Titov, M.
2005-08-01
Two high-temperature reactors (AVR and THTR) operated in Germany have produced about 1 million spent fuel elements. The nuclear fuel in these reactors consists mainly of thorium-uranium mixed oxides, but pure uranium dioxide and carbide fuels were also tested. One of the possible solutions for utilising spent HTR fuel is direct disposal in deep geological formations. Under such circumstances, the properties of fuel kernels, and especially their leaching behaviour in aqueous phases, have to be investigated for safety assessments of the final repository. In the present work, unirradiated ThO2, (Th0.906,U0.094)O2, (Th0.834,U0.166)O2 and UO2 fuel kernels were investigated. The composition, crystal structure and surface of the kernels were investigated by traditional methods. Furthermore, a new method was developed for testing the mechanical properties of ceramic kernels. The method was successfully used for the examination of the mechanical properties of oxide kernels and for monitoring their evolution during contact with aqueous phases. The leaching behaviour of thoria-based oxide kernels and powders was investigated in repository-relevant salt solutions, as well as in artificial leachates. The influence of different experimental parameters on the kernel leaching stability was investigated. It was shown that thoria-based fuel kernels possess high chemical stability and are insensitive to the presence of oxidative and radiolytic species in solution. The dissolution rate of thoria-based materials is typically several orders of magnitude lower than that of conventional UO2 fuel kernels. The lifetime of a single intact (Th,U)O2 kernel under the aggressive conditions of a salt repository was estimated to be about one hundred thousand years. The importance of grain boundary quality for the leaching stability was demonstrated. Numerical Monte Carlo simulations were performed in order to explain the results of the leaching experiments. (orig.)
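The quoted ~10^5-year kernel lifetime can be sanity-checked with a shrinking-sphere estimate; the kernel radius and especially the dissolution rate below are assumed/illustrative values, not numbers from the thesis:

```python
# Shrinking-sphere lifetime of a dissolving oxide kernel: a sphere dissolving at a
# constant surface rate R [mol m^-2 s^-1] loses radius at dr/dt = R * M / rho,
# so t_life = r0 * rho / (M * R). All inputs below are assumed/illustrative.
r0 = 250e-6           # kernel radius [m] (typical HTR kernel size, assumed)
rho = 10.0e3          # ThO2 density [kg/m^3]
M = 0.264             # ThO2 molar mass [kg/mol]
R = 3.0e-12           # dissolution rate [mol m^-2 s^-1], hypothetical
t_life_yr = r0 * rho / (M * R) / 3.1536e7   # seconds -> years
```

With these inputs the lifetime is on the order of 10^5 years, consistent in magnitude with the estimate quoted in the abstract.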
Theory of reproducing kernels and applications
Saitoh, Saburou
2016-01-01
This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...
Crone, T. J.; Kinsey, J. C.; Mittelstaedt, E. L.
2017-12-01
Hydrothermal venting at mid-ocean ridges influences ocean chemistry, the thermal and chemical structure of the oceanic crust, and the evolution of unique and diverse autolithotrophically-supported ecosystems. Axially-hosted hydrothermal systems are responsible for 20-25% of the total heat flux out of Earth's interior, and likely play a large role in local as well as global biogeochemical cycles. Despite the importance of these systems, only a few studies have attempted to constrain the volume and heat flux of an entire hydrothermal vent field. In July of 2014 we used the Sentry autonomous underwater vehicle (AUV) to survey the water column over the ASHES hydrothermal vent field which is located within the caldera of Axial Seamount, an active submarine volcano located on the Juan de Fuca Ridge. To estimate the total heat and mass flux from this vent field, we equipped Sentry with a Nortek acoustic Doppler velocimeter (ADV), an inertial measurement unit (IMU), two acoustic Doppler current profilers (ADCPs), and two SBE3 temperature probes, allowing us to obtain precise measurements of fluid temperature and water velocity. The survey was designed using a control volume approach in which Sentry was pre-programmed to survey a 150-m-square centered over the vent field flying a grid pattern with 5-m track line spacing followed by a survey of the perimeter. This pattern was repeated multiple times during several 10-h dives at different altitudes, including 10, 20, 40, and 60 m above the seafloor, and during one 40-h survey at an altitude of 10 m. During the 40-h survey, the pattern was repeated nine times allowing us to obtain observations over several tidal cycles. Water velocity data obtained with Sentry were corrected for platform motion and then combined with the temperature measurements to estimate heat flux. The analysis of these data will likely provide the most accurate and highest resolution heat and mass flux estimates at a seafloor hydrothermal field to date.
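The control-volume approach described above amounts to integrating the vertical advective heat flux over the top face of the surveyed box; the plume temperature-anomaly and velocity fields below are hypothetical Gaussian stand-ins for the Sentry measurements:

```python
import numpy as np

rho, cp = 1025.0, 3990.0           # seawater density and specific heat (illustrative)
nx = 31
dx = 150.0 / (nx - 1)              # 150-m square box, 5-m grid (as in the survey)
x = np.linspace(0.0, 150.0, nx)
X, Y = np.meshgrid(x, x)

# Hypothetical plume: Gaussian temperature excess [K] and upward velocity [m/s].
r2 = (X - 75.0) ** 2 + (Y - 75.0) ** 2
T_anom = 0.05 * np.exp(-r2 / (2.0 * 20.0 ** 2))
w = 0.10 * np.exp(-r2 / (2.0 * 20.0 ** 2))

# Control-volume estimate: advective heat flux through the top face.
H = np.sum(rho * cp * T_anom * w) * dx * dx    # W
H_MW = H / 1e6
```

These made-up fields give a flux of a few tens of MW, the order of magnitude typically discussed for whole vent fields; the real analysis must also correct velocities for vehicle motion and tides.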
ESTIMATION OF WORKING CONDITIONS OF FOUNDRY WORKERS BY INFRARED (HEAT) RADIATION
Directory of Open Access Journals (Sweden)
A. M. Lazarenkov
2010-01-01
Full Text Available The description of infrared radiation and its influence on the human organism is given. The results of an investigation of the intensity of infrared (heat) radiation affecting workers in foundries are presented.
Estimation of peak heat flux onto the targets for CFETR with extended divertor leg
International Nuclear Information System (INIS)
Zhang, Chuanjia; Chen, Bin; Xing, Zhe; Wu, Haosheng; Mao, Shifeng; Luo, Zhengping; Peng, Xuebing; Ye, Minyou
2016-01-01
Highlights: • A hypothetical geometry is assumed to extend the outer divertor leg in CFETR. • A density-scan SOLPS simulation is performed to study the peak heat flux onto the targets. • The attached–detached regime transition in the outer divertor occurs at a lower puffing rate. • An unexpected delay of the attached–detached regime transition occurs in the inner divertor. - Abstract: The China Fusion Engineering Test Reactor (CFETR) is now in its conceptual design phase. CFETR is proposed as a complement to ITER for the demonstration of fusion energy. The divertor is a crucial component that faces the plasma and handles a huge heat power for CFETR and future fusion reactors. To explore an effective way of exhausting heat, various methods to reduce the heat flux onto the divertor targets should be considered for CFETR. In this work, the effect of an extended outer divertor leg on the peak heat flux is studied. The magnetic configuration of the long-leg divertor is obtained by EFIT and the Tokamak Simulation Code (TSC), while a hypothetical geometry is assumed to extend the outer divertor leg as far as possible inside the vacuum vessel. A SOLPS simulation is performed to study the peak heat flux of the long-leg divertor for CFETR. D2 gas puffing is used, and increasing the puffing rate increases the plasma density. Peak heat fluxes below 10 MW/m2 onto both the inner and outer targets are achieved. A comparison between the peak heat fluxes of the long-leg and conventional divertors shows that the attached–detached regime transition of the outer divertor occurs at a lower gas puffing rate for the long-leg divertor, while for the inner divertor, even though the configuration is almost the same, the situation is the opposite.
Estimating Summer Ocean Heating in the Arctic Ice Pack Using High-Resolution Satellite Imagery
2014-09-01
Only front matter survived extraction for this record: a table-of-contents entry for the Beaufort Sea marginal ice zone and the sea ice-albedo feedback, and figure captions describing the seasonal evolution of sea ice albedo for multi-year ice (MYI) and first-year ice (FYI), the daily solar heat input, and the extensive decrease of summer ice cover (from Lee et al. 2012).
Convergence of barycentric coordinates to barycentric kernels
Kosinka, Jiří
2016-02-12
We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.
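As a concrete instance of the barycentric coordinates discussed above, the following sketch implements Floater's mean value coordinates, one of the standard families for convex polygons; it is an illustration of the coordinate concept, not the paper's limit construction:

```python
import numpy as np

def mean_value_coords(poly, x):
    """Mean value coordinates of point x w.r.t. a convex CCW polygon poly (n x 2)."""
    d = poly - x                        # vectors from x to the vertices
    r = np.linalg.norm(d, axis=1)       # distances to the vertices
    n = len(poly)
    t = np.empty(n)                     # t[i] = tan(alpha_i / 2), angle at x
    for i in range(n):
        j = (i + 1) % n
        cross = d[i, 0] * d[j, 1] - d[i, 1] * d[j, 0]
        dot = d[i] @ d[j]
        t[i] = cross / (r[i] * r[j] + dot)
    w = np.array([(t[i - 1] + t[i]) / r[i] for i in range(n)])
    return w / w.sum()                  # normalize to a partition of unity

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
lam = mean_value_coords(square, np.array([0.3, 0.6]))
```

The two defining properties checked below (partition of unity and linear precision, i.e. the coordinates reproduce the point itself) are exactly the properties preserved in the limit to barycentric kernels.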
Kernel principal component analysis for change detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Morton, J.C.
2008-01-01
region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially.
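A Gaussian-kernel PCA of the kind used above can be sketched as follows; the two-band toy data with a few artificial "change" outliers is an assumption for illustration, not the paper's imagery:

```python
import numpy as np

def kernel_pca(X, sigma, n_comp=2):
    """Project X onto the leading components of Gaussian-kernel PCA."""
    sq = np.sum(X ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T      # pairwise squared distances
    K = np.exp(-D2 / (2.0 * sigma ** 2))                # Gaussian kernel matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                      # double-centred kernel
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_comp]               # leading eigenpairs
    vals, vecs = vals[idx], vecs[:, idx]
    return (Kc @ vecs) / np.sqrt(np.abs(vals))          # feature-space projections

rng = np.random.default_rng(1)
# Toy "two acquisitions": bands mostly equal (no change) plus a few change pixels.
no_change = rng.normal(0.0, 1.0, (200, 1)) * np.ones((1, 2)) \
            + rng.normal(0.0, 0.05, (200, 2))
change = rng.uniform(-3.0, 3.0, (10, 2)) * np.array([1.0, -1.0])
X = np.vstack([no_change, change])
Z = kernel_pca(X, sigma=1.0)
```

As in linear PCA of bi-temporal bands, the no-change pixels concentrate along the first component, and departures from it flag change; the kernel makes this work for nonlinear no-change relations.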
Kudo, Rei; Nishizawa, Tomoaki; Aoyagi, Toshinori
2016-07-01
The SKYLIDAR algorithm was developed to estimate vertical profiles of aerosol optical properties from sky radiometer (SKYNET) and lidar (AD-Net) measurements. The solar heating rate was also estimated from the SKYLIDAR retrievals. The algorithm consists of two retrieval steps: (1) columnar properties are retrieved from the sky radiometer measurements and the vertically mean depolarization ratio obtained from the lidar measurements and (2) vertical profiles are retrieved from the lidar measurements and the results of the first step. The derived parameters are the vertical profiles of the size distribution, refractive index (real and imaginary parts), extinction coefficient, single-scattering albedo, and asymmetry factor. Sensitivity tests were conducted by applying the SKYLIDAR algorithm to the simulated sky radiometer and lidar data for vertical profiles of three different aerosols, continental average, transported dust, and pollution aerosols. The vertical profiles of the size distribution, extinction coefficient, and asymmetry factor were well estimated in all cases. The vertical profiles of the refractive index and single-scattering albedo of transported dust, but not those of transported pollution aerosol, were well estimated. To demonstrate the performance and validity of the SKYLIDAR algorithm, we applied the SKYLIDAR algorithm to the actual measurements at Tsukuba, Japan. The detailed vertical structures of the aerosol optical properties and solar heating rate of transported dust and smoke were investigated. Examination of the relationship between the solar heating rate and the aerosol optical properties showed that the vertical profile of the asymmetry factor played an important role in creating vertical variation in the solar heating rate. We then compared the columnar optical properties retrieved with the SKYLIDAR algorithm to those produced with the more established scheme SKYRAD.PACK, and the surface solar irradiance calculated from the SKYLIDAR
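The solar heating rate discussed above follows from the vertical divergence of the net flux, dT/dt = (1/(ρ c_p)) ∂F/∂z; the flux and air-density profiles below are hypothetical stand-ins for the retrieved profiles:

```python
import numpy as np

cp = 1004.0                           # specific heat of dry air [J kg^-1 K^-1]
z = np.linspace(0.0, 5000.0, 501)     # altitude [m]
rho = 1.2 * np.exp(-z / 8500.0)       # approximate air-density profile [kg/m^3]

# Hypothetical net downward solar flux: an absorbing aerosol layer near 2.5 km
# removes ~30 W/m^2 (all numbers illustrative).
F = 800.0 - 30.0 / (1.0 + np.exp((z - 2500.0) / 200.0))

# Radiative heating rate from flux divergence: dT/dt = (1/(rho*cp)) dF/dz.
heating = np.gradient(F, z) / (rho * cp)      # K/s
heating_per_day = heating * 86400.0           # K/day
```

The resulting heating peaks at a few K/day inside the absorbing layer, the magnitude range typically reported for dust and smoke layers.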
A Bayesian inference approach: estimation of heat flux from fin for ...
Indian Academy of Sciences (India)
Harsha Kumar
2018-04-16
Apr 16, 2018 ... The effect of a-priori information on the estimated parameter is also addressed. .... approximation is incorporated to account for the density change as a linear .... estimation, hypothesis testing, decision making and selection of ...
Process for producing metal oxide kernels and kernels so obtained
International Nuclear Information System (INIS)
Lelievre, Bernard; Feugier, Andre.
1974-01-01
The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high-temperature nuclear reactors. The process consists in adding, to an aqueous solution of at least one metal salt (particularly actinide nitrates), at least one chemical compound capable of releasing ammonia, then dispersing the resulting solution drop by drop into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gelation is a mixture of two organic liquids, one acting as a solvent and the other being a product capable of extracting the anions from the metal salt of the drop at the time of gelling. Preferably an amine is used as the anion-extracting product. Additionally, an alcohol that causes a partial dehydration of the drops can be employed as the solvent, thus helping to increase the resistance of the particles [fr]
Adaptive Learning in Cartesian Product of Reproducing Kernel Hilbert Spaces
Yukawa, Masahiro
2014-01-01
We propose a novel adaptive learning algorithm based on iterative orthogonal projections in the Cartesian product of multiple reproducing kernel Hilbert spaces (RKHSs). The task is estimating/tracking nonlinear functions which are supposed to contain multiple components such as (i) linear and nonlinear components, (ii) high- and low- frequency components etc. In this case, the use of multiple RKHSs permits a compact representation of multicomponent functions. The proposed algorithm is where t...
Hilbertian kernels and spline functions
Atteia, M
1992-01-01
In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline functions theory. The origin of the book was an effort to show that spline theory parallels Hilbertian Kernel theory, not only for splines derived from minimization of a quadratic functional but more generally for splines considered as piecewise functions type. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.
Heat-related deaths in hot cities: estimates of human tolerance to high temperature thresholds.
Harlan, Sharon L; Chowell, Gerardo; Yang, Shuo; Petitti, Diana B; Morales Butler, Emmanuel J; Ruddell, Benjamin L; Ruddell, Darren M
2014-03-20
In this study we characterized the relationship between temperature and mortality in central Arizona desert cities that have an extremely hot climate. Relationships between daily maximum apparent temperature (ATmax) and mortality for eight condition-specific causes and all-cause deaths were modeled for all residents and separately for males and females ages <65 and ≥65. The most robust relationship was between ATmax on day of death and mortality from direct exposure to high environmental heat. For this condition-specific cause of death, the heat thresholds in all gender and age groups (ATmax = 90-97 °F; 32.2-36.1 °C) were below local median seasonal temperatures in the study period (ATmax = 99.5 °F; 37.5 °C). Heat threshold was defined as the ATmax at which the mortality ratio begins an exponential upward trend. Thresholds were identified in younger and older females for cardiac disease/stroke mortality (ATmax = 106 and 108 °F; 41.1 and 42.2 °C) with a one-day lag. Thresholds were also identified for mortality from respiratory diseases in older people (ATmax = 109 °F; 42.8 °C) and for all-cause mortality in females (ATmax = 107 °F; 41.7 °C) and males <65 years (ATmax = 102 °F; 38.9 °C). Heat-related mortality in a region that has already made some adaptations to predictable periods of extremely high temperatures suggests that more extensive and targeted heat-adaptation plans for climate change are needed in cities worldwide.
Lu, Zhao; Sun, Jing; Butts, Kenneth
2016-02-03
A giant leap has been made in the past couple of decades with the introduction of kernel-based learning as a mainstay for designing effective nonlinear computational learning algorithms. In view of the geometric interpretation of conditional expectation and the ubiquity of multiscale characteristics in highly complex nonlinear dynamic systems [1]-[3], this paper presents a new orthogonal projection operator wavelet kernel, aiming at developing an efficient computational learning approach for nonlinear dynamical system identification. In the framework of multiresolution analysis, the proposed projection operator wavelet kernel can fulfill the multiscale, multidimensional learning to estimate complex dependencies. The special advantage of the projection operator wavelet kernel developed in this paper lies in the fact that it has a closed-form expression, which greatly facilitates its application in kernel learning. To the best of our knowledge, it is the first closed-form orthogonal projection wavelet kernel reported in the literature. It provides a link between grid-based wavelets and mesh-free kernel-based methods. Simulation studies for identifying the parallel models of two benchmark nonlinear dynamical systems confirm its superiority in model accuracy and sparsity.
Estimating the CO2 mitigation potential of horizontal Ground Source Heat Pumps in the UK
Garcia-Gonzalez, R.; Verhoef, A.; Vidale, P. L.; Gan, G.; Chong, A.; Clark, D.
2012-04-01
By 2020, the UK will need to generate 15% of its energy from renewables to meet its contribution to the EU renewable energy target. Heating and cooling systems of buildings account for 30%-50% of global energy consumption; thus, alternative low-carbon technologies such as horizontal Ground Coupled Heat Pumps (GCHPs) can contribute to the reduction of anthropogenic CO2 emissions. Horizontal GCHPs currently represent a small fraction of total energy generation in the UK. However, the fact that semi-detached and detached dwellings represent approximately 40% of the total housing stock in the UK could make the widespread implementation of this technology particularly attractive and could significantly increase the UK's renewable energy generation potential. Using a simulation model, we analysed the dynamic interactions between the environment, the horizontal GCHP heat exchanger and typical UK dwellings, as well as their combined effect on heat pump performance and CO2 mitigation potential. For this purpose, a land surface model (JULES, Joint UK Land Environment Simulator), which calculates coupled soil heat and water fluxes, was combined with a heat extraction model. The analyses took into account the spatio-temporal variability of soil properties (thermal and hydraulic) and meteorological variables, as well as different horizontal GCHP configurations and a variety of building loads and heat demands. Sensitivity tests were performed for four sites in the UK with different climate and soil properties. Our results show that an installation depth of 1.0 m would give higher heat extraction rates; however, it would be preferable to install the pipes slightly deeper to avoid the seasonal influence of variable meteorological conditions. A value of 1.5 m for the spacing between coils (S) for a slinky configuration type is recommended to avoid thermal disturbances between neighbouring coils. We also found that for larger values of the spacing between the coils
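The recommendation above to bury pipes slightly deeper than 1.0 m to escape seasonal influence can be related to the classical harmonic soil-temperature solution, whose annual amplitude decays as exp(-z/d) with damping depth d = sqrt(2α/ω); the diffusivity and surface amplitude below are assumed typical values, not the paper's site data:

```python
import numpy as np

alpha = 5.0e-7                              # soil thermal diffusivity [m^2/s], typical assumed value
omega = 2.0 * np.pi / (365.0 * 86400.0)     # annual angular frequency [rad/s]
d = np.sqrt(2.0 * alpha / omega)            # damping depth [m]

A0 = 8.0                                    # annual surface temperature amplitude [K], illustrative

def amplitude(z):
    """Annual temperature amplitude at burial depth z, from the harmonic solution
    T(z, t) = Tmean + A0 * exp(-z / d) * sin(omega * t - z / d)."""
    return A0 * np.exp(-z / d)
```

For these values d is roughly 2 m, so moving the pipe from 1.0 m to 1.5 m noticeably reduces the seasonal swing the heat exchanger sees.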
Dense Medium Machine Processing Method for Palm Kernel/ Shell ...
African Journals Online (AJOL)
ADOWIE PERE
Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.
Mitigation of artifacts in rtm with migration kernel decomposition
Zhan, Ge; Schuster, Gerard T.
2012-01-01
The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently
Seo, Yongsuk; DiLeo, Travis; Powell, Jeffrey B; Kim, Jung-Hyun; Roberge, Raymond J; Coca, Aitor
2016-08-01
Monitoring and measuring core body temperature is important to prevent or minimize physiological strain and cognitive dysfunction for workers such as first responders (e.g., firefighters) and military personnel. The purpose of this study is to compare estimated core body temperature (Tco-est), determined from heart rate (HR) data recorded by a wearable chest-strap physiology monitor, to standard rectal thermometry (Tre) under different conditions. Tco-est and Tre measurements were obtained in thermoneutral and heat stress conditions (high temperature and relative humidity) during four different experiments: treadmill exercise, cycling exercise, passive heat stress, and treadmill exercise while wearing personal protective equipment (PPE). Overall, the mean Tco-est did not differ significantly from Tre across the four conditions. During exercise at low-moderate work rates under heat stress conditions, Tco-est was consistently higher than Tre at all time points. Tco-est underestimated temperature compared to Tre at rest in heat stress conditions and at a low work rate under heat stress while wearing PPE. The mean differences between the two measurements ranged from -0.1 ± 0.4 to 0.3 ± 0.4°C, and Tco-est correlated well with HR (r = 0.795 - 0.849) and mean body temperature (r = 0.637 - 0.861). These results indicate that the comparison of Tco-est to Tre may result in over- or underestimation, which could possibly lead to heat-related illness during monitoring in certain conditions. Modifications to the current algorithm should be considered to address such issues.
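Estimating core temperature from heart rate is commonly done with a Kalman filter over an HR-to-core-temperature observation model; the linear model coefficients and noise variances below are hypothetical illustrations and do not reproduce the algorithm evaluated in the study:

```python
import numpy as np

# Illustrative linear observation model HR = a + b * Tc; coefficients and noise
# variances are hypothetical, not those of any published algorithm.
a, b = -450.0, 14.0          # maps core temperature [deg C] to heart rate [bpm]
q, r = 5.0e-4, 18.0          # process / observation noise variances (assumed)

def kf_core_temp(hr_series, tc0=37.0, p0=0.1):
    """Scalar Kalman filter: random-walk core temperature observed through HR."""
    tc, p, out = tc0, p0, []
    for hr in hr_series:
        p = p + q                           # predict (random-walk state)
        K = p * b / (b * b * p + r)         # Kalman gain
        tc = tc + K * (hr - (a + b * tc))   # update with the HR innovation
        p = (1.0 - K * b) * p
        out.append(tc)
    return np.array(out)

# Synthetic check: core temperature ramps 37 -> 38.5 deg C, HR follows with noise.
rng = np.random.default_rng(2)
tc_true = np.linspace(37.0, 38.5, 600)
hr = a + b * tc_true + rng.normal(0.0, 3.0, 600)
tc_est = kf_core_temp(hr)
```

The filter smooths beat-to-beat HR noise while tracking the slow core-temperature trend, which is the basic mechanism behind chest-strap Tco-est outputs.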
Anthropogenic Heat Flux in South African Cities: Initial estimates from the LUCY model
CSIR Research Space (South Africa)
Padayachi, Yerdashin R
2016-10-01
The anthropogenic heat fluxes (AHF) from buildings, transport and people are an essential component of the urban climate within cities. Presently limited information on the AHF in South African cities exists. This study quantifies the AHF in South...
Evaluation of procedures for estimation of the isosteric heat of adsorption in microporous materials
Krishna, R.
2014-01-01
The major objective of this communication is to evaluate procedures for estimation of the isosteric heat of adsorption, Qst, in microporous materials such as zeolites, metal organic frameworks (MOFs), and zeolitic imidazolate frameworks (ZIFs). For this purpose we have carefully analyzed published experimental
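The isosteric heat is commonly obtained from isotherms at two temperatures via the Clausius-Clapeyron relation at constant loading; a minimal sketch of that standard procedure (not the paper's own analysis), assuming pressures p1 and p2 give the same loading on isotherms at T1 and T2:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def isosteric_heat(p1, T1, p2, T2):
    """Clausius-Clapeyron estimate of the isosteric heat Qst at fixed
    loading q, from the pressures p1, p2 that give the same loading on
    isotherms at temperatures T1, T2 (in K):
        Qst = R * ln(p2 / p1) / (1/T1 - 1/T2)."""
    return R * np.log(p2 / p1) / (1.0 / T1 - 1.0 / T2)

# Round trip: pressures generated from a known Qst recover that Qst.
Qst_true = 20000.0  # J/mol (hypothetical value)
T1, T2, p1 = 300.0, 320.0, 1.0e4
p2 = p1 * np.exp(Qst_true * (1.0 / T1 - 1.0 / T2) / R)
```

In practice the pressures at equal loading are interpolated from fitted isotherm models, which is where the procedures evaluated in the communication differ.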
Application of optimal estimation techniques to FFTF decay heat removal analysis
International Nuclear Information System (INIS)
Nutt, W.T.; Additon, S.L.; Parziale, E.A.
1979-01-01
The verification and adjustment of plant models for decay heat removal analysis using a mix of engineering judgment and formal techniques from control theory are discussed. The formal techniques facilitate dealing with typical test data which are noisy, redundant and do not measure all of the plant model state variables directly. Two pretest examples are presented. 5 refs
Heat-Related Deaths in Hot Cities: Estimates of Human Tolerance to High Temperature Thresholds
Directory of Open Access Journals (Sweden)
Sharon L. Harlan
2014-03-01
In this study we characterized the relationship between temperature and mortality in central Arizona desert cities that have an extremely hot climate. Relationships between daily maximum apparent temperature (ATmax) and mortality for eight condition-specific causes and all-cause deaths were modeled for all residents and separately for males and females ages <65 and ≥65 during the months May–October for years 2000–2008. The most robust relationship was between ATmax on day of death and mortality from direct exposure to high environmental heat. For this condition-specific cause of death, the heat thresholds in all gender and age groups (ATmax = 90–97 °F; 32.2‒36.1 °C) were below local median seasonal temperatures in the study period (ATmax = 99.5 °F; 37.5 °C). Heat threshold was defined as the ATmax at which the mortality ratio begins an exponential upward trend. Thresholds were identified in younger and older females for cardiac disease/stroke mortality (ATmax = 106 and 108 °F; 41.1 and 42.2 °C) with a one-day lag. Thresholds were also identified for mortality from respiratory diseases in older people (ATmax = 109 °F; 42.8 °C) and for all-cause mortality in females (ATmax = 107 °F; 41.7 °C) and males <65 years (ATmax = 102 °F; 38.9 °C). Heat-related mortality in a region that has already made some adaptations to predictable periods of extremely high temperatures suggests that more extensive and targeted heat-adaptation plans for climate change are needed in cities worldwide.
International Nuclear Information System (INIS)
Herszage, A.; Toren, M.
1998-01-01
Estimation of operating conditions for fossil fuel boiler heat exchangers is often required due to changes in working conditions, design modifications and especially for monitoring performance and failure diagnosis. Regular heat exchangers in fossil fuel boilers are composed of tube banks through which water or steam flow, while hot combustion (flue) gases flow outside the tubes. This work presents a top-down approach to operating conditions estimation based on field measurements. An example for a 350 MW unit superheater is thoroughly discussed. Integral calculations based on measurements for all unit heat exchangers (reheaters, superheaters) were performed first. Based on these calculations a scheme of integral conservation equations (lumped parameter) was then formulated at the single tube level. Steady state temperatures of superheater tube walls were obtained as a main output, and were compared to the maximum allowable operating temperatures of the tubes material. A combined lumped parameter - CFD (Computational Fluid Dynamics, FLUENT code) approach constitutes an efficient tool in certain cases. A brief report of such a case is given for another unit superheater. We conclude that steady state evaluations based on both integral and detailed simulations are a valuable monitoring and diagnosis tool for the power generation industry
Ranking Support Vector Machine with Kernel Approximation
Directory of Open Access Journals (Sweden)
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among others. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
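Of the two kernel approximation methods named in the abstract, random Fourier features are easy to sketch; a minimal, illustrative version (not the authors' implementation) that approximates an RBF kernel by an explicit feature map, so that linear methods can replace kernel-matrix computations:

```python
import numpy as np

def random_fourier_features(X, n_features=100, gamma=1.0, seed=0):
    """Explicit feature map z(.) with z(x) @ z(y) ~ exp(-gamma * ||x-y||^2),
    i.e. a Monte Carlo approximation of the RBF kernel."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the spectral density of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# With enough features, Z @ Z.T approaches the exact kernel matrix.
X = np.random.default_rng(1).normal(size=(50, 5))
Z = random_fourier_features(X, n_features=5000, gamma=0.5, seed=2)
K_approx = Z @ Z.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K_exact = np.exp(-0.5 * sq)  # K_approx - K_exact has small entries
```

Training then proceeds on Z with a linear solver (e.g. the primal truncated Newton method mentioned in the abstract), avoiding the O(n²) kernel matrix entirely.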
Sentiment classification with interpolated information diffusion kernels
Raaijmakers, S.
2007-01-01
Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state-of-the-art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of
Evolution kernel for the Dirac field
International Nuclear Information System (INIS)
Baaquie, B.E.
1982-06-01
The evolution kernel for the free Dirac field is calculated using Wilson lattice fermions. We discuss the difficulties that have prevented this calculation from being performed previously in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)
Improving the Bandwidth Selection in Kernel Equating
Andersson, Björn; von Davier, Alina A.
2014-01-01
We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
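Silverman's rule of thumb, which the authors adapt for bandwidth selection in equating, has a simple closed form; a sketch of the general-purpose univariate rule for a Gaussian kernel (the equating-specific adaptation is not reproduced here):

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb for a Gaussian kernel:
    h = 0.9 * min(sample std, IQR / 1.34) * n**(-1/5)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    q75, q25 = np.percentile(x, [75, 25])
    sigma = min(x.std(ddof=1), (q75 - q25) / 1.34)
    return 0.9 * sigma * n ** (-0.2)

# For a standard-normal sample of size 1000, h is roughly 0.9 * 1000**-0.2.
sample = np.random.default_rng(0).normal(size=1000)
h = silverman_bandwidth(sample)
```

Unlike penalty-function minimization, this gives the bandwidth in one closed-form step, which is the practical appeal noted in the abstract.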
Kernel Korner : The Linux keyboard driver
Brouwer, A.E.
1995-01-01
Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the
International Nuclear Information System (INIS)
Prasad, S.K.; Anilkumar, S.; Vajpayee, L.K.; Belhe, M.S.; Yadav, R.K.B.; Deolekar, S.S.
2012-01-01
The primary coolant heat exchanger (PHEx) of the Apsara reactor was in operation for 53 years; as part of the partial decommissioning of Apsara, the PHEx was decommissioned and disposed of as active waste. The long-lived radionuclides deposited in the SS tubes inside the heat exchanger were assessed by taking scrape samples and by an in situ gamma spectrometry technique employing a NaI(Tl) detector. The data obtained by experimental measurements were validated by the Monte Carlo simulation method. The present studies showed that 137Cs and 144Ce were the major isotopes deposited on the SS tubes of the heat exchanger. In this paper the authors describe the details of the methodology adopted for the assessment of the radioactivity content and the results obtained. This gives a reliable method to estimate the activity disposed of, for waste management accounting purposes, in a long and heavy reactor component. The upper bound of the total activity in the PHEx was 39.0 μCi. (author)
Metabolic network prediction through pairwise rational kernels.
Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian
2014-09-26
Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are series of biochemical reactions in which the product (output) of one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes, which propagates error accumulation when the pathways are predicted from incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify a new pair of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amounts of sequence information such as protein essentiality, natural language processing and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels (PRKs)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVMs to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy
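The pairwise-kernel idea underlying the approach can be illustrated with the standard symmetrized construction; a sketch with an arbitrary base kernel k (the paper's PRKs instead compute k with weighted finite-state transducers, which are not reproduced here):

```python
import numpy as np

def pairwise_kernel(k, pair1, pair2):
    """Symmetrized pairwise kernel built from a base kernel k:
    K((a, b), (c, d)) = k(a, c) * k(b, d) + k(a, d) * k(b, c),
    so the score is invariant to the order within each pair."""
    a, b = pair1
    c, d = pair2
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

# Example with a linear base kernel on toy feature vectors.
k_lin = lambda x, y: float(np.dot(x, y))
a, b, c, d = np.eye(4)  # four unit vectors standing in for entities
score = pairwise_kernel(k_lin, (a, b), (c, d))
```

A pairwise SVM then uses K as its kernel over pairs of entities, e.g. pairs of enzymes that may share a pathway.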
Dose calculation methods in photon beam therapy using energy deposition kernels
International Nuclear Information System (INIS)
Ahnesjoe, A.
1991-01-01
The problem of calculating accurate dose distributions in treatment planning of megavoltage photon radiation therapy has been studied. New dose calculation algorithms using energy deposition kernels have been developed. The kernels describe the transfer of energy by secondary particles from a primary photon interaction site to its surroundings. Monte Carlo simulations of particle transport have been used for derivation of kernels for primary photon energies from 0.1 MeV to 50 MeV. The trade-off between accuracy and calculation speed has been addressed by the development of two algorithms: one point-oriented, with low computational overhead for interactive use, and one for fast and accurate calculation of dose distributions in a 3-dimensional lattice. The latter algorithm models secondary particle transport in heterogeneous tissue by scaling energy deposition kernels with the electron density of the tissue. The accuracy of the methods has been tested using full Monte Carlo simulations for different geometries, and found to be superior to conventional algorithms based on scaling of broad-beam dose distributions. Methods have also been developed for characterization of clinical photon beams in quantities appropriate for kernel-based calculation models. By approximating the spectrum as laterally invariant, an effective spectrum and a dose distribution for contaminating charged particles are derived from depth dose distributions measured in water, using analytical constraints. The spectrum is used to calculate kernels by superposition of monoenergetic kernels. The lateral energy fluence distribution is determined by deconvolving measured lateral dose distributions with a corresponding pencil beam kernel. Dose distributions for contaminating photons are described using two different methods: one for estimation of the dose outside the collimated beam, and the other for calibration of output factors derived from kernel-based dose calculations. (au)
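The superposition idea in the abstract can be illustrated in one dimension; a toy sketch with hypothetical attenuation and kernel parameters (clinical algorithms work in 3D with density-scaled kernels, which this does not attempt):

```python
import numpy as np

# Depth axis (cm) and a hypothetical primary-photon attenuation coefficient.
x = np.linspace(0.0, 20.0, 201)
mu = 0.05
terma = np.exp(-mu * x)  # total energy released per unit mass by primaries

# Hypothetical energy-deposition kernel, normalized so that all released
# energy is deposited somewhere along the axis.
u = np.linspace(-5.0, 5.0, 101)
kernel = np.exp(-np.abs(u) / 0.8)
kernel /= kernel.sum()

# Dose = superposition (convolution) of TERMA with the deposition kernel.
dose = np.convolve(terma, kernel, mode="same")
```

The 3D lattice algorithm in the thesis performs this superposition over volume elements, with the kernel stretched or compressed according to local electron density.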
Chehbouni, A.; Nichols, W. D.; Qi, J.; Njoku, E. G.; Kerr, Y. H.; Cabot, F.
1994-01-01
The accurate partitioning of available energy into sensible and latent heat flux is crucial to the understanding of surface-atmosphere interactions. This issue is more complicated in arid and semi-arid regions, where the relative contributions to surface fluxes from the soil and vegetation may vary significantly throughout the day and throughout the season. A three-component model to estimate sensible heat flux over heterogeneous surfaces is presented. The surface was represented with two adjacent compartments: the first is made up of two components, shrubs and shaded soil; the second of open 'illuminated' soil. Data collected at two different sites in Nevada (U.S.) during the summers of 1991 and 1992 were used to evaluate model performance. The results show that the present model is sufficiently general to yield satisfactory results for both sites.
International Nuclear Information System (INIS)
Jia Yaofeng; Huang Chunchang; Pang Jiangli; Lu Xinwei; Zhang Xu
2007-01-01
Through various arrangements of pre-heat and cut-heat temperatures in the equivalent-dose estimation of Holocene loess using a Double-SAR dating protocol, the paper estimated the equivalent doses of several loess samples by application of the IRSL and post-IR OSL signals, respectively. The measured results show that the equivalent dose depends on the heating temperature, and especially on the cut-heat temperature: the equivalent dose increases with the cut-heat temperature. A plateau of equivalent dose appears at pre-heat temperatures of 200-300 °C and cut-heat temperatures of 200-240 °C; furthermore, the equivalent doses estimated by the IRSL and post-IR OSL signals are close to each other, which results from the similar directions of sensitivity change of the optically stimulated signals and their small ranges of change in the measurement cycles using the various pre-heat and cut-heat temperatures. This suggests that pre-heat temperatures of 200-300 °C and cut-heat temperatures of 200-240 °C are suitable for dating young Holocene loess samples. (authors)
Fission yields data generation and benchmarks of decay heat estimation of a nuclear fuel
Gil, Choong-Sup; Kim, Do Heon; Yoo, Jae Kwon; Lee, Jounghwa
2017-09-01
Fission yield data in ENDF-6 format for 235U, 239Pu, and several other actinides, dependent on incident neutron energy, have been generated using the GEF code. In addition, fission yield data libraries for the ORIGEN-S and ORIGEN-ARP modules of the SCALE code have been generated with the new data. For validation, the decay heats calculated by ORIGEN-S using the new fission yield data have been compared with measured data in this study. Fission yield ORIGEN-S libraries based on ENDF/B-VII.1, JEFF-3.1.1, and JENDL/FPY-2011 have also been generated, and decay heats were calculated using these libraries for analyses and comparisons.
Probabilistic wind power forecasting based on logarithmic transformation and boundary kernel
International Nuclear Information System (INIS)
Zhang, Yao; Wang, Jianxue; Luo, Xu
2015-01-01
Highlights: • Quantitative information on the uncertainty of wind power generation. • Kernel density estimator provides non-Gaussian predictive distributions. • Logarithmic transformation reduces the skewness of wind power density. • Boundary kernel method eliminates the density leakage near the boundary. - Abstract: Probabilistic wind power forecasting not only produces the expectation of wind power output, but also gives quantitative information on the associated uncertainty, which is essential for making better decisions about power system and market operations with the increasing penetration of wind power generation. This paper presents a novel kernel density estimator for probabilistic wind power forecasting, addressing two characteristics of wind power which have adverse impacts on forecast accuracy, namely, the heavily skewed and double-bounded nature of wind power density. Logarithmic transformation is used to reduce the skewness of wind power density, which improves the effectiveness of the kernel density estimator in the transformed scale. The transformation partially relieves the boundary effect problem of the kernel density estimator caused by the double-bounded nature of wind power density. However, the case study shows that there are still serious problems of density leakage after the transformation. To solve this problem in the transformed scale, a boundary kernel method is employed to eliminate the density leakage at the bounds of the wind power distribution. The improvement of the proposed method over the standard kernel density estimator is demonstrated by short-term probabilistic forecasting results based on data from an actual wind farm. Then, a detailed comparison is carried out between the proposed method and some existing probabilistic forecasting methods
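The logarithmic-transformation step described in the abstract can be sketched as follows; the boundary kernel correction applied in the transformed scale is omitted here, so this is only a partial, illustrative version for a positive-valued variable:

```python
import numpy as np

def log_transform_kde(samples, grid, h=None):
    """Gaussian KDE of a positive variable via log transformation:
    estimate the density of y = log(x), then map back to the original
    scale with the Jacobian 1/x.  The boundary-kernel correction used
    in the paper is not included in this sketch."""
    y = np.log(samples)
    if h is None:  # normal-reference bandwidth in the transformed scale
        h = 1.06 * y.std(ddof=1) * y.size ** (-0.2)
    g = np.log(grid)
    u = (g[:, None] - y[None, :]) / h
    f_y = np.exp(-0.5 * u**2).sum(axis=1) / (y.size * h * np.sqrt(2 * np.pi))
    return f_y / grid  # Jacobian of the inverse transform

# Example: density of a lognormal sample on a positive grid.
rng = np.random.default_rng(3)
grid = np.linspace(0.05, 20.0, 1500)
f = log_transform_kde(rng.lognormal(0.0, 0.5, size=2000), grid)
```

The transformation removes the skewness in the working scale; the paper's boundary kernel then handles the residual density leakage at the physical limits of wind power output.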
On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI
Córcoles, Juan; Zastrow, Earl; Kuster, Niels
2017-06-01
The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated by the inclusion of medical implants within the body. The worst-case RF-heating scenario is achieved when the local tissue deposition in the at-risk region (generally in the vicinity of the implant electrodes) reaches its maximum value while the MRI exposure is compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimating the worst-case RF-induced heating in a multi-channel MRI environment, based on maximizing the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may underestimate the worst-case RF-heating scenario when there is a large number of RF transmit channels and multiple SAR or power constraints must be satisfied. Finally, this work derives a rigorous SAR-based formulation to estimate a preferable worst case, solved by casting a semidefinite programming relaxation of the original non-convex problem, whose solution closely approximates the true worst case including all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil for a patient with a deep-brain stimulator under a head imaging exposure are provided as illustrative examples.
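The "common approach" the paper reviews, maximizing a ratio of two Hermitian forms via a generalized eigenvalue problem, can be sketched for a single SAR constraint (the paper's point is precisely that this is insufficient when many constraints apply, which is where their SDP relaxation comes in):

```python
import numpy as np
from scipy.linalg import eigh

def worst_case_excitation(A, B):
    """Maximize the generalized Rayleigh quotient v^H A v / v^H B v,
    e.g. local deposition matrix A subject to a global budget v^H B v <= 1.
    A, B Hermitian, B positive definite.  Returns the optimal channel
    excitation (scaled so v^H B v = 1) and the worst-case ratio."""
    w, V = eigh(A, B)           # generalized eigenvalues, ascending
    v = V[:, -1]                # eigenvector of the largest eigenvalue
    v = v / np.sqrt(np.real(np.conj(v) @ B @ v))
    return v, w[-1]

# Toy 4-channel example with random Hermitian matrices (not field data).
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
N = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M @ M.conj().T
B = N @ N.conj().T + 4.0 * np.eye(4)
v, worst = worst_case_excitation(A, B)
```

With several constraint matrices B_1, ..., B_m this single-eigenproblem trick no longer yields the true maximum, which motivates the semidefinite relaxation derived in the paper.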
Singh, Ajay V; Gollner, Michael J
2016-06-01
Modeling the realistic burning behavior of condensed-phase fuels has remained out of reach, in part because of an inability to resolve the complex interactions occurring at the interface between gas-phase flames and condensed-phase fuels. The current research provides a technique to explore the dynamic relationship between a combustible condensed fuel surface and gas-phase flames in laminar boundary layers. Experiments have previously been conducted in both forced and free convective environments over both solid and liquid fuels. A unique methodology, based on the Reynolds Analogy, was used to estimate local mass burning rates and flame heat fluxes for these laminar boundary layer diffusion flames utilizing local temperature gradients at the fuel surface. Local mass burning rates and convective and radiative heat feedback from the flames were measured in both the pyrolysis and plume regions by using temperature gradients mapped near the wall by a two-axis traverse system. These experiments are time-consuming and can be challenging to design as the condensed fuel surface burns steadily for only a limited period of time following ignition. The temperature profiles near the fuel surface need to be mapped during steady burning of a condensed fuel surface at a very high spatial resolution in order to capture reasonable estimates of local temperature gradients. Careful corrections for radiative heat losses from the thermocouples are also essential for accurate measurements. For these reasons, the whole experimental setup needs to be automated with a computer-controlled traverse mechanism, eliminating most errors due to positioning of a micro-thermocouple. An outline of steps to reproducibly capture near-wall temperature gradients and use them to assess local burning rates and heat fluxes is provided.
Combined use of heat and saline tracer to estimate aquifer properties in a forced gradient test
Colombani, N.; Giambastiani, B. M. S.; Mastrocicco, M.
2015-06-01
Usually electrolytic tracers are employed for subsurface characterization, but the interpretation of tracer test data collected by low-cost techniques, such as electrical conductivity logging, can be biased by cation exchange reactions. To characterize the aquifer transport properties, a combined saline and heat forced-gradient test was employed. The field site, located near Ferrara (Northern Italy), is a well-characterized site, which covers an area of 200 m2 and is equipped with a grid of 13 monitoring wells. A two-well (injection and pumping) system was employed to perform the forced-gradient test, and a straddle packer was installed in the injection well to avoid in-well artificial mixing. The simultaneous continuous monitoring of hydraulic head, electrical conductivity and temperature within the wells made it possible to obtain a robust dataset, which was then used to accurately simulate injection conditions, to calibrate a 3D transient flow and transport model and to obtain aquifer properties at small scale. The transient groundwater flow and solute-heat transport model was built using SEAWAT. The significance of the results was further investigated by comparing them with already published column experiments and a natural-gradient tracer test performed in the same field. The test procedure shown here can provide a fast and low-cost technique to characterize coarse-grained aquifer properties, although some limitations can be highlighted, such as the small value of the dispersion coefficient compared to values obtained by the natural-gradient tracer test, or the fast depletion of the heat signal due to high thermal diffusivity.
DBKGrad: An R Package for Mortality Rates Graduation by Discrete Beta Kernel Techniques
Directory of Open Access Journals (Sweden)
Angelo Mazza
2014-04-01
We introduce the R package DBKGrad, conceived to facilitate the use of kernel smoothing in graduating mortality rates. The package implements univariate and bivariate adaptive discrete beta kernel estimators. Discrete kernels are preferred because, in this context, variables such as age, calendar year and duration are pragmatically considered as discrete, and the use of beta kernels is motivated by the fact that it reduces boundary bias. Furthermore, when data on exposures to the risk of death are available, the use of an adaptive bandwidth, which may be selected by cross-validation, can provide additional benefits. To exemplify the use of the package, an application to Italian mortality rates, for different ages and calendar years, is presented.
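One simple variant of discrete beta kernel graduation can be sketched as follows (an illustration, not the DBKGrad implementation; the exact parameterization and the exposure-driven adaptive bandwidth are not reproduced):

```python
import numpy as np
from scipy.stats import beta

def discrete_beta_smooth(rates, h=0.1):
    """Graduate crude rates observed at ages 0..m with a discrete beta
    kernel: at evaluation age x (rescaled to [0, 1]) the ages receive
    weights from a Beta(x/h + 1, (1 - x)/h + 1) density, whose shape
    adapts near the age boundaries and so reduces boundary bias."""
    rates = np.asarray(rates, dtype=float)
    m = rates.size - 1
    ages = np.arange(m + 1) / m  # rescaled age grid in [0, 1]
    out = np.empty_like(rates)
    for i, xv in enumerate(ages):
        w = beta.pdf(ages, xv / h + 1.0, (1.0 - xv) / h + 1.0)
        out[i] = np.sum(w * rates) / np.sum(w)
    return out

# Graduate a toy sequence of crude mortality rates.
crude = np.linspace(0.01, 0.1, 21)
smooth = discrete_beta_smooth(crude)
```

Unlike a symmetric kernel, the beta weights never place mass outside the observed age range, which is the boundary-bias argument made in the abstract.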
Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid
2018-06-01
This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel that satisfies nonlinear boundary conditions, the standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is, therefore, to construct an iterative method by combining the reproducing kernel Hilbert space method with a shooting-like technique to solve the mentioned problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature. In this paper, we present error estimation for the reproducing kernel method for nonlinear boundary value problems, probably for the first time. Some numerical results are given to demonstrate the applicability of the method.
Tong, Kangkang; Fang, Andrew; Yu, Huajun; Li, Yang; Shi, Lei; Wang, Yangjun; Wang, Shuxiao; Ramaswami, Anu
2017-12-01
Utilizing low-grade waste heat from industries to heat and cool homes and businesses through fourth-generation district energy systems (DES) is a novel strategy to reduce energy use. This paper develops a generalizable methodology to estimate the energy-saving potential for heating/cooling in 20 cities in two Chinese provinces, representing cold-winter and hot-summer regions respectively. We also conduct a life-cycle analysis of the new infrastructure required for energy exchange in DES. Results show that the heating and cooling energy use reduction from this waste heat exchange strategy varies widely based on the mix of industrial, residential and commercial activities, and climate conditions in cities. Low-grade heat is found to be the dominant component of the waste heat released by industries, which can be reused for both district heating and cooling in fourth-generation DES, yielding energy use reductions of 12%-91% (average 58%) for heating and 24%-100% (average 73%) for cooling in the different cities based on annual exchange potential. Incorporating seasonality and multiple energy exchange pathways resulted in energy savings of 0%-87%. The life-cycle impact of the added infrastructure was small: <3% (heating) and 1.9%-6.5% (cooling) of the carbon emissions from fuel use in current heating or cooling systems, indicating net carbon savings. This generalizable approach to delineating waste heat potential can help determine suitable cities for the widespread application of industrial waste heat re-utilization.
International Nuclear Information System (INIS)
Ahn, Yoonhan; Cho, Seong Kuk; Lee, Jeong Ik
2015-01-01
The heat sink temperature conditions are taken from the annual database of sea water temperature in the East Sea. When the heat sink temperature increases, the compressor inlet temperature can be affected and a sudden power decrease can occur due to the large water pumping power; when designing the water pump, the pumping margin should be considered as well. As part of Prototype Generation IV Sodium-cooled Fast Reactor (PG-SFR) development, the supercritical CO2 (S-CO2) cycle is considered one of the promising candidates that can potentially replace the steam Rankine cycle. The S-CO2 cycle can achieve distinctively high efficiency compared to other Brayton cycles and even competitive performance with the steam Rankine cycle in the mild turbine inlet temperature region. Previous studies explored the optimum size of the S-CO2 cycle considering component designs including turbomachinery, heat exchangers and pipes. Based on the preliminary design, the thermal efficiency is 31.5% when CO2 is sufficiently cooled to the design temperature. However, S-CO2 compressor performance is highly influenced by the inlet temperature, and the compressor inlet temperature can change when the heat sink temperature, in this case the sea water temperature, varies. To estimate the S-CO2 cycle performance of the PG-SFR in various regions, a quasi-static system analysis code for the S-CO2 cycle was developed by the KAIST research team. An S-CO2 cycle for the PG-SFR is designed and assessed for off-design performance under heat sink temperature variation
Bayesian Kernel Mixtures for Counts.
Canale, Antonio; Dunson, David B
2011-12-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
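The rounded-kernel idea can be sketched for a single Gaussian kernel: count j receives the Gaussian mass on [j, j+1), with thresholds a_0 = -inf and a_j = j so that all mass below 1 is assigned to zero (the paper mixes such kernels nonparametrically, which this sketch does not attempt):

```python
import numpy as np
from scipy.stats import norm

def rounded_gaussian_pmf(j, mu, sigma):
    """Count pmf obtained by rounding a Gaussian: P(Y = j) is the
    Gaussian mass on [j, j + 1) for j >= 1, with all mass below 1
    assigned to j = 0 (thresholds a_0 = -inf, a_j = j)."""
    j = np.asarray(j)
    upper = norm.cdf(j + 1, loc=mu, scale=sigma)
    lower = np.where(j == 0, 0.0, norm.cdf(j, loc=mu, scale=sigma))
    return upper - lower

# Valid count pmf for any mu, sigma; variance can be below the mean,
# which a Poisson kernel cannot represent.
p = rounded_gaussian_pmf(np.arange(60), mu=3.0, sigma=1.0)
```

Because mu and sigma are free, a mixture of such kernels can smoothly represent under- and over-dispersed count distributions, which is the flexibility argued for in the abstract.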
Lu, Yang; Dong, Jianzhi; Steele-Dunne, Susan; van de Giesen, Nick
2016-04-01
This study is focused on estimating surface sensible and latent heat fluxes from land surface temperature (LST) time series and soil moisture observations. Surface turbulent heat fluxes interact with the overlying atmosphere and play a crucial role in meteorology, hydrology and other climate-related fields, but in-situ measurements are costly and difficult. It has been demonstrated that the time series of LST contains information on energy partitioning and that surface turbulent heat fluxes can be determined from assimilation of LST. These studies are mainly based on two assumptions: (1) a monthly value of the bulk heat transfer coefficient under neutral conditions (CHN), which scales the sum of the fluxes, and (2) an evaporation fraction (EF) which stays constant during the near-peak hours of the day. Previous studies have applied variational and ensemble approaches to this problem. Here the newly developed particle batch smoother (PBS) algorithm is adopted to test its capability in this application. The PBS can be seen as an extension of the standard particle filter (PF) in which the states and parameters within a fixed window are updated in a batch using all observations in the window. The aim of this study is two-fold. First, the PBS is used to assimilate only the LST time series into the force-restore model to estimate fluxes. Second, a simple soil water transfer scheme is introduced to evaluate the benefit of assimilating soil moisture observations simultaneously. The experiments are implemented using the First ISLSCP (International Satellite Land Surface Climatology Project) Field Experiment (FIFE) data. It is shown that the restored LST time series using the PBS agrees very well with observations, and that assimilating LST significantly improves the flux estimation at both daily and half-hourly time scales. When soil moisture is introduced to further constrain EF, the accuracy of the estimated EF is greatly improved. Furthermore, the RMSEs of the retrieved fluxes are effectively reduced at both
Cross, Alan; Collard, Mark; Nelson, Andrew
2008-06-18
The conventional method of estimating heat balance during locomotion in humans and other hominins treats the body as an undifferentiated mass. This is problematic because the segments of the body differ with respect to several variables that can affect thermoregulation. Here, we report a study that investigated the impact on heat balance during locomotion of inter-segment differences in three of these variables: surface area, skin temperature and rate of movement. The approach adopted in the study was to generate heat balance estimates with the conventional method and then compare them with heat balance estimates generated with a method that takes into account inter-segment differences in surface area, skin temperature and rate of movement. We reasoned that, if the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement affect heat balance during locomotion is correct, the estimates yielded by the two methods should be statistically significantly different. Anthropometric data were collected on seven adult male volunteers. The volunteers then walked on a treadmill at 1.2 m/s while 3D motion capture cameras recorded their movements. Next, the conventional and segmented methods were used to estimate the volunteers' heat balance while walking in four ambient temperatures. Lastly, the estimates produced with the two methods were compared with the paired t-test. The estimates of heat balance during locomotion yielded by the two methods are significantly different. Those yielded by the segmented method are significantly lower than those produced by the conventional method. Accordingly, the study supports the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement impact heat balance during locomotion. This has important implications not only for current understanding of heat balance during locomotion in hominins but also for how future research on this topic should be approached.
Directory of Open Access Journals (Sweden)
Alan Cross
Full Text Available The conventional method of estimating heat balance during locomotion in humans and other hominins treats the body as an undifferentiated mass. This is problematic because the segments of the body differ with respect to several variables that can affect thermoregulation. Here, we report a study that investigated the impact on heat balance during locomotion of inter-segment differences in three of these variables: surface area, skin temperature and rate of movement. The approach adopted in the study was to generate heat balance estimates with the conventional method and then compare them with heat balance estimates generated with a method that takes into account inter-segment differences in surface area, skin temperature and rate of movement. We reasoned that, if the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement affect heat balance during locomotion is correct, the estimates yielded by the two methods should be statistically significantly different. Anthropometric data were collected on seven adult male volunteers. The volunteers then walked on a treadmill at 1.2 m/s while 3D motion capture cameras recorded their movements. Next, the conventional and segmented methods were used to estimate the volunteers' heat balance while walking in four ambient temperatures. Lastly, the estimates produced with the two methods were compared with the paired t-test. The estimates of heat balance during locomotion yielded by the two methods are significantly different. Those yielded by the segmented method are significantly lower than those produced by the conventional method. Accordingly, the study supports the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement impact heat balance during locomotion. This has important implications not only for current understanding of heat balance during locomotion in hominins but also for how future research on this topic should be approached.
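The contrast between the two methods can be illustrated with a toy convective-exchange calculation. The segment areas, skin temperatures, and heat transfer coefficients below are hypothetical, and a real heat balance model includes radiative and evaporative terms as well; the point is only that summing per-segment exchanges differs from using whole-body means when the coefficients vary between segments (here because limbs move faster than the trunk).

```python
def convective_loss(h, area_m2, t_skin, t_air):
    """Convective heat exchange (W): h * A * (Tskin - Tair)."""
    return h * area_m2 * (t_skin - t_air)

# hypothetical segment data: (surface area m^2, skin temp degC, h W/m^2/K);
# h varies per segment because the limbs move faster than the trunk
segments = {
    "trunk": (0.60, 35.0, 8.0),
    "arms":  (0.30, 33.0, 12.0),
    "legs":  (0.55, 32.0, 14.0),
}
t_air = 25.0

# conventional method: one undifferentiated mass with area-weighted means
area_total = sum(a for a, _, _ in segments.values())
t_mean = sum(a * t for a, t, _ in segments.values()) / area_total
h_mean = sum(a * h for a, _, h in segments.values()) / area_total
q_conventional = convective_loss(h_mean, area_total, t_mean, t_air)

# segmented method: sum the per-segment exchanges
q_segmented = sum(convective_loss(h, a, t, t_air)
                  for a, t, h in segments.values())
```

With these illustrative numbers the segmented total comes out below the whole-body estimate, the same direction of difference the study reports.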
GPM SLH: Convective Latent Heating Estimated with GPM Dual-frequency Precipitation Radar Data
Takayabu, Y. N.; Hamada, A.; Yokoyama, C.; Ikuta, Y.; Shige, S.; Yamaji, M.; Kubota, T.
2017-12-01
Three-dimensional diabatic heating distributions play an essential role in determining the large-scale circulation, as well as in generating the mesoscale circulations associated with tropical convection (e.g. Hartmann et al., 1984; Houze et al., 1982). For mid-latitude systems, too, diabatic heating contributes to the generation of potential vorticity, resulting, for example, in explosive intensification of mid-latitude storms (Boettcher and Wernli, 2011). Previously, with TRMM PR data, we developed a Spectral Latent Heating algorithm (SLH; Shige et al., 2004, etc.) for the 36N-36S region. It was based on spectral LH tables produced from a simulation with the Goddard Cloud Ensemble Model forced with the TOGA-COARE data. With GPM DPR, the observation region is extended to 65N-65S. Here, we introduce a new version of the SLH algorithm that is applicable also to mid-latitude precipitation. The new global GPM SLH ver. 5 product was released as one of the NASA/JAXA GPM standard products on July 11, 2017. For the GPM SLH mid-latitude algorithm, we employ the Japan Meteorological Agency (JMA)'s high-resolution (horizontally 2 km) Local Forecast Model (LFM) to construct the look-up tables (LUTs). In collaboration with JMA's forecast group, forecast data for 8 extratropical cyclone cases were collected and utilized. For mid-latitude precipitation, we have to deal with large temperature gradients and a complex relationship between the freezing level and the cloud-base level. LUTs are constructed for LH, Q1-QR, and Q2 (Yanai et al., 1973) for six different precipitation types: convective and shallow-stratiform LUTs are built against precipitation-top height, while for deep stratiform and other precipitation the LUTs are built against maximum precipitation to handle the unknown cloud bases. Finally, three-dimensional convective latent heating is retrieved using the LUTs and precipitation profile data from GPM 2AKu. As a consistency check, we confirm that the retrieved LH closely resembles the simulated LH. We also confirm good continuity of
Dielectric properties of almond kernels associated with radio frequency and microwave pasteurization
Li, Rui; Zhang, Shuang; Kou, Xiaoxi; Ling, Bo; Wang, Shaojin
2017-02-01
To develop advanced pasteurization treatments based on radio frequency (RF) or microwave (MW) energy, the dielectric properties of almond kernels were measured using an open-ended coaxial-line probe and an impedance analyzer at frequencies between 10 and 3000 MHz, moisture contents between 4.2% and 19.6% w.b., and temperatures between 20 and 90 °C. The results showed that both the dielectric constant and the loss factor of the almond kernels decreased sharply with increasing frequency over the RF range (10-300 MHz), but only gradually over the measured MW range (300-3000 MHz). Both dielectric constant and loss factor increased with increasing temperature and moisture content, with the increase being largest at the higher temperature and moisture levels. Quadratic polynomial equations were developed to fit the relationship between the dielectric constant or loss factor at 27, 40, 915 or 2450 MHz and the sample temperature/moisture content, with R2 greater than 0.967. The penetration depth of the electromagnetic wave into the samples decreased with increasing frequency (27-2450 MHz), moisture content (4.2-19.6% w.b.) and temperature (20-90 °C). Temperature profiles of RF-heated almond kernels at three moisture levels were obtained by experiment and by computer simulation based on the measured dielectric properties. Based on the results of this study, RF treatment has the potential for practical use in the pasteurization of almond kernels with acceptable heating uniformity.
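The penetration depth follows from the measured dielectric constant ε′ and loss factor ε″ via the standard lossy-dielectric formula, which also explains the frequency trend the abstract reports (depth falls roughly as 1/f). A sketch with illustrative, not measured, property values:

```python
import math

def penetration_depth(freq_hz, eps_prime, eps_loss):
    """Power penetration depth (m) of an EM wave in a lossy dielectric:
    dp = c / (2*pi*f * sqrt(2*eps' * (sqrt(1 + (eps''/eps')^2) - 1)))."""
    c = 299_792_458.0  # speed of light in vacuum, m/s
    root = math.sqrt(1.0 + (eps_loss / eps_prime) ** 2) - 1.0
    return c / (2.0 * math.pi * freq_hz * math.sqrt(2.0 * eps_prime * root))

# illustrative values for a low-loss food material (not the measured data)
dp_rf = penetration_depth(27.12e6, 5.0, 0.8)   # RF, 27.12 MHz
dp_mw = penetration_depth(2.45e9, 5.0, 0.8)    # microwave, 2450 MHz
```

For fixed dielectric properties the RF depth exceeds the microwave depth by roughly the frequency ratio, which is why RF heating penetrates bulk kernels far better than microwave heating.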
Digital Repository Service at National Institute of Oceanography (India)
Shenoi, S.S.C.; Shankar, D.; Shetye, S.R.
The accuracy of data from the Simple Ocean Data Assimilation (SODA) model for estimating the heat budget of the upper ocean is tested in the Arabian Sea and the Bay of Bengal. SODA is able to reproduce the changes in heat content when...
Putting Priors in Mixture Density Mercer Kernels
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic, knowledge-driven data mining based on the theory of Mercer kernels: highly nonlinear, symmetric, positive-definite mappings from the original image space to a very high-, possibly infinite-dimensional, feature space. We describe a new method called Mixture Density Mercer Kernels that learns the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel through a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments was generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library containing templates for learning and knowledge-discovery algorithms, such as different versions of EM, and for numeric optimization methods such as conjugate-gradient methods. Template instantiation is supported by symbolic-algebraic computation, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger data sample by approximately 2%.
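The basic construction behind a mixture-density kernel can be sketched as follows: an ensemble of mixture models assigns each point a vector of cluster-membership posteriors, and the kernel averages the inner products of those posteriors across models, so two points are similar when the ensemble tends to place them in the same clusters. The toy posteriors below are illustrative, not the paper's fitted models.

```python
import numpy as np

def mixture_density_kernel(posteriors_x, posteriors_y):
    """Kernel value from an ensemble of mixture models: the average over
    models of the inner product of cluster-membership posteriors. Positive
    definite because each term is an inner product in posterior space."""
    # posteriors_*: list over models; each entry is a (n_clusters,) vector
    vals = [float(np.dot(px, py)) for px, py in zip(posteriors_x, posteriors_y)]
    return sum(vals) / len(vals)

# toy ensemble of two mixture models, three clusters each
px = [np.array([0.9, 0.1, 0.0]), np.array([0.8, 0.2, 0.0])]
py = [np.array([0.85, 0.15, 0.0]), np.array([0.1, 0.9, 0.0])]
k_xy = mixture_density_kernel(px, py)
k_xx = mixture_density_kernel(px, px)
```

Prior knowledge enters through the Bayesian fit of each mixture model; the kernel itself is just this agreement average.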
Anisotropic hydrodynamics with a scalar collisional kernel
Almaalol, Dekrayat; Strickland, Michael
2018-04-01
Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λφ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times of the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.
Caspers, Friedhelm; Ruggiero, F; Tan, J
1999-01-01
An estimate of the resistive losses in the LHC beam screen is given from cold surface-resistance measurements using the shielded-pair technique, with particular emphasis on the effect of a high magnetic field. Two different copper-coating methods, namely electro-deposition and co-lamination, have been evaluated. Experimental data are compared with theories including the anomalous skin effect and the magneto-resistance effect. It is shown that whether or not the theory underestimates the losses depends strongly on the RRR value, on the magnetic field and on the surface characteristics. In the pessimistic case and for nominal machine parameters, the estimated beam-induced resistive-wall heating can be as large as 260 mW/m for two circulating beams.
Directory of Open Access Journals (Sweden)
G. Meneghetti
2016-01-01
Full Text Available Fatigue crack initiation and propagation involve plastic strains that require work to be done on the material. Most of this irreversible energy is dissipated as heat, and consequently the material temperature increases. Since the heat is an indicator of the intense plastic strains occurring at the tip of a propagating fatigue crack, when combined with Neuber's structural volume concept it might be used as an experimentally measurable parameter to assess the fatigue damage accumulation rate of cracked components. On the basis of a previously published theoretical model, in this work the heat energy dissipated in a volume surrounding the crack tip is estimated experimentally from the radial temperature profiles measured by means of an infrared camera. The definition of the structural volume in a fatigue sense is beyond the scope of the present paper. The experimental crack propagation tests were carried out on hot-rolled, 6-mm-thick AISI 304L stainless steel specimens subjected to completely reversed axial fatigue loading.
International Nuclear Information System (INIS)
Chen Qiang; Ren Xuemei; Na Jing
2011-01-01
Highlights: • Model uncertainty of the system is approximated by a multiple-kernel LSSVM. • Approximation errors and disturbances are compensated in the controller design. • Asymptotic anti-synchronization is achieved under model uncertainty and disturbances. Abstract: In this paper, we propose a robust anti-synchronization scheme based on multiple-kernel least squares support vector machine (MK-LSSVM) modeling for two uncertain chaotic systems. The multiple-kernel regression, which is a linear combination of basic kernels, is designed to approximate the system uncertainties by constructing a multiple-kernel Lagrangian function and computing the corresponding regression parameters. Then, a robust feedback control based on MK-LSSVM modeling is presented, and an improved update law is employed to estimate the unknown bound of the approximation error. The proposed control scheme can guarantee the asymptotic convergence of the anti-synchronization errors in the presence of system uncertainties and external disturbances. Numerical examples are provided to show the effectiveness of the proposed method.
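The "linear combination of basic kernels" underlying MK-LSSVM-style models can be sketched as follows; the choice of RBF base kernels, the weights, and the bandwidths below are illustrative, not the paper's learned values. A valid combination with non-negative weights remains symmetric and positive semi-definite:

```python
import numpy as np

def combined_kernel(X, Z, betas, gammas):
    """Multiple-kernel matrix as a linear combination of RBF base kernels,
    K = sum_k beta_k * exp(-gamma_k * ||x - z||^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # squared distances
    K = np.zeros_like(d2, dtype=float)
    for beta, gamma in zip(betas, gammas):
        K += beta * np.exp(-gamma * d2)
    return K

X = np.random.default_rng(0).normal(size=(5, 2))
# one wide and one narrow base kernel, combined with weights summing to 1
K = combined_kernel(X, X, betas=[0.7, 0.3], gammas=[0.5, 5.0])
```

In the paper the combination weights are obtained from the multiple-kernel Lagrangian; here they are fixed by hand purely for illustration.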
Ahmed, Qasim Zeeshan
2013-01-01
In this letter, a new detector is proposed for an amplify-and-forward (AF) relaying system. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed. This new kernel provides more flexibility and encompasses the Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance, as compared to the minimum mean square error (MMSE) receiver, when communicating over Rayleigh fading channels.
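A generalized Gaussian kernel of the kind described, with the Gaussian shape as a special case (β = 2) and the uniform kernel as the limit β → ∞, can be sketched for density estimation as follows. The shape parameter, window width, and toy data are illustrative, not the letter's optimized values.

```python
import math

def gg_kernel(u, alpha=1.0, beta=2.0):
    """Generalized Gaussian kernel K(u) = beta/(2*alpha*Gamma(1/beta))
    * exp(-(|u|/alpha)^beta); beta=2 gives a Gaussian shape, large beta
    approaches a uniform kernel on [-alpha, alpha]."""
    norm = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return norm * math.exp(-(abs(u) / alpha) ** beta)

def kde(x, samples, h=0.5, beta=2.0):
    """Kernel density estimate at x with window width h."""
    return sum(gg_kernel((x - s) / h, beta=beta) for s in samples) / (len(samples) * h)

data = [-0.2, 0.0, 0.1, 0.3, 1.5]
p0 = kde(0.0, data)   # near the bulk of the samples
p3 = kde(3.0, data)   # far from the samples
```

The normalizing constant makes each kernel integrate to one, so the estimate is itself a density; varying β trades tail weight against peakedness.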
International Nuclear Information System (INIS)
Jaeger, Wadim; Espinoza, Victor H. Sanchez; Schneider, Niko; Hurtado, Antonio
2009-01-01
Within the framework of the Generation IV International Forum, six innovative reactor concepts are the subject of comprehensive investigations. In some projects supercritical water will be considered as coolant and moderator (as for the High Performance Light Water Reactor) or as secondary working fluid (one possible option for Liquid Metal-cooled Fast Reactors). Supercritical water is characterized by a pronounced change of the thermo-physical properties when crossing the pseudo-critical line, which goes hand in hand with a change in the heat transfer (HT) behavior. Hence, it is essential to properly estimate the heat-transfer coefficient and, subsequently, the wall temperature. The scope of this paper is to present and discuss the activities at the Institute for Reactor Safety (IRS) related to the implementation of correlations for wall-to-fluid HT at supercritical conditions in best-estimate codes like TRACE, as well as their validation. It is important to validate TRACE before applying it to safety analyses of the HPLWR or of other reactor systems. In the past three decades various experiments have been performed all over the world to reveal the peculiarities of wall-to-fluid HT at supercritical conditions. Several different heat-transfer phenomena, such as HT enhancement (due to higher Prandtl numbers in the vicinity of the pseudo-critical point) or HT deterioration (due to strong property variations), were observed. Since TRACE is a component-based system code with a finite-volume method, the resolution capabilities are limited and not all physical phenomena can be modeled properly. But best-estimate system codes are nowadays the preferred option for safety-related investigations of full plants or other integral systems. Thus, increasing the confidence in such codes is of high priority. In this paper, the post-test analysis of experiments with supercritical parameters will be presented. For that reason various correlations for the HT, which consider the characteristics
Energy Technology Data Exchange (ETDEWEB)
Davoudi, Mehdi, E-mail: mehdi.davoudi@polimi.it [Department of Electrical and Computer Engineering, Buein Zahra Technical University, Buein Zahra, Qazvin (Iran, Islamic Republic of); Davoudi, Mohsen, E-mail: davoudi@eng.ikiu.ac.ir [Department of Electrical Engineering, Imam Khomeini International University, Qazvin, 34148-96818 (Iran, Islamic Republic of)
2017-06-15
Highlights: • A couple of algorithms are proposed to diagnose whether Electron Cyclotron Heating (ECH) power is deposited properly at the expected deposition minor radius. • The algorithms are based on Bayesian theory and fuzzy logic. • The algorithms are tested on off-line experimental data acquired from the Frascati Tokamak Upgrade (FTU), Frascati, Italy. • Uncertainties and evidences derived from the combination of the online information, formed by the measured diagnostic data, and the prior information are also estimated. - Abstract: In thermonuclear fusion systems, the new plasma control systems use measured on-line information acquired from different sensors and prior information obtained from predictive plasma models in order to stabilize magnetohydrodynamic (MHD) activity in a tokamak. Suppression of plasma instabilities is a key issue in improving the confinement time of controlled thermonuclear fusion with tokamaks. This paper proposes a couple of algorithms based on Bayesian theory and fuzzy logic to diagnose whether Electron Cyclotron Heating (ECH) power is deposited properly at the expected deposition minor radius (r_DEP). Both algorithms also estimate uncertainties and evidences derived from the combination of the online information formed by the measured diagnostic data and the prior information. The algorithms have been employed on a set of off-line ECE channel data acquired from experimental shot number 21364 at the Frascati Tokamak Upgrade (FTU), Frascati, Italy.
Cysewski, Piotr
2016-07-01
The values of excess heat characterizing a set of 493 simple binary eutectic mixtures and 965 cocrystals were estimated under supercooled-liquid conditions. A confusion matrix was applied as a predictive analytical tool for distinguishing between the two subsets. Among the seven levels of computation considered, the BP-TZVPD-FINE approach was found to be the most precise in terms of the lowest percentage of misclassified positive cases. The much less computationally demanding AM1 and PM7 semiempirical quantum chemistry methods are likewise worth considering for estimating heat-of-mixing values. Despite the intrinsic limitations of modeling miscibility in the solid state based on component affinities in liquids under supercooled conditions, it is possible to define adequate criteria for classifying coformer pairs as simple binary eutectics or cocrystals. The predicted precision was found to be 12.8%, which is quite acceptable bearing in mind the simplicity of the approach. However, tuning the theoretical screening to such precision implies the exclusion of many positive cases, and this wastage exceeds 31% of cocrystals classified as false negatives. Copyright © 2016 Elsevier Inc. All rights reserved.
Croce, Olivier; Chevenet, François; Christen, Richard
2008-07-01
The efficiency of molecular methods involving DNA/DNA hybridizations depends on accurate prediction of the melting temperature (Tm) of the duplex. Many software tools are available for Tm calculations, but difficulties arise when one wishes to check whether a given oligomer (PCR primer or probe) hybridizes well or not on more than a single sequence. Moreover, the presence of mismatches within the duplex is not sufficient to estimate specificity, as it does not always significantly decrease the Tm. OHM (OligoHeatMap) is an online tool able to provide estimates of Tm for a set of oligomers and a set of aligned sequences, not only as text files of complete results but also in a graphical way: Tm values are translated into colors and displayed as a heat map image, either standalone or for use by software such as TreeDyn to be included in a phylogenetic tree. OHM is freely available at http://bioinfo.unice.fr/ohm/, with links to the full source code and online help.
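The heat-map idea can be illustrated with a deliberately simplified Tm estimate: the Wallace rule below stands in for OHM's more elaborate thermodynamic model, and the linear blue-to-red ramp is a generic color mapping, not OHM's actual palette.

```python
def tm_wallace(oligo):
    """Rough melting-temperature estimate (Wallace rule):
    Tm = 2*(A+T) + 4*(G+C), in degrees C. Illustration only; real tools
    use nearest-neighbour thermodynamics."""
    s = oligo.upper()
    return 2 * (s.count("A") + s.count("T")) + 4 * (s.count("G") + s.count("C"))

def tm_to_color(tm, tm_min=40.0, tm_max=80.0):
    """Map a Tm value linearly onto a blue (low) to red (high) RGB triple."""
    t = min(max((tm - tm_min) / (tm_max - tm_min), 0.0), 1.0)
    return (int(255 * t), 0, int(255 * (1.0 - t)))

probes = ["ATATATATATAT", "GCGCGCGCGCGC"]
tms = [tm_wallace(p) for p in probes]
colors = [tm_to_color(t) for t in tms]
```

Arranging such color cells in an oligomer-by-sequence grid gives exactly the kind of heat map image the tool produces.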
Directory of Open Access Journals (Sweden)
Xuanyu Wang
2017-12-01
Full Text Available Terrestrial latent heat flux (LE) is a key component of the global terrestrial water, energy, and carbon exchanges. Accurate estimation of LE from Moderate Resolution Imaging Spectroradiometer (MODIS) data remains a major challenge. In this study, we estimated daily LE for different plant functional types (PFTs) across North America using three machine learning algorithms: artificial neural network (ANN), support vector machine (SVM), and multivariate adaptive regression spline (MARS), driven by MODIS data and Modern Era Retrospective Analysis for Research and Applications (MERRA) meteorology data. These three predictive algorithms, which were trained and validated using observed LE over the period 2000–2007, all proved to be accurate. However, ANN outperformed the other two algorithms for the majority of the tested configurations for most PFTs and was the only method that reached 80% precision for LE estimation. We also applied the three machine learning algorithms to MODIS data and MERRA meteorology to map the average annual terrestrial LE of North America during 2002–2004 at a spatial resolution of 0.05°, which proved to be useful for estimating long-term LE over North America.
Some estimates of mirror plasma startup by neutral beam heating of pellet and gas cloud targets
International Nuclear Information System (INIS)
Shearer, J.W.; Willmann, P.A.
1978-01-01
Hot plasma buildup by neutral beam injection into an initially cold solid or gaseous target is found to be conceivable in large mirror machine experiments such as 2XIIB or MFTF. A simple analysis shows that existing neutral beam intensities are sufficient to ablate suitable targets to form a gas or vapor cloud. An approximate rate-equation model is used to follow the subsequent processes of ionization, heating, and hot plasma formation. Solutions of these rate equations are obtained by means of the "GEAR" techniques for solving "stiff" systems of differential equations. These solutions are in rough agreement with the 2XIIB stream plasma buildup experiment. They also predict that buildup on a suitable nitrogen-like target will occur in the MFTF geometry. In 2XIIB the solutions are marginal; buildup may be possible, but is not certain.
Analyzing kernel matrices for the identification of differentially expressed genes.
Directory of Open Access Journals (Sweden)
Xiao-Lei Xia
Full Text Available One of the most important applications of microarray data is the class prediction of biological samples. For this purpose, statistical tests have often been applied to identify the differentially expressed genes (DEGs), followed by the employment of state-of-the-art learning machines, including the Support Vector Machine (SVM) in particular. The SVM is a typical sample-based classifier whose performance comes down to how discriminative the samples are. However, DEGs identified by statistical tests are not guaranteed to result in a training dataset composed of discriminative samples. To tackle this problem, a novel gene ranking method, the Kernel Matrix Gene Selection (KMGS), is proposed. The rationale of the method, which is rooted in the fundamental ideas of the SVM algorithm, is described. The notion of "the separability of a sample", which is estimated by performing [Formula: see text]-like statistics on each column of the kernel matrix, is first introduced. The separability of a classification problem is then measured, from which the significance of a specific gene is deduced. Also described is a method of Kernel Matrix Sequential Forward Selection (KMSFS), which shares the KMGS method's essential ideas but proceeds in a greedy manner. On three public microarray datasets, our proposed algorithms achieved noticeably competitive performance in terms of the B.632+ error rate.
A kernel plus method for quantifying wind turbine performance upgrades
Lee, Giwhyun
2014-04-21
Power curves are commonly estimated using the binning method recommended by the International Electrotechnical Commission, which primarily incorporates wind speed information. When such power curves are used to quantify a turbine's upgrade, the results may not be accurate because many other environmental factors in addition to wind speed, such as temperature, air pressure, turbulence intensity, wind shear and humidity, all potentially affect the turbine's power output. Wind industry practitioners are aware of the need to filter out effects from environmental conditions. Toward that objective, we developed a kernel plus method that allows incorporation of multivariate environmental factors in a power curve model, thereby controlling the effects from environmental factors while comparing power outputs. We demonstrate that the kernel plus method can serve as a useful tool for quantifying a turbine's upgrade because it is sensitive to small and moderate changes caused by certain turbine upgrades. Although we demonstrate the utility of the kernel plus method in this specific application, the resulting method is a general, multivariate model that can connect other physical factors, as long as their measurements are available, with a turbine's power output, which may allow us to explore new physical properties associated with wind turbine performance. © 2014 John Wiley & Sons, Ltd.
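The core idea of folding several environmental covariates into a kernel-based power curve can be sketched with a plain Nadaraya-Watson estimator. This illustrates only the multivariate-kernel ingredient, not the full kernel plus method, and the data and bandwidths are made up:

```python
import numpy as np

def nw_power_estimate(x_query, X, y, bandwidths):
    """Nadaraya-Watson power estimate at x_query with a product Gaussian
    kernel over several environmental covariates (wind speed, density, ...)."""
    u = (X - x_query) / bandwidths               # (n, d) scaled residuals
    w = np.exp(-0.5 * np.sum(u ** 2, axis=1))    # product Gaussian kernel
    return float(np.dot(w, y) / w.sum())

# toy data: columns = wind speed (m/s), air density (kg/m^3)
X = np.array([[5.0, 1.20], [7.0, 1.22], [9.0, 1.18], [11.0, 1.21]])
y = np.array([150.0, 480.0, 1020.0, 1600.0])     # power (kW), illustrative
p = nw_power_estimate(np.array([8.0, 1.20]), X, y,
                      bandwidths=np.array([1.0, 0.02]))
```

Because the kernel conditions on all covariates at once, before/after power comparisons are made under like-for-like environmental conditions rather than within wind-speed bins alone.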
NLO corrections to the Kernel of the BKP-equations
Energy Technology Data Exchange (ETDEWEB)
Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)
2012-10-02
We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3 → 3 kernel, computed in the tree approximation.
Kernel maximum autocorrelation factor and minimum noise fraction transformations
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2010-01-01
in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...
2010-01-01
7 CFR 51.1441 - Half-kernel. United States Standards for Grades of Shelled Pecans, Definitions, § 51.1441: Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...
7 CFR 51.2296 - Three-fourths half kernel.
2010-01-01
Regulations of the Department of Agriculture, Agricultural Marketing Service (Standards), § 51.2296: Three-fourths half kernel means a portion of a half of a kernel which has more than...
7 CFR 981.401 - Adjusted kernel weight.
2010-01-01
Administrative Rules and Regulations, § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...
7 CFR 51.1403 - Kernel color classification.
2010-01-01
United States Standards for Grades of Pecans in the Shell, § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...
The Linux kernel as flexible product-line architecture
M. de Jonge (Merijn)
2002-01-01
The Linux kernel source tree is huge (>125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select what
Detoxification of Jatropha curcas kernel cake by a novel Streptomyces fimicarius strain.
Wang, Xing-Hong; Ou, Lingcheng; Fu, Liang-Liang; Zheng, Shui; Lou, Ji-Dong; Gomes-Laranjo, José; Li, Jiao; Zhang, Changhe
2013-09-15
A huge amount of kernel cake, which contains a variety of toxins including phorbol esters (tumor promoters), is projected to be generated yearly in the near future by the Jatropha biodiesel industry. We showed that the kernel cake strongly inhibited plant seed germination and root growth and was highly toxic to carp fingerlings, even though phorbol esters were undetectable by HPLC. Therefore it must be detoxified before disposal to the environment. A mathematical model was established to estimate the general toxicity of the kernel cake by determining the survival time of carp fingerlings. A new strain (Streptomyces fimicarius YUCM 310038), capable of degrading the total toxicity by more than 97% in a 9-day solid-state fermentation, was screened out from 578 strains, including 198 known strains and 380 strains isolated from air and soil. The kernel cake fermented by YUCM 310038 was nontoxic to plants and carp fingerlings and significantly promoted tobacco plant growth, indicating its potential to transform the toxic kernel cake into bio-safe animal feed or organic fertilizer, removing the environmental concern and reducing the cost of the Jatropha biodiesel industry. The microbial strain profile essential for kernel cake detoxification is discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
de Oliveira, R L; de Carvalho, G G P; Oliveira, R L; Tosto, M S L; Santos, E M; Ribeiro, R D X; Silva, T M; Correia, B R; de Rufino, L M A
2017-10-01
The objective of this study was to evaluate the effects of the inclusion of palm kernel (Elaeis guineensis) cake in diets for goats on feeding behaviors, rectal temperature, and cardiac and respiratory frequencies. Forty non-castrated male crossbred Boer goats (ten animals per treatment), with an average age of 90 days and an initial body weight of 15.01 ± 1.76 kg, were used. The goats were fed Tifton 85 (Cynodon spp.) hay and palm kernel cake supplemented at the rates of 0, 7, 14, and 21% of dry matter (DM). The feeding behaviors (rumination, feeding, and idling times) were observed for three 24-h periods. DM and neutral detergent fiber (NDF) intake values were estimated as the difference between the total DM and NDF contents of the feed offered and the total DM and NDF contents of the orts. There was no effect of palm kernel cake inclusion in goat diets on DM intake (P > 0.05). However, palm kernel cake promoted a linear increase (P < 0.05), while the palm kernel cakes had no effects (P > 0.05) on the chewing, feeding, and rumination efficiency (DM and NDF) or on the physiological variables. The use of up to 21% palm kernel cake in the diet of crossbred Boer goats maintained the feeding behaviors and did not change the physiological parameters of the goats; therefore, its use is recommended in the diet of these animals.
Digital signal processing with kernel methods
Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo
2018-01-01
A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems. Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machine statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...
Parsimonious Wavelet Kernel Extreme Learning Machine
Directory of Open Access Journals (Sweden)
Wang Qin
2015-11-01
In this study, a parsimonious scheme for the wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into the kernel extreme learning machine (KELM). In the wavelet analysis, bases localized in time and frequency were used to represent various signals effectively. The wavelet kernel extreme learning machine (WELM) maximized its capability to capture the essential features in "frequency-rich" signals. The proposed parsimonious algorithm incorporated significant wavelet kernel functions via iteration by virtue of a Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results on a synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
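The kernel at the heart of such models can be sketched directly. Below is a minimal, hypothetical illustration (not the paper's PWKELM): the translation-invariant Morlet-type wavelet kernel K(x, y) = prod_i h((x_i - y_i)/a) with mother wavelet h(u) = cos(1.75u)·exp(-u²/2), plugged into the standard kernel-ELM ridge solution β = (I/C + K)⁻¹y. The dilation a, regularization C, and the toy data are all illustrative assumptions.

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    # Translation-invariant wavelet kernel built from the Morlet-type
    # mother wavelet h(u) = cos(1.75*u) * exp(-u**2 / 2):
    #   K(x, y) = prod_i h((x_i - y_i) / a)
    u = (X[:, None, :] - Y[None, :, :]) / a
    return np.prod(np.cos(1.75 * u) * np.exp(-0.5 * u**2), axis=-1)

def kelm_fit(K, y, C=100.0):
    # Kernel-ELM output weights with ridge term: beta = (I/C + K)^(-1) y
    return np.linalg.solve(np.eye(len(K)) / C + K, y)

# toy regression on a noisy sine (illustrative data only)
rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 40)[:, None]
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=40)
beta = kelm_fit(wavelet_kernel(X, X), y)
y_hat = wavelet_kernel(X, X) @ beta  # fitted values at the training points
```

The parsimonious step of the paper (iteratively selecting significant kernel columns via Householder reflections) would operate on the columns of K before the solve; it is not reproduced here.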
Ensemble Approach to Building Mercer Kernels
National Aeronautics and Space Administration — This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive...
International Nuclear Information System (INIS)
Chen, W.-L.; Yang, Y.-C.
2009-01-01
In this study, a conjugate gradient method-based inverse algorithm is applied to estimate the unknown space- and time-dependent heat-transfer rate on the surface of the insulation layer of a double circular pipe heat exchanger using temperature measurements. It is assumed that no prior information is available on the functional form of the unknown heat-transfer rate; hence the procedure is classified as function estimation in inverse calculation. The temperature data obtained from the direct problem are used to simulate the temperature measurements. The accuracy of the inverse analysis is examined by using simulated exact and inexact temperature measurements. Results show that an excellent estimate of the space- and time-dependent heat-transfer rate can be obtained for the test case considered in this study.
Control Transfer in Operating System Kernels
1994-05-13
…microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the… review how I modified the Mach 3.0 kernel to use continuations. Because of Mach's message-passing microkernel structure, interprocess communication was… critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating…
Uranium kernel formation via internal gelation
International Nuclear Information System (INIS)
Hunt, R.D.; Collins, J.L.
2004-01-01
In the 1970s and 1980s, the U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering, and would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tri-structural isotropic (TRISO) fuel particles. The process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation, as well as the small changes to the feed composition, increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)
Quantum tomography, phase-space observables and generalized Markov kernels
International Nuclear Information System (INIS)
Pellonpää, Juha-Pekka
2009-01-01
We construct a generalized Markov kernel which transforms the observable associated with homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given for a 'Schrödinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. We also consider an example of a kernel state for which the generalized Markov kernel cannot be constructed.
Considering a non-polynomial basis for local kernel regression problem
Silalahi, Divo Dharma; Midi, Habshah
2017-01-01
A commonly used solution to the local kernel nonparametric regression problem is polynomial regression. In this study, we demonstrated the estimator and its properties, using the maximum likelihood estimator, for a non-polynomial basis such as the B-spline replacing the polynomial basis. This estimator allows flexibility in the selection of a bandwidth and a knot. The best estimator was selected by finding an optimal bandwidth and knot through minimizing the well-known generalized cross-validation function.
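For contrast with the B-spline approach above, here is a minimal sketch of the standard local kernel baseline: a Gaussian-kernel Nadaraya-Watson smoother whose bandwidth is chosen by minimizing the generalized cross-validation (GCV) score. The data and the candidate bandwidth grid are synthetic and purely illustrative, not from the study.

```python
import numpy as np

def nw_smooth_matrix(x, h):
    # Gaussian-kernel Nadaraya-Watson smoother matrix S, so that y_hat = S @ y
    d = (x[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * d**2)
    return w / w.sum(axis=1, keepdims=True)

def gcv_bandwidth(x, y, bandwidths):
    # Pick h minimizing the GCV score: n * RSS / (n - tr(S))^2
    n, best_h, best_score = len(x), None, np.inf
    for h in bandwidths:
        S = nw_smooth_matrix(x, h)
        resid = y - S @ y
        score = n * np.sum(resid**2) / (n - np.trace(S)) ** 2
        if score < best_score:
            best_h, best_score = h, score
    return best_h

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 100))
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=100)
h = gcv_bandwidth(x, y, [0.01, 0.02, 0.05, 0.1, 0.2])
y_hat = nw_smooth_matrix(x, h) @ y  # fitted curve at the sample points
```

Replacing the implicit local constant fit with a B-spline basis, as the abstract describes, changes the smoother matrix but leaves the bandwidth/knot selection loop structurally the same.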
Jung, Jooyeoun; Wang, Wenjie; McGorrin, Robert J; Zhao, Yanyun
2018-02-01
Moisture adsorption isotherms and storability of dried hazelnut inshells and kernels produced in Oregon were evaluated and compared among cultivars, including Barcelona, Yamhill, and Jefferson. Experimental moisture adsorption data were fitted to the Guggenheim-Anderson-de Boer (GAB) model, showing less hygroscopic behavior in Yamhill than in the other cultivars of inshells and kernels, owing to its lower content of carbohydrate and protein but higher content of fat. The safe levels of moisture content (MC, dry basis) of dried inshells and kernels for reaching a kernel water activity (aw) ≤0.65 were estimated using the GAB model as 11.3% and 5.0% for Barcelona, 9.4% and 4.2% for Yamhill, and 10.7% and 4.9% for Jefferson, respectively. Storage conditions (2 °C at 85% to 95% relative humidity [RH], 10 °C at 65% to 75% RH, and 27 °C at 35% to 45% RH), times (0, 4, 8, or 12 mo), and packaging methods (atmosphere vs. vacuum) affected MC, aw, bioactive compounds, lipid oxidation, and enzyme activity of dried hazelnut inshells and kernels. For inshells packaged in woven polypropylene bags, MC and aw of inshells and kernels (inside shells) increased at 2 and 10 °C but decreased at 27 °C during storage. For kernels, lipid oxidation and polyphenol oxidase activity also increased with extended storage time (P < 0.05), while vacuum packaging reduced moisture adsorption and preserved physicochemical and enzymatic stability during storage. The moisture adsorption isotherm of hazelnut inshells and kernels is useful for predicting the storability of nuts. This study found that water adsorption and storability varied among the different cultivars, with Yamhill less hygroscopic than Barcelona and Jefferson and thus more stable during storage. To ensure the food safety and quality of nuts during storage, each cultivar of kernels should be dried to a specific level of MC. Lipid oxidation and enzyme activity of kernels can increase with extended storage time. Vacuum packaging is recommended for kernels to reduce moisture adsorption
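A hedged sketch of how such a GAB-based safe-moisture estimate works, assuming the standard three-parameter GAB form; the isotherm data below are generated from the model itself purely for illustration and are not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, M0, C, K):
    # Guggenheim-Anderson-de Boer isotherm: equilibrium moisture content
    # (dry basis) vs. water activity aw, with monolayer moisture M0 and
    # energy constants C and K.
    return M0 * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

# illustrative adsorption data, generated from the model (NOT measured values)
aw = np.array([0.11, 0.23, 0.33, 0.43, 0.53, 0.65, 0.75, 0.85])
mc = gab(aw, 3.0, 12.0, 0.9)

# fit the three GAB parameters, then evaluate the isotherm at the
# aw <= 0.65 safety threshold to obtain the "safe" moisture content
popt, _ = curve_fit(gab, aw, mc, p0=[2.0, 10.0, 0.8])
safe_mc = gab(0.65, *popt)
```

In the study the same inversion is done per cultivar on measured isotherms, which is how cultivar-specific safe MC levels such as 11.3% (Barcelona inshells) are obtained.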
Sitompul, Monica Angelina
2015-01-01
The iodine value of several hydrogenated palm kernel oil (HPKO) and refined bleached deodorized palm kernel oil (RBDPKO) samples was determined by titration. The analysis gave iodine values of Hydrogenated Palm Kernel Oil (A) = 0.16 g I2/100 g, Hydrogenated Palm Kernel Oil (B) = 0.20 g I2/100 g, and Hydrogenated Palm Kernel Oil (C) = 0.24 g I2/100 g; and of Refined Bleached Deodorized Palm Kernel Oil (A) = 17.51 g I2/100 g, Refined Bleached Deodorized Palm Kernel ...
Brenner, Claire; Thiem, Christina Elisabeth; Wizemann, Hans-Dieter; Bernhardt, Matthias; Schulz, Karsten
2017-05-19
In this study, high-resolution thermal imagery acquired with a small unmanned aerial vehicle (UAV) is used to map evapotranspiration (ET) at a grassland site in Luxembourg. The land surface temperature (LST) information from the thermal imagery is the key input to a one-source and two-source energy balance model. While the one-source model treats the surface as a single uniform layer, the two-source model partitions the surface temperature and fluxes into soil and vegetation components. It thus explicitly accounts for the different contributions of both components to surface temperature as well as turbulent flux exchange with the atmosphere. Contrary to the two-source model, the one-source model requires an empirical adjustment parameter in order to account for the effect of the two components. Turbulent heat flux estimates of both modelling approaches are compared to eddy covariance (EC) measurements using the high-resolution input imagery UAVs provide. In this comparison, the effect of different methods for energy balance closure of the EC data on the agreement between modelled and measured fluxes is also analysed. Additionally, the sensitivity of the one-source model to the derivation of the empirical adjustment parameter is tested. Due to the very dry and hot conditions during the experiment, pronounced thermal patterns developed over the grassland site. These patterns result in spatially variable turbulent heat fluxes. The model comparison indicates that both models are able to derive ET estimates that compare well with EC measurements under these conditions. However, the two-source model, with a more complex treatment of the energy and surface temperature partitioning between the soil and vegetation, outperformed the simpler one-source model in estimating sensible and latent heat fluxes. This is consistent with findings from prior studies. For the one-source model, a time-variant expression of the adjustment parameter (to account for the difference between
International Nuclear Information System (INIS)
Ohki, Naohisa; Harayama, Yasuo; Takeda, Tsuneo; Izumi, Fumio.
1977-12-01
In safety evaluation of a fuel rod, estimation of the stored energy in the fuel rod is indispensable. For this estimation, the temperature distribution in the fuel rod is calculated. Most important in determining the temperature distribution is the gap heat-transfer coefficient (gap conductance) between the pellet surface and the cladding inner surface. Under fuel-rod operating conditions, the mixed gas in the gap is composed of He, Xe, and Kr. He is the initially sealed gas; Xe and Kr are fission-product gases, whose quantities depend on the fuel burn-up. In the GAPCON program series (GAPCON, GAPCON-THERMAL-1 and -2) and FREG-3, these quantities are given as a function of the irradiation time, power rating, and neutron flux in the estimation of the thermal conductivity of the mixed gas. The methods of calculating the quantities of Xe and Kr in these programs have been examined. Input of the neutron flux, which influences the F.P. gas production rates, is better than determination from the fuel-rod power rating. (auth.)
Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J Harry
2016-01-01
Accurate estimation of latent heat flux (LE) based on remote sensing data is critical for characterizing terrestrial ecosystems and modeling land surface processes. Many LE products have been released during the past few decades, but their quality may not meet requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of the Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observations were chosen to evaluate our algorithm, showing that the proposed EOF fusion method is capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts are needed to evaluate and improve the proposed algorithm at larger spatial scales and longer time periods, and over different land cover types.
Yoshioka, Mayumi; Takakura, Shinichi; Uchida, Youhei
2018-05-01
To estimate the groundwater flow around a borehole heat exchanger (BHE), the thermal properties of geological core samples were measured and a thermal response test (TRT) was performed in the Tsukuba upland, Japan. The thermal properties were measured at 57 points along a 50-m-long geological core, consisting predominantly of sand, silt, and clay, drilled near the BHE. In this TRT, the vertical temperature in the BHE was also monitored during and after the test. Results for the thermal properties of the core samples and from the monitoring indicated that groundwater flow enhanced thermal transfer, especially at shallow depths. The groundwater velocities around the BHE were estimated using a two-dimensional numerical model together with monitoring data on temperature changes. The estimated groundwater velocity was generally consistent with hydrogeological data from previous studies, except for the data collected at shallow depths within a clay layer. The discrepancy at shallow depths was attributed to preferential flow and the occurrence of vertical flow through the BHE grout, induced by the hydrogeological conditions.
A Experimental Study of the Growth of Laser Spark and Electric Spark Ignited Flame Kernels.
Ho, Chi Ming
1995-01-01
Better ignition sources are constantly in demand for enhancing spark ignition in practical applications such as automotive and liquid rocket engines. In response to this practical challenge, the present experimental study was conducted with the major objective of obtaining a better understanding of how spark formation, and hence spark characteristics, affect flame kernel growth. Two laser sparks and one electric spark were studied in air, propane-air, propane-air-nitrogen, methane-air, and methane-oxygen mixtures that were initially at ambient pressure and temperature. The growth of the kernels was monitored by imaging the kernels with shadowgraph systems and by imaging the planar laser-induced fluorescence of the hydroxyl radicals inside the kernels. Characteristic dimensions and kernel structures were obtained from these images. Because different energy transfer mechanisms are involved in the formation of a laser spark than in that of an electric spark, a laser spark is insensitive to changes in mixture ratio and mixture type, while an electric spark is sensitive to changes in both. The detailed structures of the kernels in air and propane-air mixtures depend primarily on the spark characteristics, but the combustion heat released rapidly in methane-oxygen mixtures significantly modifies the kernel structure. Uneven spark energy distribution causes remarkably asymmetric kernel structure. The breakdown energy of a spark creates a blast wave that shows good agreement with the numerical point blast solution, and a succeeding complex spark-induced flow that agrees reasonably well with a simple puff model. The transient growth rates of the propane-air, propane-air-nitrogen, and methane-air flame kernels can be interpreted in terms of spark effects, flame stretch, and preferential diffusion. For a given mixture, a spark with higher breakdown energy produces a greater and longer-lasting enhancing effect on the kernel growth rate. By comparing the growth
International Nuclear Information System (INIS)
Gugiu, D.; Dumitrache, I.
2005-01-01
The present work is part of a more complex project related to the replacement of the original stainless steel adjuster rods with cobalt assemblies in the CANDU 6 reactor core. The 60Co produced by 59Co irradiation could be used extensively in medicine and industry. The paper mainly describes some of the reactor physics and safety requirements that must be put into practice for the Co adjuster rods. The computations related to the neutronic equivalence of the stainless steel adjusters with the Co adjuster assemblies, as well as the estimates of the activity and heating of the irradiated cobalt rods, are performed using the Monte Carlo codes MCNP5 and MONTEBURNS 2.1. The activity values are used to evaluate the dose at the surface of the device designed to transport the cobalt adjusters. (authors)
Estimation of the dust production rate from the tungsten armour after repetitive ELM-like heat loads
Pestchanyi, S.; Garkusha, I.; Makhlaj, V.; Landman, I.
2011-12-01
Experimental simulations of the erosion rate of tungsten targets under ITER edge-localized mode (ELM)-like surface heat loads of 0.75 MJ m^-2 (causing surface melting) and of 0.45 MJ m^-2 (without melting) have been performed in the QSPA-Kh50 plasma accelerator. Analytical considerations allow us to conclude that for both energy deposition values the erosion mechanism is solid dust ejection during surface cracking under the action of thermo-stress. A tungsten influx into the ITER containment of N_W ~ 5×10^18 W atoms per medium-size ELM of 0.75 MJ m^-2 and 0.25 ms duration has been estimated. The radiation cooling power of P_rad = 150-300 MW due to such an influx of tungsten is intolerable: it would cool the ITER core to 1 keV within a few seconds.
International Nuclear Information System (INIS)
Shin, Ho Cheol; Park, Moon Ghu; You, Skin
2006-01-01
Recently, many on-line approaches to instrument channel surveillance (drift monitoring and fault detection) have been reported worldwide. The on-line monitoring (OLM) method evaluates instrument channel performance by assessing its consistency with other plant indications through parametric or non-parametric models. The heart of an OLM system is the model that gives an estimate of the true process parameter value from individual measurements. This model gives a process parameter estimate calculated as a function of other plant measurements, which can be used to identify small sensor drifts that would require the sensor to be manually calibrated or replaced. This paper describes an improvement of auto-associative kernel regression (AAKR) obtained by introducing correlation coefficient weighting on the kernel distances. The prediction performance of the developed method is compared with that of conventional auto-associative kernel regression.
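A minimal sketch of the AAKR estimate itself, with the paper's correlation-coefficient weighting represented abstractly as optional per-sensor distance weights (the exact weighting scheme is not reproduced); all names and numbers below are illustrative.

```python
import numpy as np

def aakr_estimate(X_mem, x_query, h, sensor_weights=None):
    # Auto-associative kernel regression: the estimate is a weighted
    # average of historical ("memory") vectors, with Gaussian kernel
    # weights on (optionally per-sensor-weighted) distances to the query.
    if sensor_weights is None:
        sensor_weights = np.ones(X_mem.shape[1])
    d2 = ((X_mem - x_query) ** 2 * sensor_weights).sum(axis=1)
    w = np.exp(-d2 / (2.0 * h**2))
    return (w[:, None] * X_mem).sum(axis=0) / w.sum()

# memory of fault-free data from two correlated sensors (sensor2 = 2*sensor1)
x1 = np.linspace(0.0, 1.0, 51)
X_mem = np.column_stack([x1, 2.0 * x1])

# a query with a drift in sensor 2 (1.3 instead of the consistent 1.0)
est = aakr_estimate(X_mem, np.array([0.5, 1.3]), h=0.1)
```

Because the estimate is a convex combination of fault-free memory vectors, it falls back onto the learned correlation structure; the gap between the query and the estimate is the drift signal used for calibration monitoring.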
DEFF Research Database (Denmark)
Quinonero, Joaquin; Girard, Agathe; Larsen, Jan
2003-01-01
The object of Bayesian modelling is the predictive distribution, which, in a forecasting scenario, enables evaluation of forecasted values and their uncertainties. We focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel based models such as the Gaussian process and the relevance vector machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting...
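For orientation, the baseline (certain-input) GP predictive mean and variance with a Gaussian kernel can be sketched in a few lines; the paper's contribution is the analytic propagation of input uncertainty through these equations, which is not shown here. The length scale, noise level, and data are illustrative.

```python
import numpy as np

def gp_predict(X, y, Xs, h=0.5, noise=1e-4):
    # Standard GP regression predictive equations with RBF kernel
    # k(x, x') = exp(-||x - x'||^2 / (2 h^2)):
    #   mean = k_*^T (K + noise I)^{-1} y
    #   var  = k(x_*, x_*) - k_*^T (K + noise I)^{-1} k_* + noise
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / h**2)
    Kn = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    mean = Ks.T @ np.linalg.solve(Kn, y)
    var = 1.0 - np.einsum('ij,ij->j', Ks, np.linalg.solve(Kn, Ks)) + noise
    return mean, var

# toy data: near the samples the mean tracks the function and the variance
# is small; far away the prediction reverts to the prior (mean 0, var ~ 1)
X = np.linspace(0.0, 5.0, 30)[:, None]
y = np.sin(X[:, 0])
mean, var = gp_predict(X, y, np.array([[2.5], [20.0]]))
```

In iterative multi-step forecasting, each predicted (mean, variance) pair becomes an uncertain input for the next step, which is exactly the recursion the abstract's analytic expressions address.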
Selection and properties of alternative forming fluids for TRISO fuel kernel production
Energy Technology Data Exchange (ETDEWEB)
Baker, M.P. [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); King, J.C., E-mail: kingjc@mines.edu [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Gorman, B.P. [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Marshall, D.W. [Idaho National Laboratory, 2525 N. Fremont Avenue, P.O. Box 1625, Idaho Falls, ID 83415 (United States)
2013-01-15
Highlights: ► Forming fluid selection criteria developed for TRISO kernel production. ► Ten candidates selected for further study. ► Density, viscosity, and surface tension measured for the first time. ► Settling velocity and heat transfer rates calculated. ► Three fluids recommended for kernel production testing. - Abstract: Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension of each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-Bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory
Ngono Mbarga, M. C.; Bup Nde, D.; Mohagir, A.; Kapseu, C.; Elambo Nkeng, G.
2017-01-01
The neem tree, which grows abundantly in India as well as in some regions of Asia and Africa, gives fruits whose kernels contain about 40-50% oil. This oil has high therapeutic and cosmetic value and has recently been projected to be an important raw material for the production of biodiesel. Its seed is harvested at high moisture content, which leads to high post-harvest losses. In this paper, the sorption isotherms are determined by the static gravimetric method at 40, 50, and 60 °C to establish a database useful in defining the drying and storage conditions of neem kernels. Five different equations are validated for modeling the sorption isotherms of neem kernels. The properties of sorbed water, such as the monolayer moisture content, surface area of the adsorbent, number of adsorbed monolayers, and the percentage of bound water, are also determined. The critical moisture content necessary for the safe storage of dried neem kernels is shown to range from 5 to 10% dry basis, which can be obtained at a relative humidity of less than 65%. The isosteric heats of sorption at 5% moisture content are 7.40 and 22.5 kJ/kg for the adsorption and desorption processes, respectively. This work is, to the best of our knowledge, the first to give the important parameters necessary for the drying and storage of neem kernels, a potential raw material for the production of oil for use in pharmaceutics, cosmetics, and biodiesel manufacturing.
Nakanishi, Koichi; Kogure, Akinori; Deuchi, Keiji; Kuwana, Ritsuko; Takamatsu, Hiromu; Ito, Kiyoshi
2015-01-01
We previously developed a method for evaluating the heat resistance of microorganisms by measuring the transition temperature at which the coefficient of linear expansion of a cell changes. Here, we performed heat resistance measurements using a scanning probe microscope with a nano thermal analysis system. The microorganisms studied included six strains of the genus Bacillus or related genera, one strain each of the thermophilic obligate anaerobic bacterial genera Thermoanaerobacter and Moorella, two strains of heat-resistant mold, two strains of non-sporulating bacteria, and one strain of yeast. Both vegetative cells and spores were evaluated. The transition temperature at which the coefficient of linear expansion due to heating changed from a positive value to a negative value correlated strongly with the heat resistance of the microorganism as estimated from the D value. The microorganisms with greater heat resistance exhibited higher transition temperatures. There was also a strong negative correlation between the coefficient of linear expansion and heat resistance in bacteria and yeast, such that microorganisms with greater heat resistance showed lower coefficients of linear expansion. These findings suggest that our method could be useful for evaluating the heat resistance of microorganisms.
Aflatoxin contamination of developing corn kernels.
Amer, M A
2005-01-01
Preharvest contamination of corn with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. The stage of growth and the location of kernels on corn ears were found to be among the important factors in the process of kernel infection with A. flavus and A. parasiticus. The results showed a positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein, and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein contents were reduced in the case of both pathogens. Shoot length, seedling fresh weight, and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected: their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease, while total phenolic compounds increased. Histopathological studies indicated that A. flavus and A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred, and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus and A. parasiticus and aflatoxin production.
Analog forecasting with dynamics-adapted kernels
Zhao, Zhizhen; Giannakis, Dimitrios
2016-09-01
Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
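The kernel-weighted ensemble idea can be sketched in a few lines, assuming a plain Gaussian similarity kernel rather than the dynamics-adapted kernels of the paper; the toy (sin t, cos t) record below stands in for a delay-coordinate embedding, and all parameters are illustrative.

```python
import numpy as np

def analog_forecast(history, current, lead, h=0.1, n_analogs=20):
    # Kernel-weighted analog ensemble: instead of following the single
    # closest historical state (Lorenz's original scheme), average the
    # lead-step successors of the n closest states, weighted by a
    # Gaussian similarity kernel on the initial-state distance.
    usable = history[:len(history) - lead]
    d2 = ((usable - current) ** 2).sum(axis=1)
    idx = np.argsort(d2)[:n_analogs]
    w = np.exp(-d2[idx] / h**2)
    w /= w.sum()
    return (w[:, None] * history[idx + lead]).sum(axis=0)

# toy historical record: a (sin t, cos t) trajectory sampled every 0.01
t = np.arange(0.0, 60.0, 0.01)
history = np.column_stack([np.sin(t), np.cos(t)])

# forecast 1.0 time unit ahead from the state at t = 10
current = np.array([np.sin(10.0), np.cos(10.0)])
pred = analog_forecast(history, current, lead=100)
```

The paper's refinements replace the Euclidean similarity above with kernels built from delay-coordinate maps and the dynamical vector field, and extend the weights out-of-sample via Nyström-type methods.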
Nasim, Wajid; Amin, Asad; Fahad, Shah; Awais, Muhammad; Khan, Naeem; Mubeen, Muhammad; Wahid, Abdul; Turan, Veysel; Rehman, Muhammad Habibur; Ihsan, Muhammad Zahid; Ahmad, Shakeel; Hussain, Sajjad; Mian, Ishaq Ahmad; Khan, Bushra; Jamal, Yousaf
2018-06-01
Climate change has adverse effects at the global, regional, and local levels. Heat wave events contribute substantially to global warming impacts and natural hazards in Pakistan. Historical (1997-2015) heat waves were analyzed over different provinces of Pakistan (Punjab, Sindh, and Baluchistan) to identify the maximum temperature trend. Heat accumulation in Pakistan was simulated with a General Circulation Model (GCM) combined with three greenhouse gas (GHG) Representative Concentration Pathways (RCP-4.5, 6.0, and 8.5) using the SimCLIM model (a statistical downscaling model for future trend projections). Heat accumulation was projected for the years 2030, 2060, and 2090 for seasonal and annual analyses, with projected increases expressed as percentage changes relative to the baseline year (1995). The projections show that Sindh and southern Punjab were most affected by heat accumulation. This study identified a rising trend of heat waves over the period 1997-2015 for Punjab, Sindh, and Baluchistan, showing that most of the meteorological stations in Punjab and Sindh are highly prone to heat waves. According to the model projections, annual heat accumulation was projected to increase by 17%, 26%, and 32% in 2030; by 54%, 49%, and 86% in 2060; and by up to 62%, 75%, and 140% in 2090 for RCP-4.5, RCP-6.0, and RCP-8.5, respectively. Seasonal heat accumulation was projected to be highest in the monsoon season, followed by the pre-monsoon and post-monsoon seasons. Heat accumulation in the monsoon may affect agricultural activities in the study region.
Efficient Kernel-Based Ensemble Gaussian Mixture Filtering
Liu, Bo
2015-11-11
We consider the Bayesian filtering problem for data assimilation following the kernel-based ensemble Gaussian-mixture filtering (EnGMF) approach introduced by Anderson and Anderson (1999). In this approach, the posterior distribution of the system state is propagated with the model using the ensemble Monte Carlo method, providing a forecast ensemble that is then used to construct a prior Gaussian-mixture (GM) based on the kernel density estimator. This results in two update steps: a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, we analyze the influence of the bandwidth parameter of the kernel function on the covariance of the posterior distribution. We then focus on two aspects: i) the efficient implementation of EnGMF with (relatively) small ensembles, where we propose a new deterministic resampling strategy preserving the first two moments of the posterior GM to limit the sampling error; and ii) the analysis of the effect of the bandwidth parameter on contributions of KF and PF updates and on the weights variance. Numerical results using the Lorenz-96 model are presented to assess the behavior of EnGMF with deterministic resampling, study its sensitivity to different parameters and settings, and evaluate its performance against ensemble KFs. The proposed EnGMF approach with deterministic resampling suggests improved estimates in all tested scenarios, and is shown to require less localization and to be less sensitive to the choice of filtering parameters.
Chu, Weiqi; Li, Xiantao
2018-01-01
We present some estimates for the memory kernel function in the generalized Langevin equation, derived using the Mori-Zwanzig formalism from a one-dimensional lattice model in which the particles interact through nearest and second-nearest neighbors. The kernel function can be explicitly expressed in matrix form. The analysis focuses on the decay properties, both spatial and temporal, revealing a power-law behavior in both cases. The dependence on the level of coarse-graining is also studied.
OS X and iOS Kernel Programming
Halvorsen, Ole Henry
2011-01-01
OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i
The Classification of Diabetes Mellitus Using Kernel k-means
Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.
2018-01-01
Diabetes Mellitus is a metabolic disorder characterized by chronically elevated blood glucose. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm. Unlike common k-means, kernel k-means uses kernel learning, which enables it to handle data that are not linearly separable. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means performs well and is considerably better than SOM.
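Kernel k-means as described above needs only the Gram matrix, never the explicit feature map. A minimal sketch (illustrative data, bandwidth, and initialization; not the study's exact setup):

```python
import numpy as np

def kernel_kmeans(K, labels, n_iter=50):
    """Minimal kernel k-means: cluster using only the Gram matrix K,
    so nonlinearly separable data can be handled via a nonlinear kernel.
    `labels` is the initial assignment; k is inferred from it."""
    n, k = K.shape[0], labels.max() + 1
    for _ in range(n_iter):
        # Squared feature-space distance to each implicit cluster mean:
        # ||phi(x_i)-m_c||^2 = K_ii - 2*mean_{j in c} K_ij + mean_{j,l in c} K_jl
        dist = np.full((n, k), np.inf)
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size:
                dist[:, c] = (np.diag(K) - 2 * K[:, idx].mean(axis=1)
                              + K[np.ix_(idx, idx)].mean())
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# Two tight groups; an RBF kernel makes K nearly block-diagonal.
X = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]])
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-d2)  # RBF kernel with an illustrative bandwidth of 1
labels = kernel_kmeans(K, np.array([0, 1, 0, 1, 0, 1]))
```

Starting from a deliberately wrong alternating assignment, the update recovers the two groups in one pass because cross-group kernel values are essentially zero.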
Object classification and detection with context kernel descriptors
DEFF Research Database (Denmark)
Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping
2014-01-01
Context information is important in object representation. By embedding context cue of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial...... consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...
A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature.
Directory of Open Access Journals (Sweden)
Domonkos Tikk
Full Text Available The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein-protein interactions (PPIs reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed - convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study
Non-separable pairing interaction kernels applied to superconducting cuprates
International Nuclear Information System (INIS)
Haley, Stephen B.; Fink, Herman J.
2014-01-01
Highlights: • Non-separable interaction kernels with weak interactions produce HTS. • A probabilistic approach is used in filling the electronic states in the unit cell. • A set of coupled equations is derived which describes the energy gap. • SC properties of separable and non-separable interactions are compared. • There is agreement with measured properties of the SC and normal states. - Abstract: A pairing Hamiltonian H(Γ) with a non-separable interaction kernel Γ produces HTS for relatively weak interactions. The doping and temperature dependence of Γ(x,T) and the chemical potential μ(x) is determined by a probabilistic filling of the electronic states in the cuprate unit cell. A diverse set of HTS and normal-state properties is examined, including the SC phase transition boundary T_C(x), SC gap Δ(x,T), entropy S(x,T), specific heat C(x,T), and spin susceptibility χ_s(x,T). Detailed (x,T) agreement with cuprate experiment is obtained for all properties.
Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu
2017-12-15
Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the fact that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than without it. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method can produce an optimal parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.
Kernel abortion in maize. II. Distribution of ¹⁴C among kernel carbohydrates
International Nuclear Information System (INIS)
Hanft, J.M.; Jones, R.J.
1986-01-01
This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [¹⁴C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [¹⁴C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose
Alumina Concentration Detection Based on the Kernel Extreme Learning Machine.
Zhang, Sen; Zhang, Tao; Yin, Yixin; Xiao, Wendong
2017-09-01
The concentration of alumina in the electrolyte is of great significance during the production of aluminum. An improper alumina concentration may lead to unbalanced material distribution and low production efficiency, and can affect the stability of the aluminum reduction cell and the current efficiency. Existing methods cannot meet the needs of online measurement because industrial aluminum electrolysis is characterized by high temperature, strong magnetic fields, coupled parameters, and high nonlinearity. Currently, there are no sensors or equipment that can detect the alumina concentration online. Most companies obtain the alumina concentration from electrolyte samples analyzed with an X-ray fluorescence spectrometer. To solve this problem, the paper proposes a soft-sensing model based on a kernel extreme learning machine algorithm, which incorporates a kernel function into the extreme learning machine. K-fold cross-validation is used to estimate the generalization error. The proposed soft-sensing algorithm can detect the alumina concentration from electrical signals such as the voltages and currents of the anode rods. The prediction results show that the proposed approach gives more accurate estimates of alumina concentration with faster learning speed than other methods such as the basic ELM, BP, and SVM.
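A common formulation of the kernel extreme learning machine is regularized least squares on the Gram matrix. A minimal sketch under that assumption, with toy data standing in for the paper's soft-sensor signals:

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel ELM sketch in its regularized least-squares form:
    beta = (I/C + K)^-1 y,  f(x) = k(x, X_train) @ beta."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        K = rbf(X, X, self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, y)
        return self

    def predict(self, X):
        return rbf(X, self.X, self.gamma) @ self.beta

# Toy regression: recover a smooth nonlinear signal from 1-D inputs.
X = np.linspace(0, 1, 50)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
model = KELM(C=1e4, gamma=50.0).fit(X, y)
pred = model.predict(X)
```

The hyperparameters C and gamma here are illustrative; in practice they would be chosen by the K-fold cross-validation the abstract mentions.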
Fluidization calculation on nuclear fuel kernel coating
International Nuclear Information System (INIS)
Sukarsono; Wardaya; Indra-Suryawan
1996-01-01
The fluidization of nuclear fuel kernel coating was calculated. The bottom of the reactor was in the form of a cone; on top of the cone there was a cylinder. The diameter of the cylinder for fluidization was 2 cm, and at the upper part of the cylinder it was 3 cm. Fluidization took place in the cone and the first cylinder. The maximum and minimum gas velocities were calculated for varied kernel diameters, and the porosity and bed height were calculated for varied gas stream velocities. The calculation was done with a BASIC program.
Reduced multiple empirical kernel learning machine.
Wang, Zhe; Lu, MingZhe; Gao, Daqi
2015-02-01
Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), where the latter has attracted less attention. In this paper, we focus on MKL with EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and needs less storage space, especially in testing. Finally, the experimental results show that RMEKLM is both efficient and effective in terms of complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3
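The empirical kernel mapping mentioned above builds explicit feature vectors whose dot products reproduce the kernel values. A minimal sketch of the standard eigendecomposition form (not RMEKLM's Gauss-elimination reduction):

```python
import numpy as np

def empirical_kernel_map(K):
    """Empirical kernel mapping: explicit features phi(x_i) such that
    phi(x_i) . phi(x_j) = K_ij, built as K U Lam^{-1/2} from the
    eigendecomposition K = U Lam U^T (near-zero eigenvalues dropped)."""
    vals, vecs = np.linalg.eigh(K)
    keep = vals > 1e-10
    vals, vecs = vals[keep], vecs[:, keep]
    return K @ vecs / np.sqrt(vals)  # rows are the empirical feature vectors

# Illustrative Gram matrix from an RBF kernel on random points.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-d2)
Phi = empirical_kernel_map(K)
```

Because Phi @ Phi.T reproduces K exactly, kernel methods can be rewritten as linear methods on these explicit features, which is the starting point EKM-based MKL works from.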
Chen, Xingyuan; Miller, Gretchen R; Rubin, Yoram; Baldocchi, Dennis D
2012-12-01
The heat pulse method is widely used to measure water flux through plants; it works by using the speed at which a heat pulse is propagated through the system to infer the velocity of water through a porous medium. No systematic, non-destructive calibration procedure exists to determine the site-specific parameters necessary for calculating sap velocity, e.g., wood thermal diffusivity and probe spacing. Such parameter calibration is crucial to obtain the correct transpiration flux density from the sap flow measurements at the plant scale and subsequently to upscale tree-level water fluxes to canopy and landscape scales. The purpose of this study is to present a statistical framework for sampling and simultaneously estimating the tree's thermal diffusivity and probe spacing from in situ heat response curves collected by the implanted probes of a heat ratio measurement device. Conditioned on the time traces of wood temperature following a heat pulse, the parameters are inferred using a Bayesian inversion technique, based on the Markov chain Monte Carlo sampling method. The primary advantage of the proposed methodology is that it does not require knowledge of probe spacing or any further intrusive sampling of sapwood. The Bayesian framework also enables direct quantification of uncertainty in estimated sap flow velocity. Experiments using synthetic data show that repeated tests using the same apparatus are essential for obtaining reliable and accurate solutions. When applied to field conditions, these tests can be obtained in different seasons and can be automated using the existing data logging system. Empirical factors are introduced to account for the influence of non-ideal probe geometry on the estimation of heat pulse velocity, and are estimated in this study as well. The proposed methodology may be tested for its applicability to realistic field conditions, with an ultimate goal of calibrating heat ratio sap flow systems in practical applications.
Niazmardi, S.; Safari, A.; Homayouni, S.
2017-09-01
Crop mapping through classification of Satellite Image Time-Series (SITS) data can provide very valuable information for several agricultural applications, such as crop monitoring, yield estimation, and crop inventory. However, SITS data classification is not straightforward, because different images of a SITS dataset carry different levels of information regarding the classification problem. Moreover, SITS data are four-dimensional and cannot be classified using conventional classification algorithms. To address these issues, in this paper we present a classification strategy based on Multiple Kernel Learning (MKL) algorithms for SITS data classification. In this strategy, different kernels are first constructed from different images of the SITS data and then combined into a composite kernel using the MKL algorithms. The composite kernel, once constructed, can be used for classifying the data with kernel-based classification algorithms. We compared the computational time and the classification performance of the proposed strategy using different MKL algorithms for the purpose of crop mapping. The considered MKL algorithms are: MKL-Sum, SimpleMKL, LPMKL and Group-Lasso MKL. The experimental tests of the proposed strategy on two SITS data sets, acquired by SPOT satellite sensors, showed that this strategy was able to provide better performance when compared to the standard classification algorithm. The results also showed that the optimization method of the MKL algorithm used affects both the computational time and the classification accuracy of this strategy.
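The composite-kernel construction can be sketched as follows, with random stand-in data and uniform weights where an MKL algorithm such as SimpleMKL would learn them from the labels:

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """RBF Gram matrix over the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# A stand-in SITS cube: n pixels observed in 3 images, 4 bands each.
rng = np.random.default_rng(0)
sits = rng.normal(size=(3, 10, 4))  # (images, pixels, bands)

# One base kernel per acquisition date...
base = [rbf_gram(img) for img in sits]

# ...combined into a composite kernel with convex weights (uniform here;
# an MKL algorithm would learn these weights during training).
w = np.full(len(base), 1 / len(base))
K = sum(wi * Ki for wi, Ki in zip(w, base))
```

A convex combination of valid kernels is itself a valid (positive semi-definite) kernel, so the composite K can be passed directly to any kernel-based classifier such as an SVM.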
Feature Selection and Kernel Learning for Local Learning-Based Clustering.
Zeng, Hong; Cheung, Yiu-ming
2011-08-01
The performance of most clustering algorithms relies heavily on the representation of the data in the input space or in the Hilbert space of kernel methods. This paper aims to obtain an appropriate data representation through feature selection or kernel learning within the framework of the Local Learning-Based Clustering (LLC) method (Wu and Schölkopf 2006), which can outperform global learning-based methods when dealing with high-dimensional data lying on a manifold. Specifically, we associate a weight with each feature or kernel and incorporate it into the built-in regularization of the LLC algorithm to take into account the relevance of each feature or kernel for the clustering. Accordingly, the weights are estimated iteratively during the clustering process. We show that the resulting weighted regularization with an additional constraint on the weights is equivalent to a known sparsity-promoting penalty. Hence, the weights of irrelevant features or kernels can be shrunk toward zero. Extensive experiments show the efficacy of the proposed methods on benchmark data sets.
Comparative Analysis of Kernel Methods for Statistical Shape Learning
National Research Council Canada - National Science Library
Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen
2006-01-01
.... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...
Influence of differently processed mango seed kernel meal on ...
African Journals Online (AJOL)
Influence of differently processed mango seed kernel meal on performance response of west African ... and TD( consisted spear grass and parboiled mango seed kernel meal with concentrate diet in a ratio of 35:30:35). ...
On methods to increase the security of the Linux kernel
International Nuclear Information System (INIS)
Matvejchikov, I.V.
2014-01-01
Methods to increase the security of the Linux kernel for the implementation of imposed protection tools have been examined. The methods of incorporation into various subsystems of the kernel on the x86 architecture have been described.
Linear and kernel methods for multi- and hypervariate change detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Canty, Morton J.
2010-01-01
. Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual...... formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution......, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...
Kernel methods in orthogonalization of multi- and hypervariate data
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2009-01-01
A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis...... via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution also known as the kernel trick these inner products between the mappings...... are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...
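The kernel trick described in the two abstracts above can be made concrete with kernel PCA, where centering and projection are expressed through the Gram matrix alone and the nonlinear mapping is never formed explicitly. A minimal sketch; with a linear kernel it reproduces ordinary PCA:

```python
import numpy as np

def kernel_pca(K, n_comp=2):
    """Kernel PCA via the Gram matrix only (the 'kernel trick'):
    center the implicit features, eigendecompose, and return the
    projections onto the leading feature-space principal axes."""
    n = K.shape[0]
    # Double-centering of the Gram matrix centers the implicit features.
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_comp]
    vals, vecs = vals[order], vecs[:, order]
    # Scores: eigenvector * sqrt(eigenvalue) per component.
    return vecs * np.sqrt(np.clip(vals, 0, None))

# Sanity case: a linear kernel K = X X^T recovers ordinary PCA scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3)) @ np.diag([3.0, 1.0, 0.1])
scores = kernel_pca(X @ X.T, n_comp=2)
```

Swapping X @ X.T for a nonlinear kernel function (Gaussian, polynomial, etc.) is all that is needed to obtain the nonlinear variants discussed above; the algorithm itself is unchanged.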
CHARACTERISTIC SIZE OF FLARE KERNELS IN THE VISIBLE AND NEAR-INFRARED CONTINUA
International Nuclear Information System (INIS)
Xu, Yan; Jing, Ju; Wang, Haimin; Cao, Wenda
2012-01-01
In this Letter, we present a new approach to estimate the formation height of visible and near-infrared emission of an X10 flare. The sizes of flare emission cores in three wavelengths are accurately measured during the peak of the flare. The source size is the largest in the G band at 4308 Å and shrinks toward longer wavelengths, namely the green continuum at 5200 Å and NIR at 15600 Å, where the emission is believed to originate from the deeper atmosphere. This size-wavelength variation is likely explained by the direct heating model as electrons need to move along converging field lines from the corona to the photosphere. Therefore, one can observe the smallest source, which in our case is 0.″65 ± 0.″02 in the bottom layer (represented by NIR), and observe relatively larger kernels in upper layers of 1.″03 ± 0.″14 and 1.″96 ± 0.″27, using the green continuum and G band, respectively. We then compare the source sizes with a simple magnetic geometry to derive the formation height of the white-light sources and magnetic pressure in different layers inside the flare loop.
Zhang, Li-wen; Huang, Jing-feng; Guo, Rui-fang; Li, Xin-xing; Sun, Wen-bo; Wang, Xiu-zhen
2013-02-01
The accumulation of thermal time usually represents the local heat resources to drive crop growth. Maps of temperature-based agro-meteorological indices are commonly generated by the spatial interpolation of data collected from meteorological stations with coarse geographic continuity. To solve the critical problems of estimating air temperature (Ta) and filling in missing pixels due to cloudy and low-quality images in growing degree days (GDDs) calculation from remotely sensed data, a novel spatio-temporal algorithm for Ta estimation from Terra and Aqua moderate resolution imaging spectroradiometer (MODIS) data was proposed. This is a preliminary study to calculate heat accumulation, expressed in accumulative growing degree days (AGDDs) above 10 °C, from reconstructed Ta based on MODIS land surface temperature (LST) data. The verification results of maximum Ta, minimum Ta, GDD, and AGDD from MODIS-derived data to meteorological calculation were all satisfied with high correlations over 0.01 significant levels. Overall, MODIS-derived AGDD was slightly underestimated with almost 10% relative error. However, the feasibility of employing AGDD anomaly maps to characterize the 2001-2010 spatio-temporal variability of heat accumulation and estimating the 2011 heat accumulation distribution using only MODIS data was finally demonstrated in the current paper. Our study may supply a novel way to calculate AGDD in heat-related study concerning crop growth monitoring, agricultural climatic regionalization, and agro-meteorological disaster detection at the regional scale.
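Growing degree days and their accumulation follow the standard average-method formula, GDD = max(0, (Tmax + Tmin)/2 - Tbase). A minimal sketch with illustrative temperatures and the study's base of 10 °C:

```python
import numpy as np

def growing_degree_days(tmax, tmin, base=10.0):
    """Daily growing degree days above a base temperature (deg C),
    using the common average method:
    GDD = max(0, (Tmax + Tmin) / 2 - base)."""
    return np.maximum(0.0, (np.asarray(tmax) + np.asarray(tmin)) / 2 - base)

# Accumulated GDD (AGDD) over a short illustrative daily record.
tmax = [24.0, 18.0, 30.0]
tmin = [12.0, 8.0, 20.0]
gdd = growing_degree_days(tmax, tmin, base=10.0)   # [8.0, 3.0, 15.0]
agdd = np.cumsum(gdd)                              # [8.0, 11.0, 26.0]
```

In the study, the daily Tmax and Tmin would come from the Ta fields reconstructed from MODIS LST rather than from station records.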
Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D
2011-12-01
Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
Heat kernel for Newton-Cartan trace anomalies
Energy Technology Data Exchange (ETDEWEB)
Auzzi, Roberto [Dipartimento di Matematica e Fisica, Università Cattolica del Sacro Cuore, Via Musei 41, Brescia, 25121 (Italy); INFN Sezione di Perugia, Via A. Pascoli, Perugia, 06123 (Italy); Nardelli, Giuseppe [Dipartimento di Matematica e Fisica, Università Cattolica del Sacro Cuore, Via Musei 41, Brescia, 25121 (Italy); TIFPA - INFN, Università di Trento,c/o Dipartimento di Fisica, Povo, TN, 38123 (Italy)
2016-07-11
We compute the leading part of the trace anomaly for a free non-relativistic scalar in 2+1 dimensions coupled to a background Newton-Cartan metric. The anomaly is proportional to 1/m, where m is the mass of the scalar. We comment on the implications of a conjectured a-theorem for non-relativistic theories with boost invariance.
Generalized heat kernel coefficients for a new asymptotic expansion
International Nuclear Information System (INIS)
Osipov, Alexander A.; Hiller, Brigitte
2003-01-01
The method which allows for asymptotic expansion of the one-loop effective action W = ln det A is formulated. The positive-definite elliptic operator A = U + M² depends on the external classical fields taking values in the Lie algebra of the internal symmetry group G. Unlike the standard Schwinger-DeWitt method, the more general case with the nondegenerate mass matrix M = diag(m1, m2, ...) is considered. The first coefficients of the new asymptotic series are calculated and their relationship with the Seeley-DeWitt coefficients is clarified.
One Point Isometric Matching with the Heat Kernel
Ovsjanikov, Maks
2010-09-21
A common operation in many geometry processing algorithms consists of finding correspondences between pairs of shapes by finding structure-preserving maps between them. A particularly useful case of such maps is isometries, which preserve geodesic distances between points on each shape. Although several algorithms have been proposed to find approximately isometric maps between a pair of shapes, the structure of the space of isometries is not well understood. In this paper, we show that under mild genericity conditions, a single correspondence can be used to recover an isometry defined on entire shapes, and thus the space of all isometries can be parameterized by one correspondence between a pair of points. Perhaps surprisingly, this result is general, and does not depend on the dimensionality or the genus, and is valid for compact manifolds in any dimension. Moreover, we show that both the initial correspondence and the isometry can be recovered efficiently in practice. This allows us to devise an algorithm to find intrinsic symmetries of shapes, match shapes undergoing isometric deformations, as well as match partial and incomplete models efficiently. Journal compilation © 2010 The Eurographics Association and Blackwell Publishing Ltd.
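On a discrete domain, the heat kernel underlying such shape-matching work is computable directly from the Laplacian spectrum. A minimal graph-based sketch (a stand-in for the manifold setting, not the paper's matching algorithm):

```python
import numpy as np

def heat_kernel(L, t):
    """Heat kernel k_t = exp(-t L) from the eigendecomposition of a
    symmetric (graph) Laplacian L:
    k_t(x, y) = sum_i exp(-lambda_i t) phi_i(x) phi_i(y)."""
    vals, vecs = np.linalg.eigh(L)
    return vecs @ np.diag(np.exp(-t * vals)) @ vecs.T

# Laplacian of a 4-cycle graph (each vertex has degree 2).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A
K = heat_kernel(L, t=0.5)
```

Because the Laplacian spectrum is invariant under isometry, point-wise quantities derived from K (such as its diagonal at several times t) can serve as the isometry-invariant descriptors that correspondence methods of this kind rely on.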
A class of kernel based real-time elastography algorithms.
Kibria, Md Golam; Hasan, Md Kamrul
2015-08-01
In this paper, a novel real-time kernel-based and gradient-based Phase Root Seeking (PRS) algorithm for ultrasound elastography is proposed. The signal-to-noise ratio of the resulting strain image is improved by minimizing the cross-correlation discrepancy between the pre- and post-compression radio frequency signals with an adaptive temporal stretching method, and by employing built-in smoothing through an exponentially weighted neighborhood kernel in the displacement calculation. Unlike conventional PRS algorithms, displacement due to tissue compression is estimated from the root of the weighted average of the zero-lag cross-correlation phases of the corresponding analytic pre- and post-compression window pairs in the neighborhood kernel. In addition to the proposed algorithm, the other time- and frequency-domain elastography algorithms previously proposed by our group (Ara et al., 2013; Hussain et al., 2012; Hasan et al., 2012) are also implemented in real-time in Java, with the computations executed serially or in parallel on multiple processors with efficient memory management. Simulation results using a finite element modeling phantom show that the proposed method significantly improves strain image quality in terms of elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe) and mean structural similarity (MSSIM) for strains as high as 4%, compared to other techniques reported in the literature. Strain images obtained for the experimental phantom as well as for in vivo breast data of malignant or benign masses also show the efficacy of our proposed method over the other reported techniques. Copyright © 2015 Elsevier B.V. All rights reserved.
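The core idea of a zero-lag phase estimator can be sketched in a few lines: for analytic (complex) signals, the phase of the weighted zero-lag cross-correlation between a pre-compression window and its delayed post-compression counterpart is proportional to the subsample delay. A toy example with synthetic narrowband signals and an illustrative exponential kernel (not the authors' exact PRS implementation):

```python
import numpy as np

f0, fs = 5e6, 40e6                 # assumed center / sampling frequencies (Hz)
n = np.arange(64)
true_delay = 0.3                   # delay in samples, standing in for tissue displacement

# Analytic pre- and post-compression signals (post is a delayed copy of pre).
pre = np.exp(1j * 2 * np.pi * (f0 / fs) * n)
post = np.exp(1j * 2 * np.pi * (f0 / fs) * (n - true_delay))

# Exponentially weighted neighborhood kernel for built-in smoothing.
w = np.exp(-0.1 * np.abs(n - n.mean()))

# Weighted zero-lag cross-correlation; its phase encodes the delay.
phase = np.angle(np.sum(w * pre * np.conj(post)))
est_delay = phase / (2 * np.pi * f0 / fs)
print(est_delay)
```

The phase is unambiguous only for delays within half a carrier period, which is why PRS-type methods combine it with temporal stretching and window tracking.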
Mitigation of artifacts in RTM with migration kernel decomposition
Zhan, Ge
2012-01-01
The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory, each with a distinct physical interpretation. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and therefore presents opportunities for improving the quality of RTM images.
Sparse Event Modeling with Hierarchical Bayesian Kernel Methods
2016-01-05
The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on...several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function
Relationship between attenuation coefficients and dose-spread kernels
International Nuclear Information System (INIS)
Boyer, A.L.
1988-01-01
Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels and photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods.
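The convolution dose model described above can be sketched in one dimension (toy kernel and beam profile, purely illustrative):

```python
import numpy as np

x = np.linspace(-5, 5, 101)

# Toy dose-spread kernel, normalized so it deposits all incident energy.
kernel = np.exp(-np.abs(x))
kernel /= kernel.sum()

# Flat primary fluence over a 4-unit-wide beam.
fluence = np.where(np.abs(x) < 2.0, 1.0, 0.0)

# Dose = primary fluence convolved with the dose-spread kernel.
dose = np.convolve(fluence, kernel, mode="same")

# Energy-conservation check of the kind the abstract refers to:
# total dose ~ total fluence (small loss only at the grid edges).
print(dose.sum(), fluence.sum())
```

A normalized kernel is exactly what makes the energy check work: the convolution redistributes dose spatially (penumbra at the beam edges) without creating or destroying energy.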
Fabrication of Uranium Oxycarbide Kernels for HTR Fuel
International Nuclear Information System (INIS)
Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric
2010-01-01
Babcock and Wilcox (B&W) has been producing high-quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-μm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. After these kernels were coated and the coated particles formed into compacts, the fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-μm, 14%-enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-μm, 9.6%-enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural-uranium UCO kernels, which are being used in coating development tests. Successive lots of kernels have demonstrated consistently high quality and have also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production, and following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size were demonstrated. Recently, small-scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Additional modifications have been studied toward the goal of increasing the capacity of the current fabrication line for production of first-core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full-scale fuel fabrication facility.
Bourras, Denis; Eymard, Laurence; Liu, W. Timothy; Dupuis, Hélène
2002-03-01
A new technique was developed to retrieve near-surface instantaneous air temperatures and turbulent sensible heat fluxes using satellite data during the Structure des Échanges Mer-Atmosphère, Propriétés des Hétérogénéités Océaniques: Recherche Expérimentale (SEMAPHORE) experiment, which was conducted in 1993 under mainly anticyclonic conditions. The method is based on a regional, horizontal atmospheric temperature advection model whose inputs are wind vectors, sea surface temperature fields, air temperatures around the region under study, and several constants derived from in situ measurements. The intrinsic rms error of the method is 0.7°C in terms of air temperature and 9 W m⁻² for the fluxes, both at 0.16° × 0.16° and 1.125° × 1.125° resolution. The retrieved air temperature and flux horizontal structures are in good agreement with fields from two operational general circulation models. The application to SEMAPHORE data involves First European Remote Sensing Satellite (ERS-1) wind fields, Advanced Very High Resolution Radiometer (AVHRR) SST fields, and European Centre for Medium-Range Weather Forecasts (ECMWF) air temperature boundary conditions. The rms errors obtained by comparing the estimates with research vessel measurements are 0.3°C and 5 W m⁻².
Energy Technology Data Exchange (ETDEWEB)
Goldman, Charles
2007-03-01
During 2005 and 2006, the PJM Interconnection (PJM) Load Analysis Subcommittee (LAS) examined ways to reduce the costs and improve the effectiveness of its existing measurement and verification (M&V) protocols for Direct Load Control (DLC) programs. The current M&V protocol requires that a PURPA-compliant load research study be conducted every five years for each Load-Serving Entity (LSE). The current protocol is expensive to implement and administer, particularly for mature load control programs, some of which are only marginally cost-effective. There was growing evidence that some LSEs were mothballing or dropping their DLC programs rather than incur the expense associated with the M&V requirement. This project had several objectives: (1) examine the potential for developing deemed savings estimates acceptable to PJM for legacy air conditioning and water heating DLC programs, and (2) explore the development of a collaborative, regional, consensus-based approach for conducting monitoring and verification of load reductions for emerging load management technologies for customers that do not have interval metering capability.