SPLINE, Spline Interpolation Function
International Nuclear Information System (INIS)
Allouard, Y.
1977-01-01
1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to order (2Q-1). The program consists of the following two subprograms: ASPLERQ, the transport-of-relations method for spline interpolation functions, and SPLQ, spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10
[Multimodal medical image registration using cubic spline interpolation method].
He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan
2007-12-01
Based on the characteristics of PET-CT multimodal image series, a novel image registration and fusion method is proposed. The cubic spline interpolation method is first applied to interpolate the PET-CT image series; registration is then carried out using a mutual information algorithm; finally, an improved principal component analysis method is used to fuse the PET-CT multimodal images and enhance the visual effect of the PET image, yielding satisfactory registration and fusion results. Cubic spline interpolation is used in reconstruction to restore the missing information between image slices, which compensates for the shortcomings of previous registration methods, improves the accuracy of the registration, and makes the fused multimodal images more similar to the real image. The cubic spline interpolation method has also been successfully applied in developing a 3D-CRT (3D Conformal Radiation Therapy) system.
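As a rough illustration of the slice-interpolation step described above, the sketch below fits a cubic spline along the slice axis of a synthetic image stack and evaluates it between slices. SciPy stands in for the authors' implementation, and the array values are made up:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical stack of 5 image slices (4x4 pixels each), sampled at
# slice positions z = 0..4.  Values are synthetic, for illustration only.
rng = np.random.default_rng(0)
slices = rng.random((5, 4, 4))
z = np.arange(5)

# One cubic spline per pixel along the slice axis; evaluating it at an
# intermediate position reconstructs a "missing" slice between scans.
cs = CubicSpline(z, slices, axis=0)
mid_slice = cs(1.5)          # interpolated slice between z=1 and z=2

print(mid_slice.shape)       # same in-plane shape as the measured slices
```

At the measured positions the spline reproduces the original slices exactly; between them it gives a smooth C2 transition.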
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting, and model-free methods, but few of these have been applied in the field of LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. A background correction simulation indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) after background correction, ahead of polynomial fitting, Lorentz fitting, and the model-free method. All of these background correction methods acquire larger SBR values than before correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods yield improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776; moreover, the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz
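The core idea, exploiting the smoothness of the continuous background relative to the narrow emission lines, can be sketched as follows. The synthetic spectrum, the window size, and the knot-selection rule below are illustrative assumptions, not the authors' data or exact procedure:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic LIBS-like spectrum: a smooth continuous background plus two
# narrow emission lines (all values are illustrative, not real data).
x = np.linspace(0, 1, 500)
background = 2.0 + 1.5 * np.exp(-((x - 0.5) ** 2) / 0.08)
lines = (8.0 * np.exp(-((x - 0.3) ** 2) / 1e-4)
         + 6.0 * np.exp(-((x - 0.7) ** 2) / 1e-4))
spectrum = background + lines

# Estimate the background with a spline through local minima of coarse
# windows: the lines are narrow, so window minima sit on the background.
window = 25
knots_x, knots_y = [], []
for i in range(0, len(x), window):
    j = i + np.argmin(spectrum[i:i + window])
    knots_x.append(x[j])
    knots_y.append(spectrum[j])
est_background = CubicSpline(knots_x, knots_y)(x)

corrected = spectrum - est_background   # background-subtracted spectrum
```

After subtraction, the narrow lines survive essentially unchanged while the slowly varying background is removed.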
The EH Interpolation Spline and Its Approximation
Directory of Open Access Journals (Sweden)
Jin Xie
2014-01-01
A new interpolation spline with two parameters, called the EH interpolation spline, is presented in this paper. It extends the standard cubic Hermite interpolation spline and inherits its properties. Given fixed interpolation conditions, the shape of the proposed spline can be adjusted by changing the values of the parameters. Moreover, by a new algorithm, the introduced spline can approximate the interpolated function better than the standard cubic Hermite interpolation spline and the quartic Hermite interpolation spline with a single parameter.
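For reference, the standard cubic Hermite interpolation spline that the EH spline extends matches both the function values and the first derivatives at the nodes. A minimal SciPy sketch (sample data assumed):

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Standard cubic Hermite interpolation: the piecewise cubic matches the
# prescribed values y and slopes dy at every node.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
dy = np.cos(x)

h = CubicHermiteSpline(x, y, dy)

# Values and slopes at the nodes are reproduced exactly.
print(float(h(1.0)), float(h.derivative()(1.0)))
```

The EH spline adds two free parameters on top of this construction so the shape between nodes can be tuned without changing the interpolation conditions.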
Knott, Gary D
2000-01-01
A spline is a thin flexible strip composed of a material such as bamboo or steel that can be bent to pass through or near given points in the plane, or in 3-space, in a smooth manner. Mechanical engineers and drafting specialists find such (physical) splines useful in designing and in drawing plans for a wide variety of objects, such as hulls of boats or bodies of automobiles, where smooth curves need to be specified. These days, physical splines are largely replaced by computer software that can compute the desired curves (with appropriate encouragement). The same mathematical ideas used for computing "spline" curves can be extended to allow us to compute "spline" surfaces. The application of these mathematical ideas is rather widespread. Spline functions are central to computer graphics disciplines. Spline curves and surfaces are used in computer graphics renderings for both real and imaginary objects. Computer-aided-design (CAD) systems depend on algorithms for computing spline func...
Positivity Preserving Interpolation Using Rational Bicubic Spline
Directory of Open Access Journals (Sweden)
Samsul Ariffin Abdul Karim
2015-01-01
This paper discusses positivity-preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters in its description, 8 of which are free parameters. The sufficient conditions for positivity are derived on every four-boundary-curve network on the rectangular patch. Numerical comparison with existing schemes has also been done in detail. Based on the Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.
PSPLINE: Princeton Spline and Hermite cubic interpolation routines
McCune, Doug
2017-10-01
PSPLINE is a collection of Spline and Hermite interpolation tools for 1D, 2D, and 3D datasets on rectilinear grids. Spline routines give full control over boundary conditions, including periodic, 1st or 2nd derivative match, or divided difference-based boundary conditions on either end of each grid dimension. Hermite routines take the function value and derivatives at each grid point as input, giving back a representation of the function between grid points. Routines are provided for creating Hermite datasets, with appropriate boundary conditions applied. The 1D spline and Hermite routines are based on standard methods; the 2D and 3D spline or Hermite interpolation functions are constructed from 1D spline or Hermite interpolation functions in a straightforward manner. Spline and Hermite interpolation functions are often much faster to evaluate than other representations using e.g. Fourier series or otherwise involving transcendental functions.
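The boundary-condition control described above can be illustrated with SciPy's `CubicSpline`; this is an analogy to what PSPLINE offers, not PSPLINE's own API:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# One period of a sine, with the endpoint value pinned to the start
# value so the periodic boundary condition is exactly satisfiable.
x = np.linspace(0, 2 * np.pi, 9)
y = np.sin(x)
y[-1] = y[0]

s_natural = CubicSpline(x, y, bc_type='natural')             # S'' = 0 at both ends
s_clamped = CubicSpline(x, y, bc_type=((1, 1.0), (1, 1.0)))  # S' = 1 at both ends
s_periodic = CubicSpline(x, y, bc_type='periodic')           # matched end derivatives

print(float(s_clamped.derivative()(0.0)))   # the prescribed end slope
```

Each choice interpolates the same data but differs in the end intervals, which is exactly the kind of control the PSPLINE spline routines expose per grid dimension.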
Ahmad, Azhar; Azmi, Amirah; Majid, Ahmad Abd.; Hamid, Nur Nadiah Abd
2017-08-01
In this paper, the Nonlinear Schrödinger (NLS) equation with Neumann boundary conditions is solved using the finite difference method (FDM) and the cubic B-spline interpolation method (CuBSIM). The first approach is based on the FDM applied to the time and space discretization with the help of the theta-weighted method. However, our main interest is the second approach, whereby the FDM is applied to the time discretization and a cubic B-spline is utilized as the interpolation function in the space dimension, again with the theta-weighted method. The CuBSIM is shown to be stable by von Neumann stability analysis. The proposed method is tested on a test problem with single soliton motion of the NLS equation. The accuracy of the numerical results is measured by the Euclidean norm and infinity norm. CuBSIM is found to produce more accurate results than the FDM.
Interpolation of natural cubic spline
Directory of Open Access Journals (Sweden)
Arun Kumar
1992-01-01
From the result in [1] it follows that there is a unique quadratic spline which bounds the same area as that of the function. The matching of the area for the cubic spline does not follow from the corresponding result proved in [2]. We obtain cubic splines which preserve the area of the function.
Spline interpolations besides wood model widely used in lactation
Korkmaz, Mehmet
2017-04-01
In this study, spline interpolations, an alternative modeling approach that passes exactly through all data points, are discussed for the lactation curve alongside the widely used Wood model applied to lactation data. These models are the linear spline, quadratic spline, and cubic spline. The observed and estimated values from the spline interpolations and the Wood model are given with their error sums of squares, and the lactation curves of the spline interpolations and the Wood model are shown on the same graph, so the differences can be observed. Estimates for some intermediate values were made using the spline interpolations and the Wood model; with spline interpolations, the estimates of intermediate values can be made more precise. Furthermore, the values predicted by spline interpolation for missing or incorrect observations were very successful compared with the values of the Wood model. Spline interpolations thus offer investigators new ideas and interpretations in addition to the information of the well-known classical analysis.
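A hedged sketch of the comparison: the three-parameter Wood model y(t) = a·t^b·e^(-ct) fitted by least squares versus an interpolating cubic spline that reproduces every observation. The test-day yields below are synthetic, not the study's data:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import curve_fit

# Wood lactation model versus an exact interpolating cubic spline.
def wood(t, a, b, c):
    return a * t**b * np.exp(-c * t)

t = np.arange(1.0, 11.0)                 # months in milk (synthetic)
y = wood(t, 15.0, 0.25, 0.05) + np.array(
    [0.3, -0.2, 0.1, -0.1, 0.2, -0.3, 0.1, 0.0, -0.1, 0.2])

params, _ = curve_fit(wood, t, y, p0=[10.0, 0.2, 0.05])
spline = CubicSpline(t, y)

# The spline reproduces every observation; the 3-parameter Wood fit cannot.
sse_wood = float(np.sum((wood(t, *params) - y) ** 2))
sse_spline = float(np.sum((spline(t) - y) ** 2))
print(sse_wood, sse_spline)
```

This is the error-sum-of-squares contrast the abstract describes: zero (to rounding) for the interpolating splines, nonzero for the fitted Wood curve.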
Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines
Julio L. Guardado; William T. Sommers
1977-01-01
The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...
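The "simple distance weighting procedure" for the initial guess field is presumably of the inverse-distance type sketched below; the exact weighting used in the paper is not specified, so the power and tie-breaking constant here are generic assumptions:

```python
import numpy as np

# Inverse-distance weighting of scattered observations onto grid nodes,
# as a minimal sketch of an initial-guess-field construction.
def idw(obs_xy, obs_val, grid_xy, power=2.0, eps=1e-12):
    # Distance from every grid node to every observation site.
    d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=2)
    w = 1.0 / (d**power + eps)            # eps avoids division by zero
    return (w @ obs_val) / w.sum(axis=1)

obs_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
obs_val = np.array([1.0, 2.0, 3.0, 4.0])
grid = np.array([[0.5, 0.5], [0.0, 0.0]])
print(idw(obs_xy, obs_val, grid))
```

A node equidistant from all sites gets their mean; a node coinciding with a site recovers that site's value, which is the behavior an initial guess field needs before the leapfrog correction pass.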
Some splines produced by smooth interpolation
Czech Academy of Sciences Publication Activity Database
Segeth, Karel
2018-01-01
Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords: smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub
Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng
2017-04-01
A novel high-speed interpolation algorithm for B-spline curves on high-grade CNC machine tools is introduced. In existing high-grade CNC systems, handling the data points of a B-spline curve is cumbersome and the control precision is limited; the proposed method is designed to solve these problems. Simulation of specific examples in MATLAB 7.0 showed that the interpolation error is significantly reduced and the control precision is markedly improved, satisfying the real-time requirements of high-speed, high-accuracy interpolation.
Shape Designing of Engineering Images Using Rational Spline Interpolation
Directory of Open Access Journals (Sweden)
Muhammad Sarfraz
2015-01-01
In modern days, engineers encounter a remarkable range of different engineering problems, such as the study of structures and structural properties and the design of different engineering images, for example, automotive images, aerospace industrial images, architectural designs, shipbuilding, and so forth. This paper proposes an interactive curve scheme for designing engineering images. The proposed scheme supports object design not just in the area of engineering, but is equally useful for other areas including image processing (IP), Computer Graphics (CG), Computer-Aided Engineering (CAE), Computer-Aided Manufacturing (CAM), and Computer-Aided Design (CAD). As a method, a piecewise rational cubic spline interpolant with four shape parameters has been proposed. The method provides effective results together with the effects of derivatives and shape parameters on the shape of the curves in a local and global manner. The spline method, due to its most generalized description, recovers various existing rational spline methods and serves as an alternative to various other methods including v-splines, gamma splines, weighted splines, and beta splines.
Ahmad, Azhar; Azmi, Amirah; Majid, Ahmad Abd.; Hamid, Nur Nadiah Abd
2017-04-01
In this paper, the Nonlinear Schrödinger (NLS) equation with Neumann boundary conditions is solved using the cubic B-spline interpolation method (CuBSIM) and the finite difference method (FDM). Firstly, the FDM is applied to the time discretization and a cubic B-spline is utilized as the interpolation function in the space dimension with the help of the theta-weighted method. The second approach is based on the FDM applied to the time and space discretization with the help of the theta-weighted method. The CuBSIM is shown to be stable by von Neumann stability analysis. The proposed method is tested on the interaction of the dual solitons of the NLS equation. The accuracy of the numerical results is measured by the Euclidean norm and infinity norm. CuBSIM is found to produce more accurate results than the FDM.
Interpolation in numerical optimization. [by cubic spline generation
Hall, K. R.; Hull, D. G.
1975-01-01
The present work discusses the generation of the cubic-spline interpolator in numerical optimization methods which use a variable-step integrator with step size control based on local relative truncation error. An algorithm for generating the cubic spline with successive over-relaxation is presented which represents an improvement over that given by Ralston and Wilf (1967). Rewriting the code reduces the number of N-vectors from eight to one. The algorithm is formulated in such a way that the solution of the linear system set up yields the first derivatives at the nodal points. This method is as accurate as other schemes but requires the minimum amount of storage.
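The linear system whose solution yields the first derivatives at the nodal points follows directly from C2 continuity of the piecewise cubics. A dense-solve sketch with natural end conditions (the SOR iteration of the paper is replaced by `np.linalg.solve` for brevity):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Slope formulation of the natural cubic spline: C2 continuity at the
# interior nodes gives a tridiagonal system in the nodal derivatives m_i.
x = np.array([0.0, 0.7, 1.3, 2.1, 3.0])
y = np.sin(x)
n = len(x)
h = np.diff(x)                 # interval lengths
delta = np.diff(y) / h         # divided differences

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0], A[0, 1], b[0] = 2.0, 1.0, 3.0 * delta[0]         # natural: S''(x0) = 0
A[-1, -2], A[-1, -1], b[-1] = 1.0, 2.0, 3.0 * delta[-1]   # natural: S''(xn) = 0
for i in range(1, n - 1):      # continuity of S'' at interior node i
    A[i, i - 1] = 1.0 / h[i - 1]
    A[i, i] = 2.0 * (1.0 / h[i - 1] + 1.0 / h[i])
    A[i, i + 1] = 1.0 / h[i]
    b[i] = 3.0 * (delta[i - 1] / h[i - 1] + delta[i] / h[i])

m = np.linalg.solve(A, b)      # first derivatives at the nodes
print(m)
```

Because the system is tridiagonal and diagonally dominant, iterative schemes such as the successive over-relaxation used in the paper converge quickly and need only one N-vector of workspace.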
Shape Preserving Interpolation Using C2 Rational Cubic Spline
Directory of Open Access Journals (Sweden)
Samsul Ariffin Abdul Karim
2016-01-01
This paper discusses the construction of a new C2 rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea has been extended to shape-preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters αi, βi, and γi. The sufficient conditions for positivity are derived on one parameter γi, while the other two parameters αi and βi are free parameters that can be used to change the final shape of the resulting interpolating curves. This enables the user to produce many varieties of positive interpolating curves. Cubic spline interpolation with C2 continuity is not able to preserve the shape of positive data. Notably, our scheme is easy to use and does not require knot insertion, and C2 continuity can be achieved by solving tridiagonal systems of linear equations for the unknown first derivatives di, i=1,…,n-1. Comparisons with existing schemes have also been done in detail. From all presented numerical results, the new C2 rational cubic spline gives very smooth interpolating curves compared to some established rational cubic schemes. An error analysis when the function to be interpolated is f(t) ∈ C3[t0, tn] is also investigated in detail.
Directory of Open Access Journals (Sweden)
Shu-Cherng Fang
2010-08-01
We compare univariate L1 interpolating splines calculated on 5-point windows, on 7-point windows and on global data sets using four different spline functionals, namely, ones based on the second derivative, the first derivative, the function value and the antiderivative. Computational results indicate that second-derivative-based 5-point-window L1 splines preserve shape as well as or better than the other types of L1 splines. To calculate second-derivative-based 5-point-window L1 splines, we introduce an analysis-based, parallelizable algorithm. This algorithm is orders of magnitude faster than the previously widely used primal affine algorithm.
Monotonicity preserving splines using rational cubic Timmer interpolation
Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md
2017-08-01
In scientific applications and Computer Aided Design (CAD), users usually need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper, a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. To control the shape of the interpolant, three parameters are introduced. The shape parameters in the description of the rational cubic interpolant are subject to monotonicity constraints. The necessary and sufficient conditions of the rational cubic interpolant are derived, and visually the proposed rational cubic Timmer interpolant gives very pleasing results.
Data interpolation using rational cubic Ball spline with three parameters
Karim, Samsul Ariffin Abdul
2016-11-01
Data interpolation is an important task for scientific visualization. This research introduces a new rational cubic Ball spline scheme with three parameters. The rational cubic Ball spline is used for data interpolation with or without true derivative values. Error estimation shows that the proposed scheme works well and is a very good interpolant to approximate the function. All graphical examples are presented using Mathematica software.
Analysis of ECT Synchronization Performance Based on Different Interpolation Methods
Directory of Open Access Journals (Sweden)
Yang Zhixin
2014-01-01
There are two synchronization methods for electronic transformers in the IEC 60044-8 standard: impulsive synchronization and interpolation. When the impulsive synchronization method is inapplicable, the data synchronization of an electronic transformer can be realized by using the interpolation method. Typical interpolation methods are piecewise linear interpolation, quadratic interpolation, cubic spline interpolation and so on. In this paper, the influences of piecewise linear interpolation, quadratic interpolation and cubic spline interpolation on the data synchronization of an electronic transformer are computed; then the computational complexity, synchronization precision, reliability and application range of the different interpolation methods are analyzed and compared, which can serve as a guide for practical applications.
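The accuracy ranking of the three interpolation orders can be reproduced on a synthetic case: re-sampling a sampled 50 Hz waveform at shifted synchronization instants (the signal and sampling rate below are illustrative assumptions, not the paper's test setup):

```python
import numpy as np
from scipy.interpolate import interp1d

# A 50 Hz waveform sampled at 1 kHz, re-sampled at instants shifted by
# half a sampling period, with the three interpolation orders compared.
t = np.arange(40) * 1e-3                 # 1 kHz sampling grid
u = np.sin(2 * np.pi * 50 * t)           # 50 Hz waveform

t_new = t[2:-2] + 5e-4                   # interior synchronization instants
exact = np.sin(2 * np.pi * 50 * t_new)

errors = {}
for kind in ('linear', 'quadratic', 'cubic'):
    f = interp1d(t, u, kind=kind)
    errors[kind] = float(np.max(np.abs(f(t_new) - exact)))

print(errors)
```

As the paper's comparison suggests, higher interpolation order buys synchronization precision at the cost of computational complexity: the maximum error drops by roughly an order of magnitude from linear to quadratic and again from quadratic to cubic on this smooth signal.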
B-LUT: Fast and low memory B-spline image interpolation.
Sarrut, David; Vandemeulebroucke, Jef
2010-08-01
We propose a fast alternative to B-splines in image processing based on an approximate calculation using precomputed B-spline weights. During B-spline indirect transformation, these weights are efficiently retrieved in a nearest-neighbor fashion from a look-up table, greatly reducing overall computation time. Depending on the application, calculating a B-spline using a look-up table, called B-LUT, will result in an exact or approximate B-spline calculation; in the case of the latter, the obtained accuracy can be controlled by the user. The method is applicable to a wide range of B-spline applications and has very low memory requirements compared to other proposed accelerations. The performance of the proposed B-LUTs was compared to conventional B-splines as implemented in the popular ITK toolkit for the general case of image intensity interpolation. Experiments illustrated that highly accurate B-spline approximation can be obtained while computation time is reduced by a factor of 5-6. The B-LUT source code, compatible with the ITK toolkit, has been made freely available to the community.
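The weight look-up idea can be sketched in one dimension as follows. The cubic B-spline kernel is standard, the LUT size is an arbitrary choice, and the prefiltering step that makes B-spline interpolation exact is deliberately omitted, so this is a sketch of the mechanism rather than the published implementation:

```python
import numpy as np

# Cubic B-spline kernel (the standard piecewise definition).
def bspline3(x):
    ax = np.abs(x)
    return np.where(ax < 1, 2/3 - ax**2 + ax**3 / 2,
                    np.where(ax < 2, (2 - ax)**3 / 6, 0.0))

# Precompute the four tap weights for LUT_SIZE fractional offsets; at
# offset f the taps sit at distances f+1, f, 1-f and 2-f from the sample.
LUT_SIZE = 1024
frac = np.arange(LUT_SIZE) / LUT_SIZE
lut = np.stack([bspline3(frac + 1), bspline3(frac),
                bspline3(1 - frac), bspline3(2 - frac)], axis=1)

def weights(f):
    # Nearest-neighbor fetch from the LUT instead of recomputing the kernel.
    return lut[int(round(f * LUT_SIZE)) % LUT_SIZE]

w = weights(0.25)
print(w, w.sum())   # the four cubic B-spline weights sum to 1
```

The accuracy of the approximation is governed by the LUT resolution, which is exactly the user-controlled trade-off the paper describes.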
Gravity Aided Navigation Precise Algorithm with Gauss Spline Interpolation
Directory of Open Access Journals (Sweden)
WEN Chaobin
2015-01-01
The gravity compensation error equation should be thoroughly solved before high-precision gravity-aided navigation can be studied. A gravity-aided navigation model construction algorithm is proposed, based on approximating the local grid gravity anomaly field with 2D Gauss spline interpolation. The gravity disturbance vector, the standard gravity value error and the Eötvös effect are all compensated in this precision model. The experimental results show that the positioning accuracy is improved by a factor of about 2, the attitude and velocity accuracy is improved by a factor of 1-2, and the positional error is maintained within 100-200 m.
Comparison of interpolation and approximation methods for optical freeform synthesis
Voznesenskaya, Anna; Krizskiy, Pavel
2017-06-01
Interpolation and approximation methods for freeform surface synthesis are analyzed using the developed software tool. A special computer tool is developed, and results of freeform surface modeling with piecewise linear interpolation, piecewise quadratic interpolation, cubic spline interpolation and Lagrange polynomial interpolation are considered. The most accurate interpolation method is recommended. Surface profiles are approximated with the least squares method. The freeform systems are generated in optical design software.
Splines and variational methods
Prenter, P M
2008-01-01
One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation-theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimensions.
Perbaikan Metode Penghitungan Debit Sungai Menggunakan Cubic Spline Interpolation
Directory of Open Access Journals (Sweden)
Budi I. Setiawan
2007-09-01
This paper presents an improved method for measuring river discharge using cubic spline interpolation. The function is used to describe, continuously, the river cross-section profile formed from measurements of distance and depth. With this new method, the cross-sectional area and wetted perimeter of the river are computed more easily, quickly and accurately. Likewise, the inverse function is made available via the Newton-Raphson method, simplifying the calculation of area and perimeter when the river water level is known. The new method can directly compute river discharge using the Manning formula and produce a rating curve. The paper gives an example of a discharge measurement of the Rudeng River, Aceh. This river is about 120 m wide and 7 m deep; at the time of measurement it had a discharge of 41.3 m3/s, and its rating curve follows the formula Q = 0.1649 x H^2.884, where Q is the discharge (m3/s) and H is the water height above the river bed (m).
Interpolating Spline Curve-Based Perceptual Encryption for 3D Printing Models
Directory of Open Access Journals (Sweden)
Giao N. Pham
2018-02-01
With the development of 3D printing technology, 3D printing has recently been applied to many areas of life, including healthcare and the automotive industry. Because of the value of 3D printing, 3D printing models are often attacked by hackers and distributed without agreement from the original providers. Furthermore, certain special models and anti-weapon models in 3D printing must be protected against unauthorized users. Therefore, in order to prevent attacks and illegal copying and to ensure that all access is authorized, 3D printing models should be encrypted before being transmitted and stored. A novel perceptual encryption algorithm for 3D printing models for secure storage and transmission is presented in this paper. A facet of the 3D printing model is extracted to interpolate a spline curve of degree 2 in three-dimensional space that is determined by three control points, the curvature coefficients of degree 2, and an interpolating vector. The three control points, the curvature coefficients and the interpolating vector of the degree-2 spline curve are encrypted by a secret key. The encrypted features of the spline curve are then used to obtain the encrypted 3D printing model by inverse interpolation and geometric distortion. The results of experiments and evaluations prove that the entire 3D triangle model is altered and deformed after the perceptual encryption process. The proposed algorithm is applicable to the various formats of 3D printing models, and the results of the perceptual encryption process are superior to those of previous methods. The proposed algorithm also provides a better method and more security than previous methods.
International Nuclear Information System (INIS)
Gao Wen-Wu; Wang Zhi-Gang
2014-01-01
Based on the multiquadric trigonometric B-spline quasi-interpolant, this paper proposes a meshless scheme for some partial differential equations whose solutions are periodic with respect to the spatial variable. This scheme takes into account the periodicity of the analytic solution by using derivatives of a periodic quasi-interpolant (multiquadric trigonometric B-spline quasi-interpolant) to approximate the spatial derivatives of the equations. Thus, it overcomes the difficulties of the previous schemes based on quasi-interpolation (requiring some additional boundary conditions and yielding unwanted high-order discontinuous points at the boundaries in the spatial domain). Moreover, the scheme also overcomes the difficulty of the meshless collocation methods (i.e., yielding a notorious ill-conditioned linear system of equations for large collocation points). The numerical examples that are presented at the end of the paper show that the scheme provides excellent approximations to the analytic solutions. (general)
Cubic spline interpolation of functions with high gradients in boundary layers
Blatov, I. A.; Zadorin, A. I.; Kitaeva, E. V.
2017-01-01
The cubic spline interpolation of grid functions with high-gradient regions is considered. Uniform meshes are proved to be inefficient for this purpose. In the case of widely applied piecewise uniform Shishkin meshes, asymptotically sharp two-sided error estimates are obtained in the class of functions with an exponential boundary layer. It is proved that the error estimates of traditional spline interpolation are not uniform with respect to a small parameter, and the error can increase indefinitely as the small parameter tends to zero, while the number of nodes N is fixed. A modified cubic interpolation spline is proposed, for which O((ln N/N)4) error estimates that are uniform with respect to the small parameter are obtained.
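A piecewise-uniform Shishkin mesh of the kind analyzed above is easy to construct. A sketch for a boundary layer at x = 0 on [0, 1]; the transition-point constant σ = 2 is a common but not universal choice:

```python
import numpy as np

# Shishkin mesh: half of the N intervals are condensed into [0, tau]
# with tau = min(1/2, sigma * eps * ln N), the rest cover [tau, 1].
def shishkin_mesh(N, eps, sigma=2.0):
    tau = min(0.5, sigma * eps * np.log(N))
    fine = np.linspace(0.0, tau, N // 2 + 1)     # resolves the layer
    coarse = np.linspace(tau, 1.0, N // 2 + 1)   # outside the layer
    return np.concatenate([fine, coarse[1:]])

mesh = shishkin_mesh(N=16, eps=1e-3)
print(mesh[:3], len(mesh))
```

For small eps the fine spacing is O(eps ln N / N), which is what allows spline (and difference) approximations on such meshes to be uniform in the small parameter.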
Kirkpatrick, J. C.
1976-01-01
A tabulation of selected altitude-correlated values of pressure, density, speed of sound, and coefficient of viscosity for each of six models of the atmosphere is presented in block data format. Interpolation for the desired atmospheric parameters is performed by using cubic spline functions. The recursive relations necessary to compute the cubic spline function coefficients are derived and implemented in subroutine form. Three companion subprograms, which form the preprocessor and processor, are also presented. These subprograms, together with the data element, compose the spline fit atmosphere package. Detailed FLOWGM flow charts and FORTRAN listings of the atmosphere package are presented in the appendix.
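In the same spirit, a cubic spline lookup over a small altitude-pressure table can be sketched as below. The tabulated values only approximate a standard atmosphere and are illustrative; the package's actual tables and FORTRAN subroutines are not reproduced here:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Altitude-correlated table lookup via cubic spline (illustrative values
# roughly following a standard atmosphere, not the package's data).
alt_km = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
pressure_pa = np.array([101325.0, 54019.0, 26436.0, 12044.0, 5474.9])

# Interpolating ln(p) keeps the spline close to the exponential decay
# of pressure with altitude.
lnp = CubicSpline(alt_km, np.log(pressure_pa))
p_7km = float(np.exp(lnp(7.0)))
print(p_7km)
```

The same pattern (fit once, evaluate many times between tabulated altitudes) is what the spline-fit atmosphere package provides for pressure, density, speed of sound and viscosity.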
GA Based Rational cubic B-Spline Representation for Still Image Interpolation
Directory of Open Access Journals (Sweden)
Samreen Abbas
2016-12-01
In this paper, an image interpolation scheme is designed for 2D natural images. A local-support rational cubic spline with control parameters is used as the interpolatory function and is optimized using a Genetic Algorithm (GA). The GA is applied to determine appropriate values of the control parameters used in the description of the rational cubic spline. Three state-of-the-art Image Quality Assessment (IQA) models, together with a traditional one, are employed for comparison with existing image interpolation schemes and for a perceptual quality check of the resulting images. The results show that the proposed scheme compares favorably with the existing ones.
Chanthrasuwan, Maveeka; Asri, Nur Asreenawaty Mohd; Hamid, Nur Nadiah Abd; Majid, Ahmad Abd.; Azmi, Amirah
2017-08-01
The cubic B-spline and cubic trigonometric B-spline functions are used to set up the collocation in finding solutions for the Buckmaster equation. These splines are applied as interpolating functions in the spatial dimension while the finite difference method (FDM) is used to discretize the time derivative. The Buckmaster equation is linearized using Taylor's expansion and solved using two schemes, namely Crank-Nicolson and fully implicit. The von Neumann stability analysis is carried out on the two schemes and they are shown to be conditionally stable. In order to demonstrate the capability of the schemes, some problems are solved and compared with analytical and FDM solutions. The proposed methods are found to generate more accurate results than the FDM.
Directory of Open Access Journals (Sweden)
Shu-Cherng Fang
2010-07-01
We analytically investigate univariate C1 continuous cubic L1 interpolating splines calculated by minimizing an L1 spline functional based on the second derivative on 5-point windows. Specifically, we link geometric properties of the data points in the windows with linearity, convexity and oscillation properties of the resulting L1 spline. These analytical results provide the basis for a computationally efficient algorithm for calculation of L1 splines on 5-point windows.
Numerical Methods Using B-Splines
Shariff, Karim; Merriam, Marshal (Technical Monitor)
1997-01-01
The seminar will discuss (1) The current range of applications for which B-spline schemes may be appropriate (2) The property of high-resolution and the relationship between B-spline and compact schemes (3) Comparison between finite-element, Hermite finite element and B-spline schemes (4) Mesh embedding using B-splines (5) A method for the incompressible Navier-Stokes equations in curvilinear coordinates using divergence-free expansions.
quadratic spline finite element method
Directory of Open Access Journals (Sweden)
A. R. Bahadir
2002-01-01
The problem of heat transfer in a Positive Temperature Coefficient (PTC) thermistor, which may form one element of an electric circuit, is solved numerically by a finite element method. The approach used is based on the Galerkin finite element method using quadratic splines as shape functions. The resulting system of ordinary differential equations is solved by the finite difference method. Comparison is made with numerical and analytical solutions, and the accuracy of the computed solutions indicates that the method is well suited for the solution of the PTC thermistor problem.
Natural spline interpolation and exponential parameterization for length estimation of curves
Kozera, R.; Wilkołazka, M.
2017-07-01
This paper tackles the problem of estimating the length of a regular parameterized curve γ from an ordered sample of interpolation points in arbitrary Euclidean space using a natural spline. The corresponding tabular parameters are not given and are approximated by the so-called exponential parameterization (depending on λ ∈ [0, 1]). The respective convergence orders α(λ) for estimating the length of γ are established for curves sampled more-or-less uniformly. The numerical experiments confirm a slow convergence order α(λ) = 2 for all λ ∈ [0, 1) and a cubic order α(1) = 3 once the natural spline is used.
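The setup above can be sketched in a few lines of Python. This is an illustrative reconstruction under the stated definitions (knot increments |q_{i+1} − q_i|^λ for the exponential parameterization, one natural cubic spline per coordinate), not the authors' code:

```python
import bisect
import math

def exp_param(pts, lam):
    """Exponential parameterization: knot increments |q_{i+1} - q_i|^lam."""
    t = [0.0]
    for p, q in zip(pts, pts[1:]):
        t.append(t[-1] + math.dist(p, q) ** lam)
    return t

def natural_spline(t, y):
    """Evaluator for the natural cubic spline through (t_i, y_i)."""
    n = len(t) - 1
    h = [t[i + 1] - t[i] for i in range(n)]
    # Tridiagonal system for the second derivatives M_i, with M_0 = M_n = 0.
    a = [0.0] + [h[i - 1] / 6 for i in range(1, n)] + [0.0]
    b = [1.0] + [(h[i - 1] + h[i]) / 3 for i in range(1, n)] + [1.0]
    c = [0.0] + [h[i] / 6 for i in range(1, n)] + [0.0]
    d = [0.0] + [(y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1]
                 for i in range(1, n)] + [0.0]
    for i in range(1, n + 1):            # Thomas algorithm, forward sweep
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):        # back substitution (M_0 = M_n = 0)
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def f(x):
        i = min(max(bisect.bisect_right(t, x) - 1, 0), n - 1)
        hi = t[i + 1] - t[i]
        A, B = (t[i + 1] - x) / hi, (x - t[i]) / hi
        return (A * y[i] + B * y[i + 1]
                + ((A ** 3 - A) * M[i] + (B ** 3 - B) * M[i + 1]) * hi * hi / 6)
    return f

def spline_length(pts, lam, samples=2000):
    """Chord-sum approximation of the length of the fitted spline."""
    t = exp_param(pts, lam)
    fx = natural_spline(t, [p[0] for p in pts])
    fy = natural_spline(t, [p[1] for p in pts])
    us = [t[-1] * k / samples for k in range(samples + 1)]
    return sum(math.dist((fx(u), fy(u)), (fx(v), fy(v)))
               for u, v in zip(us, us[1:]))

# Quarter unit circle sampled at 9 points; the true length is pi/2.
pts = [(math.cos(math.pi / 2 * k / 8), math.sin(math.pi / 2 * k / 8))
       for k in range(9)]
est = spline_length(pts, lam=1.0)        # lam = 1: cumulative chord length
```

With λ = 1 (cumulative chord length) the estimate lands close to π/2 even for this coarse sample; the natural end conditions limit the accuracy near the endpoints, consistent with the modest orders reported above.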
Spline methods for conservation equations
International Nuclear Information System (INIS)
Bottcher, C.; Strayer, M.R.
1991-01-01
We consider the numerical solution of physical theories, in particular hydrodynamics, which can be formulated as systems of conservation laws. To this end we briefly describe the basis spline and collocation methods, paying particular attention to representation theory, which provides discrete analogues of the continuum conservation and dispersion relations, and hence a rigorous understanding of errors and instabilities. On this foundation we propose an algorithm for hydrodynamic problems in which most linear and nonlinear instabilities are brought under control. Numerical examples are presented from one-dimensional relativistic hydrodynamics. 9 refs., 10 figs
Csébfalvi, Balázs
2010-01-01
In this paper, we demonstrate that quasi-interpolation of orders two and four can be efficiently implemented on the Body-Centered Cubic (BCC) lattice by using tensor-product B-splines combined with appropriate discrete prefilters. Unlike the nonseparable box-spline reconstruction previously proposed for the BCC lattice, the prefiltered B-spline reconstruction can utilize the fast trilinear texture-fetching capability of the recent graphics cards. Therefore, it can be applied for rendering BCC-sampled volumetric data interactively. Furthermore, we show that a separable B-spline filter can suppress the postaliasing effect much more isotropically than a nonseparable box-spline filter of the same approximation power. Although prefilters that make the B-splines interpolating on the BCC lattice do not exist, we demonstrate that quasi-interpolating prefiltered linear and cubic B-spline reconstructions can still provide similar or higher image quality than the interpolating linear box-spline and prefiltered quintic box-spline reconstructions, respectively.
International Nuclear Information System (INIS)
Pohjola, J.; Turunen, J.; Lipping, T.
2009-07-01
This report describes the creation of a digital elevation model of the Olkiluoto area incorporating a large area of seabed. The modeled area covers 960 square kilometers, and the apparent resolution of the created elevation model was specified to be 2.5 x 2.5 meters. Various elevation data, like contour lines and irregular elevation measurements, were used as source data in the process. The precision and reliability of the available source data varied largely. A digital elevation model (DEM) is a representation of the elevation of the surface of the earth in a particular area in digital format. The DEM is an essential component of geographic information systems designed for the analysis and visualization of location-related data. A DEM is most often represented either in raster or Triangulated Irregular Network (TIN) format. After testing several methods, thin plate spline interpolation was found to be best suited for the creation of the elevation model. The thin plate spline method gave the smallest error in a test where a certain number of points was removed from the data, and the resulting model looked most natural. In addition to the elevation data, the confidence interval at each point of the new model was required. The Monte Carlo simulation method was selected for this purpose. The source data points were assigned probability distributions according to what was known about their measurement procedure, and from these distributions 1 000 values (20 000 in the first version) were drawn for each data point. Each point of the newly created DEM thus had as many realizations. The resulting high-resolution DEM will be used in modeling the effects of land uplift and evolution of the landscape in the time range of 10 000 years from the present. This time range comes from the requirements set for the spent nuclear fuel repository site. (orig.)
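The thin plate spline choice can be illustrated with a small self-contained sketch (not the report's implementation): a 2-D TPS interpolant f(x, y) = a0 + a1 x + a2 y + Σ w_i φ(r_i), with φ(r) = r² log r, fitted by solving one dense linear system with the usual side conditions Σw = Σw·x = Σw·y = 0.

```python
import math

def tps_phi(r):
    """Thin plate spline radial basis, with the r -> 0 limit handled."""
    return 0.0 if r == 0.0 else r * r * math.log(r)

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b] for row, b in zip(A, rhs)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_tps(pts, z):
    """Fit f(x, y) = a0 + a1*x + a2*y + sum_i w_i * phi(|p - p_i|)."""
    n = len(pts)
    A = [[0.0] * (n + 3) for _ in range(n + 3)]
    b = [0.0] * (n + 3)
    for i, (xi, yi) in enumerate(pts):
        for j, (xj, yj) in enumerate(pts):
            A[i][j] = tps_phi(math.dist((xi, yi), (xj, yj)))
        A[i][n], A[i][n + 1], A[i][n + 2] = 1.0, xi, yi  # polynomial part
        A[n][i], A[n + 1][i], A[n + 2][i] = 1.0, xi, yi  # side conditions
        b[i] = z[i]
    coef = solve(A, b)
    w, (a0, a1, a2) = coef[:n], coef[n:]
    def f(x, y):
        s = a0 + a1 * x + a2 * y
        for (xi, yi), wi in zip(pts, w):
            s += wi * tps_phi(math.dist((x, y), (xi, yi)))
        return s
    return f

# Hypothetical elevations sampled from a tilted plane z = 2x + 3y + 1:
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.2)]
z = [2 * x + 3 * y + 1 for x, y in pts]
f = fit_tps(pts, z)
```

Because the TPS reproduces linear polynomials exactly and interpolates the data points, `f` recovers the plane everywhere, which is the "most natural surface" behaviour the report exploits.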
Analysis of Spatial Interpolation in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2010-01-01
This paper analyses different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines in addition to the standard linear shape functions usually applied. For the small-strain problem of a vibrating bar, the best results are obtained using quadratic elements. It is shown that for more complex problems, the use of partially negative shape functions is inconsistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. The properties of different interpolation functions are analysed using numerical examples, including the classical cantilevered beam problem.
Solving Dym equation using quartic B-spline and quartic trigonometric B-spline collocation methods
Anuar, Hanis Safirah Saiful; Mafazi, Nur Hidayah; Hamid, Nur Nadiah Abd; Majid, Ahmad Abd.; Azmi, Amirah
2017-08-01
The nonlinear Dym equation is solved numerically using the quartic B-spline (QuBS) and quartic trigonometric B-spline (QuTBS) collocation methods. The QuBS and QuTBS are utilized as interpolating functions in the spatial dimension while the finite difference method (FDM) is applied to discretize the temporal space with the help of a theta-weighted scheme. The nonlinear term in the Dym equation is linearized using Taylor's expansion. Two schemes are performed on both methods, namely Crank-Nicolson and fully implicit. Applying the von Neumann stability analysis, these schemes are found to be conditionally stable. Several numerical examples of different forms are discussed and compared in terms of errors with exact solutions and results from the FDM.
A Meshfree Quasi-Interpolation Method for Solving Burgers’ Equation
Directory of Open Access Journals (Sweden)
Mingzhu Li
2014-01-01
The main aim of this work is to consider a meshfree algorithm for solving Burgers' equation with quartic B-spline quasi-interpolation. Quasi-interpolation is very useful in the study of approximation theory and its applications, since it yields solutions directly without the need to solve any linear system of equations and overcomes the ill-conditioning problem resulting from using the B-spline as a global interpolant. The numerical scheme is presented, using the derivative of the quasi-interpolant to approximate the spatial derivative of the dependent variable and a low-order forward difference to approximate its time derivative. Compared to other numerical methods, the main advantages of our scheme are higher accuracy and lower computational complexity. Meanwhile, the algorithm is very simple and easy to implement, and the numerical experiments show that it is feasible and valid.
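The "no linear system" property of quasi-interpolation can be shown with the cubic analogue of the quartic scheme above (the cubic case is an assumption made here for brevity): the classical rule λ_j = (−f_{j−1} + 8 f_j − f_{j+1})/6 gives the B-spline coefficients directly and reproduces polynomials up to degree 2.

```python
def b3(u):
    """Uniform cubic B-spline centred at 0 (support (-2, 2))."""
    u = abs(u)
    if u < 1:
        return (4 - 6 * u * u + 3 * u ** 3) / 6
    if u < 2:
        return (2 - u) ** 3 / 6
    return 0.0

def quasi_interp(f_vals):
    """Cubic B-spline quasi-interpolant on an integer grid.

    The coefficients lambda_j = (-f_{j-1} + 8 f_j - f_{j+1}) / 6 are local
    formulas: no linear system is solved, unlike global spline interpolation.
    """
    lam = {j: (-f_vals[j - 1] + 8 * f_vals[j] - f_vals[j + 1]) / 6
           for j in range(1, len(f_vals) - 1)}
    def Q(x):
        return sum(c * b3(x - j) for j, c in lam.items())
    return Q

Q = quasi_interp([j * j for j in range(6)])   # samples of f(x) = x^2
```

Since the rule is exact for quadratics, Q matches x² wherever all contributing coefficients exist, e.g. on [2, 3] for this six-sample grid.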
Fuzzy Interpolation and Other Interpolation Methods Used in Robot Calibrations
Directory of Open Access Journals (Sweden)
Ying Bai
2012-01-01
A novel interpolation algorithm, fuzzy interpolation, is presented and compared with other popular interpolation methods widely implemented in industrial robot calibration and manufacturing applications. Different interpolation algorithms have been developed, reported, and implemented in many industrial robot calibration and manufacturing processes in recent years. Most of them look for optimal interpolation trajectories based on known values at given points around a workspace. However, it is rare to build an optimal interpolation result from randomly distributed errors, and this is one of the most popular topics in industrial testing and measurement applications. The fuzzy interpolation algorithm (FIA) reported in this paper provides a convenient and simple way to solve this problem and offers more accurate interpolation results based on given position or orientation errors that are randomly distributed in real time. This method can be implemented in many industrial applications, such as manipulator measurement and calibration, industrial automation, and semiconductor manufacturing processes.
An enhanced splined saddle method
Ghasemi, S. Alireza; Goedecker, Stefan
2011-07-01
We present modifications for the method recently developed by Granot and Baer [J. Chem. Phys. 128, 184111 (2008)], 10.1063/1.2916716. These modifications significantly enhance the efficiency and reliability of the method. In addition, we discuss some specific features of this method. These features provide important flexibilities which are crucial for a double-ended saddle point search method in order to be applicable to complex reaction mechanisms. Furthermore, it is discussed under what circumstances this method might fail to find the transition state, and remedies to avoid such situations are provided. We demonstrate the performance of the enhanced splined saddle method on several examples with increasing complexity: isomerization of ammonia, ethane and cyclopropane molecules, tautomerization of cytosine, the ring opening of cyclobutene, the Stone-Wales transformation of the C60 fullerene, and finally rolling a small NaCl cube on a NaCl(001) surface. All of these calculations are based on density functional theory. The efficiency of the method is remarkable in regard to the reduction of the total computational time.
Pfister, Nicolas; O'Neill, Norman T.; Aube, Martin; Nguyen, Minh-Nghia; Bechamp-Laganiere, Xavier; Besnier, Albert; Corriveau, Louis; Gasse, Geremie; Levert, Etienne; Plante, Danick
2005-08-01
Satellite-based measurements of aerosol optical depth (AOD) over land are obtained from an inversion procedure applied to dense dark vegetation pixels of remotely sensed images. The limited number of pixels over which the inversion procedure can be applied leaves many areas with little or no AOD data. Moreover, satellite coverage by sensors such as MODIS yields only daily images of a given region, with four sequential overpasses required to straddle mid-latitude North America. Ground-based AOD data from AERONET sun photometers are available on a more continuous basis but only at approximately fifty locations throughout North America. The object of this work is to produce a complete and coherent mapping of AOD over North America with a spatial resolution of 0.1 degree and a frequency of three hours by interpolating MODIS satellite-based data together with available AERONET ground-based measurements. Before being interpolated, the MODIS AOD data extracted from different passes are synchronized to the mapping time using analyzed wind fields from the Global Multiscale Model (Meteorological Service of Canada). This approach amounts to a trajectory type of simplified atmospheric dynamics correction method. The spatial interpolation is performed using a weighted least squares method applied to bicubic B-spline functions defined on a rectangular grid. The least squares method enables one to weight the data according to the measurement errors, while the B-spline properties of local support and C2 continuity offer a good approximation of AOD behaviour viewed as a function of time and space.
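A one-dimensional sketch shows how the weights enter the least-squares spline fit (the mapping above is bicubic in two variables; this simplified version solves the weighted normal equations (BᵀWB)c = BᵀWy for a uniform cubic B-spline basis and is not the authors' code):

```python
def b3(u):
    """Uniform cubic B-spline centred at 0 (support (-2, 2))."""
    u = abs(u)
    if u < 1:
        return (4 - 6 * u * u + 3 * u ** 3) / 6
    return (2 - u) ** 3 / 6 if u < 2 else 0.0

def solve(A, rhs):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b] for row, b in zip(A, rhs)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def wls_spline_fit(xs, ys, ws, m):
    """Weighted LS fit of sum_k c_k B3(x - k), k = -1..m+1, on [0, m]."""
    ks = list(range(-1, m + 2))
    B = [[b3(x - k) for k in ks] for x in xs]
    n = len(ks)
    # Weighted normal equations (B^T W B) c = B^T W y.
    AtA = [[sum(w * B[i][r] * B[i][c] for i, w in enumerate(ws))
            for c in range(n)] for r in range(n)]
    Aty = [sum(w * B[i][r] * y for i, (w, y) in enumerate(zip(ws, ys)))
           for r in range(n)]
    coef = solve(AtA, Aty)
    return lambda x: sum(c * b3(x - k) for c, k in zip(coef, ks))

# Quadratic data with one corrupted sample whose weight is set to zero,
# mimicking the downweighting of an unreliable measurement:
xs = [0.1 * i for i in range(41)]
ys = [x * x for x in xs]
ws = [1.0] * 41
ys[20], ws[20] = 100.0, 0.0
f = wls_spline_fit(xs, ys, ws, m=4)
```

The zero-weighted outlier leaves the fit untouched: since x² lies in the cubic spline space, the weighted fit recovers it exactly on [0, 4].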
Smoothing noisy spectroscopic data with many-knot spline method
Energy Technology Data Exchange (ETDEWEB)
Zhu, M.H. [Space Exploration Laboratory, Macau University of Science and Technology, Taipa, Macau (China)], E-mail: peter_zu@163.com; Liu, L.G.; Qi, D.X.; You, Z.; Xu, A.A. [Space Exploration Laboratory, Macau University of Science and Technology, Taipa, Macau (China)
2008-05-15
In this paper, we present the development of a many-knot spline method derived to remove statistical noise from spectroscopic data. This method is an extension of the B-spline method. Compared to the B-spline method, the many-knot spline method is significantly faster.
Analysis of Interpolation Methods in the Image Reconstruction Tasks
Directory of Open Access Journals (Sweden)
V. T. Nguyen
2017-01-01
The article studies interpolation methods used for image reconstruction. These methods were implemented and tested with several images to estimate their effectiveness. The considered interpolation methods are the nearest-neighbor method, the linear method, the cubic B-spline method, the cubic convolution method, and the Lanczos method. For each method, an interpolation kernel (interpolation function) and a frequency response (Fourier transform) are presented. As a result of the experiment, the following conclusions were drawn:
- the nearest-neighbor algorithm is very simple and often used, but the reconstructed images contain artifacts (blurring and haloing);
- the linear method is quick and easy to perform, and it reduces some visual distortion caused by changing image size; despite these advantages, it causes a large amount of interpolation artifacts, such as blurring and haloing;
- the cubic B-spline method provides smoothness of reconstructed images and eliminates the apparent ramp phenomenon, but the interpolation process acts as a low-pass filter that suppresses high-frequency components, leading to fuzzy edges and false artificial traces;
- the cubic convolution method offers interpolation with less distortion, but its algorithm is more complicated and requires more execution time than the nearest-neighbor and linear methods;
- the Lanczos method achieves a high-definition image, but despite this great advantage it requires more execution time than the other interpolation methods.
The result obtained not only shows a comparison of the considered interpolation methods for various aspects, but also enables users to select an appropriate interpolation method for their applications. It is advisable to study further the existing methods and develop new ones using a number of methods
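The kernels being compared are easy to state explicitly. The sketch below applies the nearest-neighbor, linear, and cubic convolution (Keys, a = −1/2) kernels to a 1-D signal; it is illustrative only, not the article's test code:

```python
def nearest(u):
    """Nearest-neighbor kernel: a box of width 1."""
    return 1.0 if -0.5 <= u < 0.5 else 0.0

def linear(u):
    """Linear (triangle) kernel: support (-1, 1)."""
    u = abs(u)
    return 1.0 - u if u < 1 else 0.0

def keys_cubic(u, a=-0.5):
    """Cubic convolution kernel (Keys, a = -1/2): exact for quadratics."""
    u = abs(u)
    if u < 1:
        return (a + 2) * u ** 3 - (a + 3) * u ** 2 + 1
    if u < 2:
        return a * u ** 3 - 5 * a * u ** 2 + 8 * a * u - 4 * a
    return 0.0

def resample(samples, x, kernel, support):
    """Reconstruct a 1-D signal at x from its integer-grid samples."""
    lo = max(0, int(x) - support + 1)
    hi = min(len(samples) - 1, int(x) + support)
    return sum(samples[k] * kernel(x - k) for k in range(lo, hi + 1))

sig = [k * k for k in range(8)]          # samples of f(x) = x^2
v_near = resample(sig, 2.5, nearest, 1)
v_lin = resample(sig, 2.5, linear, 1)
v_cub = resample(sig, 2.5, keys_cubic, 2)
```

At x = 2.5 the nearest-neighbor and linear kernels return 9 and 6.5 respectively, while the cubic convolution kernel recovers the exact value 6.25, which illustrates the accuracy/complexity trade-off discussed in the article.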
International Nuclear Information System (INIS)
Soycan, Arzu; Soycan, Metin
2009-01-01
GIS (Geographical Information System) is one of the most striking innovations for mapping applications supplied by developing computer and software technology. GIS is a very effective tool which can visually combine geographical and non-geographical data and record them to allow interpretation and analysis. The DEM (Digital Elevation Model) is an inalienable component of GIS. An existing TM (Topographic Map) can be used as the main data source for generating a DEM by a manual digitizing or vectorization process for the contour polylines. The aim of this study is to examine DEM accuracies, obtained from TMs, as depending on the number of sampling points and the grid size. For these purposes, the contours of several 1/1000 scaled scanned topographical maps were vectorized. Different DEMs of the relevant area were created by using several datasets with different numbers of sampling points. We focused on DEM creation from contour lines using gridding with RBF (Radial Basis Function) interpolation techniques, namely TPS (Thin Plate Spline) as the surface fitting model. The solution algorithm and a short review of the mathematical model of the TPS interpolation technique are given. In the test study, results of the application and the obtained accuracies are presented and discussed. The initial object of this research is to discuss the requirement of DEMs with high accuracy (a few decimeters) in GIS, urban planning, surveying engineering and other applications. (author)
Schwarz and multilevel methods for quadratic spline collocation
Energy Technology Data Exchange (ETDEWEB)
Christara, C.C. [Univ. of Toronto, Ontario (Canada); Smith, B. [Univ. of California, Los Angeles, CA (United States)
1994-12-31
Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, optimal order of convergence spline collocation methods have been developed for certain degree splines. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems, than the analysis requires, and are very competitive to finite element methods, with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.
Rahan, Nur Nadiah Mohd; Ishak, Siti Noor Shahira; Hamid, Nur Nadiah Abd; Majid, Ahmad Abd.; Azmi, Amirah
2017-04-01
In this research, the nonlinear Benjamin-Bona-Mahony (BBM) equation is solved numerically using the cubic B-spline (CuBS) and cubic trigonometric B-spline (CuTBS) collocation methods. The CuBS and CuTBS are utilized as interpolating functions in the spatial dimension while the standard finite difference method (FDM) is applied to discretize the temporal space. In order to solve the nonlinear problem, the BBM equation is linearized using Taylor's expansion. Applying the von-Neumann stability analysis, the proposed techniques are shown to be unconditionally stable under the Crank-Nicolson scheme. Several numerical examples are discussed and compared with exact solutions and results from the FDM.
Vibration Analysis of Suspension Cable with Attached Masses by Non-linear Spline Function Method
Directory of Open Access Journals (Sweden)
Qin Jian
2016-01-01
The nonlinear strain and stress expressions of a suspension cable are established from the basic conditions of the suspension structure in Lagrange coordinates, and the equilibrium equation of the suspension structure is obtained. The dynamic equations of motion of the suspended cable with attached masses are proposed according to the virtual work principle. Using spline functions as interpolation functions of displacement and spatial position, the spline function method for the dynamic equation of the suspension cable is formed, in which the stiffness matrix is expressed by spline functions. A solution method for the stiffness matrix, a matrix assembly method based on spline integration, is put forward, which saves computation time. The vibration frequency of the suspension cable is calculated with different attached masses, which provides a theoretical basis for determining the safety coefficient of the bearing cable of the cableway.
Directory of Open Access Journals (Sweden)
Ayanori Yorozu
2015-09-01
Falling is a common problem in the growing elderly population, and fall-risk assessment systems are needed for community-based fall prevention programs. In particular, the timed up and go test (TUG) is the clinical test most often used to evaluate the ambulatory ability of elderly individuals in many clinical institutions or local communities. This study presents an improved leg tracking method using a laser range sensor (LRS) for a gait measurement system to evaluate motor function in walk tests, such as the TUG. The system tracks both legs and measures the trajectory of each. However, the legs might be close to each other, and one leg might be hidden from the sensor. This is especially the case during the turning motion in the TUG, where the time that a leg is hidden from the LRS is longer than during straight walking and the moving direction changes rapidly. These situations are likely to lead to false tracking and deteriorate the measurement accuracy of the leg positions. To solve these problems, a novel data association considering gait phase and a Catmull–Rom spline-based interpolation during the occlusion are proposed. From the experimental results with young people, we confirm that the proposed methods can reduce the chances of false tracking. In addition, we verify the measurement accuracy of the leg trajectory compared to a three-dimensional motion analysis system (VICON).
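The Catmull–Rom interpolation used to bridge an occlusion can be sketched as follows. This is the generic uniform Catmull–Rom segment applied componentwise to 2-D positions; the gait-phase data association of the paper is omitted, and the track values are hypothetical:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Uniform Catmull-Rom segment between p1 (t = 0) and p2 (t = 1)."""
    return 0.5 * (2 * p1
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def fill_gap(track, t):
    """Estimate an occluded 2-D position from two observations before the
    gap and two after, interpolating each coordinate independently."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = track
    return (catmull_rom(x0, x1, x2, x3, t), catmull_rom(y0, y1, y2, y3, t))

# Leg positions observed before/after a brief occlusion (made-up values);
# estimate the midpoint of the hidden interval:
track = [(0.0, 0.0), (0.1, 0.05), (0.3, 0.15), (0.4, 0.2)]
mid = fill_gap(track, 0.5)
```

The spline passes exactly through the observed positions at t = 0 and t = 1, so the filled-in segment joins the measured trajectory continuously.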
Kiani, M A; Sim, K S; Nia, M E; Tso, C P
2015-05-01
A new technique based on cubic spline interpolation with Savitzky-Golay smoothing using a weighted least squares error filter is enhanced for scanning electron microscope (SEM) images. A diversity of sample images is captured, and the performance is found to be better than that of the moving average and standard median filters with respect to eliminating noise. The technique can be implemented efficiently on real-time SEM images, with all mandatory data for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. The combined technique is applied to single-image signal-to-noise ratio estimation and noise reduction for the SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding original autocorrelation. In the few test cases involving different images, the efficiency of the developed noise reduction filter proved significantly better than that of the other methods. Noise can be reduced efficiently with an appropriate choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
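The Savitzky-Golay part of the combined filter admits a compact sketch. Shown here is the classic 5-point quadratic window (the authors' actual filter combines this with cubic spline interpolation and weighted least squares):

```python
# Classic 5-point quadratic Savitzky-Golay coefficients: the least-squares
# quadratic fit over the window, evaluated at the centre point.
SG5 = [-3 / 35, 12 / 35, 17 / 35, 12 / 35, -3 / 35]

def savgol5(signal):
    """Smooth the interior of a 1-D signal with the 5-point SG filter;
    the two samples at each edge are left unchanged."""
    out = list(signal)
    for i in range(2, len(signal) - 2):
        out[i] = sum(c * signal[i + j] for j, c in zip(range(-2, 3), SG5))
    return out

smoothed = savgol5([x * x for x in range(10)])
```

Because the window fits a quadratic, any locally quadratic signal passes through unchanged while uncorrelated pixel noise is averaged down, which is why SG smoothing preserves peak shapes better than a plain moving average.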
Spatial interpolation methods for monthly rainfalls and temperatures in Basilicata
Directory of Open Access Journals (Sweden)
Ferrara A
2008-12-01
Spatially interpolated climatic data on grids are important as input in forest modeling because climate spatial variability has a direct effect on productivity and forest growth. Maps of climatic variables can be obtained by different interpolation methods depending on data quality (number of stations, spatial distribution, missing data, etc.) and the topographic and climatic features of the study area. In this paper four methods are compared for interpolating monthly rainfall at regional scale: (1) inverse distance weighting (IDW); (2) regularized spline with tension (RST); (3) ordinary kriging (OK); (4) universal kriging (UK). Besides, an approach to generate monthly surfaces of temperatures over regions of complex terrain and with a limited number of stations is presented. Daily data were gathered over the 1976-2006 period and gaps in the time series were filled in order to obtain monthly mean temperatures and cumulative precipitation. Basic statistics of the monthly dataset and an analysis of the relationship of temperature and precipitation to elevation were performed. A linear relationship was found between temperature and altitude, while no relationship was found between rainfall and elevation. Precipitation was therefore interpolated without taking elevation into account. Based on the root mean squared error for each month, the best method was ranked. Results showed that universal kriging (UK) is the best method for spatial interpolation of rainfall in the study area. Cross validation was then used to compare the prediction performance of three different variogram models (circular, spherical, exponential) using the UK algorithm in order to produce final maps of monthly precipitation. Before interpolation, temperatures were reduced to sea level using the calculated lapse rate and a digital elevation model (DEM). The result of interpolation with RST was then restored to the original elevation with the inverse procedure. To evaluate the quality of the interpolated surfaces a comparison between interpolated and
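The simplest of the four compared methods, IDW, can be sketched directly (the station coordinates and rainfall values below are hypothetical):

```python
def idw(stations, x, y, p=2.0):
    """Inverse distance weighting: weights 1/d^p, exact at station points."""
    num = den = 0.0
    for (sx, sy), val in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return val          # query point coincides with a station
        w = d2 ** (-p / 2)      # 1 / d^p
        num += w * val
        den += w
    return num / den

# Monthly rainfall (mm) at four made-up stations on a unit square:
stations = [((0.0, 0.0), 80.0), ((1.0, 0.0), 100.0),
            ((0.0, 1.0), 60.0), ((1.0, 1.0), 90.0)]
centre = idw(stations, 0.5, 0.5)
```

At the centre all stations are equidistant, so the estimate is the plain mean (82.5 mm); unlike kriging, IDW uses no model of spatial correlation, which is one reason UK outperformed it in this study.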
BS Methods: A New Class of Spline Collocation BVMs
Mazzia, Francesca; Sestini, Alessandra; Trigiante, Donato
2008-09-01
BS methods are a recently introduced class of Boundary Value Methods which is based on B-splines. They can also be interpreted as spline collocation methods. For uniform meshes, the coefficients defining the k-step BS method are just the values of the (k+1)-degree uniform B-spline and B-spline derivative at its integer active knots; for general nonuniform meshes they are computed by solving local linear systems whose dimension depends on k. An important specific feature of BS methods is the possibility to associate a spline of degree k+1 and smoothness Ck to the numerical solution produced by the k-step method of this class. Such spline collocates the differential equation at the knots, shares the convergence order with the numerical solution, and can be computed with negligible additional computational cost. Here a survey on such methods is given, presenting the general definition, the convergence and stability features, and introducing the strategy for the computation of the coefficients in the B-spline basis which define the associated spline. Finally, some related numerical results are also presented.
Revisiting Veerman’s interpolation method
DEFF Research Database (Denmark)
Christiansen, Peter; Bay, Niels Oluf
2016-01-01
This article describes an investigation of Veerman's interpolation method and its applicability for determining sheet metal formability. The theoretical foundation is established and its mathematical assumptions are clarified. An exact Lagrangian interpolation scheme is also established for comparison. Bulge testing and tensile testing of aluminium sheets containing electro-chemically etched circle grids are performed to experimentally determine the forming limit of the sheet material. The forming limit is determined using (a) Veerman's interpolation method, (b) exact Lagrangian interpolation and (c) FE simulations. A comparison of the determined forming limits yields insignificant differences in the limit strain obtained with Veerman's method or exact Lagrangian interpolation for the two sheet metal forming processes investigated. The agreement with the FE simulations is reasonable.
Directory of Open Access Journals (Sweden)
Neng Wan
2014-01-01
In view of the poor geometric adaptability of the spline element method, a geometrically precise spline method, which uses rational Bezier patches to represent the solution domain, is proposed for the two-dimensional viscous incompressible Navier-Stokes equations. Besides fewer unknowns, higher accuracy, and computational efficiency, it possesses such advantages of isogeometric analysis as the accurate representation of the object boundary and the unity of geometric and analysis modeling. Meanwhile, the selection of B-spline basis functions and the grid definition is studied, and a stable discretization format satisfying the inf-sup conditions is proposed. The degree of the spline functions approximating the velocity field is one order higher than that approximating the pressure field, and these functions are defined on a once-refined grid. The Dirichlet boundary conditions are imposed in weak form through the Nitsche variational principle due to the lack of interpolation properties of the B-spline functions. Finally, the validity of the proposed method is verified with some examples.
The Diffraction Response Interpolation Method
DEFF Research Database (Denmark)
Jespersen, Søren Kragh; Wilhjelm, Jens Erik; Pedersen, Peder C.
1998-01-01
Computer modeling of the output voltage in a pulse-echo system is computationally very demanding, particularly when considering reflector surfaces of arbitrary geometry. A new, efficient computational tool, the diffraction response interpolation method (DRIM), for modeling of reflectors in a fluid medium, is presented. The DRIM is based on the velocity potential impulse response method, adapted to pulse-echo applications by the use of acoustical reciprocity. Specifically, the DRIM operates by dividing the reflector surface into planar elements, finding the diffraction response at the corners...
Directory of Open Access Journals (Sweden)
Imtiaz Wasim
2018-01-01
In this study, we introduce a new numerical technique for solving nonlinear generalized Burgers-Fisher and Burgers-Huxley equations using a hybrid B-spline collocation method. This technique is based on the usual finite difference scheme and the Crank-Nicolson method, which are used to discretize the time derivative and spatial derivatives, respectively. Furthermore, the hybrid B-spline function is utilized as the interpolating function in the spatial dimension. The scheme is verified to be unconditionally stable using the von Neumann (Fourier) method. Several test problems are considered to check the accuracy of the proposed scheme. The numerical results are in good agreement with known exact solutions and the existing schemes in the literature.
Fuzzy linguistic model for interpolation
International Nuclear Information System (INIS)
Abbasbandy, S.; Adabitabar Firozja, M.
2007-01-01
In this paper, a fuzzy method for interpolating smooth curves is presented. We present a novel approach to interpolate real data by applying the universal approximation method. In the proposed method, a fuzzy linguistic model (FLM) is applied as a universal approximator for any nonlinear continuous function. Finally, we give some numerical examples and compare the proposed method with the spline method
Evaluation of Teeth and Supporting Structures on Digital Radiograms using Interpolation Methods
Energy Technology Data Exchange (ETDEWEB)
Koh, Kwang Joon [Dept. of Oral and Maxillofacial Radiology, School of Dentistry and Institute of Oral Bio Science , Chonbuk National University, Chonju (Korea, Republic of); Chang, Kee Wan [Dept. of Preventive and Community Dentistry, School of Dentistry and Institute of Oral Bio Science, Chonbuk National University, Chonju (Korea, Republic of)
1999-02-15
To determine the effect of interpolation functions when processing digital periapical images, digital images were obtained with the Digora and CDR systems on a dry skull and a human subject. Three oral radiologists evaluated 3 portions of each image processed using 7 interpolation methods, and ROC curves were obtained by the trapezoidal method. The highest Az value (0.96) was obtained with the cubic spline method and the lowest Az value (0.03) with the facet model method in the Digora system. The highest Az value (0.79) was obtained with the gray segment expansion method and the lowest Az value (0.07) with the facet model method in the CDR system. There was a significant difference in Az value for the original image between the Digora and CDR systems at the alpha = 0.05 level. There were significant differences in Az values between Digora and CDR images with the cubic spline, facet model, linear interpolation and non-linear interpolation methods at the alpha = 0.1 level.
The use of splines to analyze scanning tunneling microscopy data
Wormeester, Herbert; Kip, Gerhardus A.M.; Sasse, A.G.B.M.; van Midden, H.J.P.
1990-01-01
Scanning tunneling microscopy (STM) requires a two-dimensional (2D) image displaying technique for its interpretation. The flexibility and global approximation properties of splines, characteristic of a solid data reduction method as known from cubic spline interpolation, are called for. Splines were
Spline and spline wavelet methods with applications to signal and image processing
Averbuch, Amir Z; Zheludev, Valery A
This volume provides universal methodologies, accompanied by Matlab software, for numerous signal and image processing applications. It is done with discrete and polynomial periodic splines. Various contributions of splines to signal and image processing are presented from a unified perspective. This presentation is based on the Zak transform and on the Spline Harmonic Analysis (SHA) methodology. SHA combines the approximation capabilities of splines with the computational efficiency of the fast Fourier transform. SHA reduces the design of different spline types, such as splines, spline wavelets (SW), wavelet frames (SWF) and wavelet packets (SWP), and their manipulation, to simple operations. Digital filters produced by the wavelet design process give birth to subdivision schemes. Subdivision schemes enable fast explicit computation of splines' values at dyadic and triadic rational points. This is used for signal and image upsampling. In addition to the design of a diverse library of splines, SW, SWP a...
Wu, Guorong; Yap, Pew-Thian; Kim, Minjeong; Shen, Dinggang
2010-02-01
We present an improved MR brain image registration algorithm, called TPS-HAMMER, which is based on the concepts of attribute vectors and the hierarchical landmark selection scheme proposed in the highly successful HAMMER registration algorithm. We demonstrate that the TPS-HAMMER algorithm yields better registration accuracy, robustness, and speed than HAMMER owing to (1) the employment of soft correspondence matching and (2) the utilization of thin-plate splines (TPS) for sparse-to-dense deformation field generation. These two aspects can be integrated into a unified framework to refine the registration iteratively by alternating between soft correspondence matching and dense deformation field estimation. Compared with HAMMER, TPS-HAMMER affords several advantages: (1) unlike the Gaussian propagation mechanism employed in HAMMER, which can be slow and often leaves unreached blotches in the deformation field, the deformation at non-landmark points can be obtained immediately with TPS in our algorithm; (2) the smoothness of the deformation field is preserved due to the nice properties of TPS; (3) possible misalignments can be alleviated by allowing the matching of a landmark with a number of candidate points and enforcing more exact matches in the final stages of the registration. Extensive experiments have been conducted, using the original HAMMER as a comparison baseline, to validate the merits of TPS-HAMMER. The results show that TPS-HAMMER yields significant improvements in both accuracy and speed, indicating high applicability in clinical scenarios. Copyright (c) 2009 Elsevier Inc. All rights reserved.
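The sparse-to-dense step described above, interpolating a dense deformation field from landmark displacements with thin-plate splines, can be sketched with SciPy's RBF interpolator. The landmark positions and displacements below are randomly generated placeholders, not HAMMER data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# hypothetical 2D landmarks and their matched displacements
rng = np.random.default_rng(0)
landmarks = rng.uniform(0, 100, size=(20, 2))
displacements = 0.1 * landmarks + rng.normal(0, 0.5, size=(20, 2))

# thin-plate spline fit: exact at the landmarks, smooth everywhere else
tps = RBFInterpolator(landmarks, displacements, kernel='thin_plate_spline')

# evaluate a dense deformation field on a regular pixel grid
gx, gy = np.meshgrid(np.linspace(0, 100, 64), np.linspace(0, 100, 64))
grid = np.column_stack([gx.ravel(), gy.ravel()])
dense_field = tps(grid).reshape(64, 64, 2)
```

Because the TPS system is solved once, every non-landmark point gets its deformation immediately, which is the advantage (1) cited in the abstract.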
The interpolation damage detection method for frames under seismic excitation
Limongelli, M. P.
2011-10-01
In this paper a new procedure, termed the Interpolation Damage Detecting Method (IDDM), is investigated as a possible means for early detection and location of light damage in a structure struck by an earthquake. Damage is defined in terms of the accuracy of a spline function in interpolating the operational mode shapes (ODSs) of the structure. At a certain location, a statistically meaningful decrease of accuracy with respect to a reference configuration points out a localized variation of the operational shapes, thus revealing the existence of damage. In this paper, the proposed method is applied to a numerical model of a multistory frame, simulating a damaged condition through a reduction of story stiffness. Several damage scenarios have been considered, and the results indicate the effectiveness of the method in assessing and localizing damage for the case of concentrated damage and for low to medium levels of noise in the recorded signals. The main advantage of the proposed algorithm is that it requires neither a numerical model of the structure nor intensive data post-processing or user interaction. The ODSs are calculated from frequency response functions, hence responses recorded on the structure can be used directly without the need for modal identification. Furthermore, the local character of the feature chosen to detect damage makes the IDDM less sensitive to noise and to environmental changes than other damage detection methods. For these reasons the IDDM appears to be a valid option for automated post-earthquake damage assessment, able to provide, after an earthquake, reliable information about the location of damage.
Energy Technology Data Exchange (ETDEWEB)
Hernandez, Andrew M. [Biomedical Engineering Graduate Group, University of California Davis, Sacramento, California 95817 (United States); Boone, John M., E-mail: john.boone@ucdmc.ucdavis.edu [Departments of Radiology and Biomedical Engineering, Biomedical Engineering Graduate Group, University of California Davis, Sacramento, California 95817 (United States)
2014-04-15
Purpose: Monte Carlo methods were used to generate lightly filtered high resolution x-ray spectra spanning from 20 kV to 640 kV. Methods: X-ray spectra were simulated for a conventional tungsten anode. The Monte Carlo N-Particle eXtended radiation transport code (MCNPX 2.6.0) was used to produce 35 spectra over the tube potential range from 20 kV to 640 kV, and cubic spline interpolation procedures were used to create piecewise polynomials characterizing the photon fluence per energy bin as a function of x-ray tube potential. Using these basis spectra and the cubic spline interpolation, 621 spectra were generated at 1 kV intervals from 20 to 640 kV. The tungsten anode spectral model using interpolating cubic splines (TASMICS) produces minimally filtered (0.8 mm Be) x-ray spectra with 1 keV energy resolution. The TASMICS spectra were compared mathematically with other, previously reported spectra. Results: Using paired t-test analyses, no statistically significant difference (i.e., p > 0.05) was observed between compared spectra over energy bins above 1% of peak bremsstrahlung fluence. For all energy bins, the coefficient of determination (R^2) demonstrated good correlation for all spectral comparisons. The mean overall difference (MOD) and mean absolute difference (MAD) were computed over energy bins (above 1% of peak bremsstrahlung fluence) and over all the kV permutations compared. MOD and MAD comparisons with previously reported spectra were 2.7% and 9.7%, respectively (TASMIP), 0.1% and 12.0%, respectively [R. Birch and M. Marshall, “Computation of bremsstrahlung x-ray spectra and comparison with spectra measured with a Ge(Li) detector,” Phys. Med. Biol. 24, 505–517 (1979)], 0.4% and 8.1%, respectively (Poludniowski), and 0.4% and 8.1%, respectively (AAPM TG 195). The effective energy of TASMICS spectra with 2.5 mm of added Al filtration ranged from 17 keV (at 20 kV) to 138 keV (at 640 kV); with 0.2 mm of added Cu filtration the effective energy was 9 keV at 20 kV and 169 keV at 640 kV.
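The per-bin interpolation step described above, fitting a cubic spline across tube potentials and resampling at 1 kV intervals, can be sketched as follows. The fluence numbers are invented for illustration and are not TASMICS data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# hypothetical photon fluence for a single energy bin, tabulated at a
# few simulated tube potentials (arbitrary units, not TASMICS values)
kv_basis = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 120.0, 140.0])
fluence = np.array([0.0, 1.2, 3.8, 7.1, 10.9, 15.2, 19.8])

spline = CubicSpline(kv_basis, fluence)   # piecewise cubic polynomial for this bin
kv_fine = np.arange(20, 141)              # 1 kV steps, as in the TASMICS grid
fluence_fine = spline(kv_fine)
```

In the full model one such spline is built per energy bin, and evaluating all splines at a requested tube potential assembles the interpolated spectrum.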
Formation of Reflecting Surfaces Based on Spline Methods
Zamyatin, A. V.; Zamyatina, E. A.
2017-11-01
The article deals with the problem of generating reflecting-barrier surfaces by spline methods. Cases of reflection where a geometric model applies are considered. The surfaces of reflecting barriers are formed in such a way that they contain given points, and the rays reflected at these points hit defined points of a specified surface. The reflecting barrier surface is formed by cubic splines, which enables a comparatively simple implementation of the proposed algorithms as software applications. The algorithms developed in the article can be applied in architectural and construction design for generating reflecting surfaces in optics and acoustics, provided the geometric model of the reflection process is used correctly.
Solution of higher order boundary value problems by spline methods
Chaurasia, Anju; Srivastava, P. C.; Gupta, Yogesh
2017-10-01
The spline solution of boundary value problems has received much attention in recent years. It has proven to be a powerful tool due to its ease of use and the quality of its results. This paper surveys methods that approximate the solution of higher order BVPs using various spline functions. The purpose of this article is to discuss the problems, as well as the conclusions reached by the numerous authors in the field. We critically assess many important relevant papers published in reputed journals during the last six years.
Comparative Analysis for Robust Penalized Spline Smoothing Methods
Directory of Open Access Journals (Sweden)
Bin Wang
2014-01-01
Smoothing noisy data is commonly encountered in the engineering domain, and robust penalized regression spline models are currently perceived to be among the most promising methods for coping with this issue, due to their flexibility in capturing nonlinear trends in the data and effectively alleviating disturbance from outliers. Against this background, this paper conducts a thorough comparative analysis of two popular robust smoothing techniques, the M-type estimator and S-estimation for penalized regression splines, both of which are re-elaborated starting from their origins, with their derivation processes reformulated and the corresponding algorithms reorganized under a unified framework. The performance of these two estimators is evaluated in terms of fitting accuracy, robustness, and execution time on the MATLAB platform. Comparative experiments demonstrate that robust penalized spline smoothing methods possess resistance to noise compared with the nonrobust penalized LS spline regression method. Furthermore, the M-estimator performs stably only for observations with moderate perturbation error, whereas the S-estimator behaves fairly well even for heavily contaminated observations, at the cost of more execution time. These findings can serve as guidance for selecting the appropriate approach for smoothing noisy data.
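A minimal relative of penalized spline smoothing is the discrete penalized least-squares (Whittaker) smoother below, which shares the second-difference roughness penalty used by P-splines. It is a simplified stand-in for the penalized regression splines discussed above, not the M- or S-estimators themselves:

```python
import numpy as np

def whittaker_smooth(y, lam=10.0):
    # Penalized least squares on equally spaced data:
    # minimize ||y - z||^2 + lam * ||D2 z||^2, where D2 takes
    # second differences (the discrete roughness penalty)
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)   # shape (n - 2, n)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```

Because the penalty annihilates straight lines, constant (and linear) data pass through unchanged, while high-frequency noise is damped; the robust variants in the paper replace the squared loss with M- or S-type losses to resist outliers.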
Systems and methods for interpolation-based dynamic programming
Rockwood, Alyn
2013-01-03
Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.
Selection of an Appropriate Interpolation Method for Rainfall Data In ...
African Journals Online (AJOL)
There are many interpolation methods in use with various limitations and likelihood of errors. This study applied five interpolation methods to existing rainfall data in central Nigeria to determine the most appropriate method that returned the best prediction of rainfall at an ungauged site. The methods include the inverse ...
Rainfall variation by geostatistical interpolation method
Directory of Open Access Journals (Sweden)
Glauber Epifanio Loureiro
2013-08-01
This article analyses the variation of rainfall in the Tocantins-Araguaia hydrographic region over the last two decades, based upon the rain gauge stations of the ANA (Brazilian National Water Agency) HidroWeb database for the years 1983, 1993 and 2003. The information was systemized and treated with hydrologic methods such as contour mapping and interpolation by ordinary kriging. The treatment considered the consistency of the data, the density of the spatial distribution of the stations, and the periods of study. The results demonstrated that the total volume of water precipitated annually did not change significantly over the 20 years analyzed. However, a significant variation occurred in its spatial distribution. Analysis of the isohyets showed a displacement of precipitation at Tocantins Baixo (TOB) of approximately 10% of the total precipitated volume. This displacement may be caused by global change, by anthropogenic activities, or by regional natural phenomena; however, this paper does not explore its possible causes.
Interpolation from Grid Lines: Linear, Transfinite and Weighted Method
DEFF Research Database (Denmark)
Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen
2017-01-01
When two sets of line scans are acquired orthogonal to each other, intensity values are known along the lines of a grid. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines ... of the transfinite method close to grid lines, and the stability of the linear method. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates for two data sets. Depending on the upsampling rate, we show significant differences in the performance of the three methods. We find that transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all relevant upsampling rates.
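The transfinite method evaluated above can be sketched for a single grid cell with the classical Coons blend: add the two ruled surfaces built from opposite boundary lines and subtract the bilinear corner term, so that all four boundary lines are reproduced exactly. This is the textbook construction, not necessarily the authors' exact formulation:

```python
import numpy as np

def transfinite_cell(f_bottom, f_top, f_left, f_right, n):
    # Coons (transfinite) interpolation inside one grid cell, given the
    # intensity values sampled along its four boundary lines (length n each,
    # consistent at the corners). Returns an (n, n) patch.
    s = np.linspace(0.0, 1.0, n)[None, :]   # across columns
    t = np.linspace(0.0, 1.0, n)[:, None]   # down rows
    ruled_v = (1 - t) * f_bottom[None, :] + t * f_top[None, :]
    ruled_h = (1 - s) * f_left[:, None] + s * f_right[:, None]
    corners = ((1 - s) * (1 - t) * f_bottom[0] + s * (1 - t) * f_bottom[-1]
               + (1 - s) * t * f_top[0] + s * t * f_top[-1])
    return ruled_v + ruled_h - corners
```

By construction the patch matches the given intensities along all four grid lines, which is why the transfinite method behaves well close to grid lines.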
Cui, Zhongmin; Kolen, Michael J.
2009-01-01
This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…
[An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].
Xu, Yonghong; Gao, Shangce; Hao, Xiaofei
2016-04-01
Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology of recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. Firstly, we decomposed diffusion tensors, with the direction of tensors represented by quaternions. Then we revised the size and direction of the tensor respectively according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and the determinant of tensors, but also preserve tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
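The Log-Euclidean comparison method named above has a compact form: average the tensors' matrix logarithms and map back with the matrix exponential. The sketch below illustrates that baseline, not the improved spectral quaternion method itself:

```python
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_interp(tensors, weights):
    # Log-Euclidean interpolation of symmetric positive-definite
    # diffusion tensors: a weighted average in the matrix-log domain,
    # mapped back to the SPD manifold with the matrix exponential
    L = sum(w * logm(T) for w, T in zip(weights, tensors))
    return expm(np.real(L))
```

Averaging in the log domain keeps the result positive definite, and the determinant interpolates geometrically, e.g. the midpoint between the identity tensor and 4I is 2I.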
Trivariate Local Lagrange Interpolation and Macro Elements of Arbitrary Smoothness
Matt, Michael Andreas
2012-01-01
Michael A. Matt constructs two trivariate local Lagrange interpolation methods which yield optimal approximation order and Cr macro-elements based on the Alfeld and the Worsey-Farin split of a tetrahedral partition. The first interpolation method is based on cubic C1 splines over type-4 cube partitions, for which numerical tests are given. The second is the first trivariate Lagrange interpolation method using C2 splines. It is based on arbitrary tetrahedral partitions using splines of degree nine. The author constructs trivariate macro-elements based on the Alfeld split, where each tetrahedron
Rufo, Montaña; Antolín, Alicia; Paniagua, Jesús M; Jiménez, Antonio
2018-04-01
A comparative study was made of three interpolation methods, inverse distance weighting (IDW), spline, and ordinary kriging, after optimization of their characteristic parameters. These interpolation methods were used to represent the electric field levels for three emission frequencies (774 kHz, 900 kHz, and 1107 kHz) and for the electrical stimulation quotient, Q_E, characteristic of complex electromagnetic environments. Measurements were made with a spectrum analyser in a village in the vicinity of medium-wave radio broadcasting antennas. The accuracy of the models was quantified by comparing their predictions with levels measured at control points not used to generate the models. The results showed that optimizing the characteristic parameters of each interpolation method allows any of them to be used. However, the best results, in terms of the regression coefficient between each model's predictions and the actual control point field measurements, were obtained with the IDW method. Copyright © 2018 Elsevier Inc. All rights reserved.
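Of the three methods compared above, IDW is the simplest to sketch. The following is a generic implementation in which the distance exponent `power` plays the role of the characteristic parameter to optimize; it is an illustration, not the authors' code:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    # Inverse distance weighting: each prediction is a weighted mean of
    # the measured values, with weights 1 / d**power
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # guard against zero distance
    w = 1.0 / d ** power
    return (w @ z_known) / w.sum(axis=1)
```

IDW predictions are convex combinations of the data, so they never overshoot the measured range, and a query at a measurement point returns (essentially) the measured value.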
Image edges detection through B-Spline filters
International Nuclear Information System (INIS)
Mastropiero, D.G.
1997-01-01
B-spline signal processing was used to detect the edges of a digital image. This technique is based upon processing the image in the spline transform domain, instead of in the space domain (classical processing). The transformation to the spline transform domain means finding the real coefficients that make it possible to interpolate the grey levels of the original image with a B-spline polynomial. There exist basically two methods of carrying out this interpolation, which gives rise to two different spline transforms: an exact interpolation of the grey values (direct spline transform), and an approximate interpolation (smoothing spline transform). The latter results in a smoother grey distribution function defined by the spline transform coefficients, and is carried out with the aim of obtaining an edge detection algorithm with higher immunity to noise. Finally, the transformed image was processed in order to detect the edges of the original image (the gradient method was used), and the results of the three methods (classical, direct spline transform and smoothing spline transform) were compared. As expected, the smoothing spline transform technique produced a detection algorithm more immune to external noise. On the other hand, the direct spline transform technique emphasizes the edges even more than the classical method. As far as computing time is concerned, the classical method is clearly the fastest, and may be applied whenever the presence of noise is not important and edges with high detail are not required in the final image. (author). 9 refs., 17 figs., 1 tab
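SciPy exposes an exact cubic B-spline prefilter that plays the role of the direct spline transform above: it computes the coefficients whose B-spline expansion interpolates the grey levels exactly. Feeding those coefficients to a gradient-magnitude detector gives a hedged sketch of the pipeline on a synthetic test pattern (the threshold value is illustrative):

```python
import numpy as np
from scipy import ndimage

# synthetic test image: a bright square on a dark background
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0

# exact cubic B-spline coefficients (direct-spline-transform analog),
# then a simple gradient-magnitude edge detector with a fixed threshold
coeffs = ndimage.spline_filter(img, order=3)
gy, gx = np.gradient(coeffs)
edges = np.hypot(gx, gy) > 0.2
```

The smoothing variant described in the abstract would replace the exact prefilter with a smoothing-spline fit, trading some edge sharpness for noise immunity.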
Survey: interpolation methods for whole slide image processing.
Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T
2017-02-01
Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis we try to select one interpolation method as the preferred solution. To compare the performance of the interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performed on the modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is best for resizing whole slide images so that they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving the whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
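The round-trip protocol described above (scale down, rescale to the original size with the same algorithm, compare to the original) can be sketched as follows; the test image and scale factor are illustrative, not the survey's data:

```python
import numpy as np
from scipy import ndimage

def roundtrip_rmse(img, scale, order):
    # Downscale then upscale with the same spline interpolation order
    # (order=0 nearest, 1 linear, 3 cubic) and measure the RMS error
    # of the round trip against the original image
    small = ndimage.zoom(img, scale, order=order)
    back = ndimage.zoom(small, [n / m for n, m in zip(img.shape, small.shape)],
                        order=order)
    h = min(back.shape[0], img.shape[0])
    w = min(back.shape[1], img.shape[1])
    diff = back[:h, :w] - img[:h, :w]
    return float(np.sqrt(np.mean(diff ** 2)))
```

On a smooth intensity ramp, cubic interpolation round-trips almost losslessly while nearest-neighbour leaves visible staircase error, which is the kind of difference the survey quantifies.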
Point based interactive image segmentation using multiquadrics splines
Meena, Sachin; Duraisamy, Prakash; Palniappan, Kannappan; Seetharaman, Guna
2017-05-01
Multiquadrics (MQ) are radial basis spline functions that can provide an efficient interpolation of data points located in a high dimensional space. MQ were developed by Hardy to approximate geographical surfaces and for terrain modelling. In this paper we frame the task of interactive image segmentation as semi-supervised interpolation, where an interpolating function learned from user-provided seed points is used to predict the labels of unlabeled pixels, and the spline function used in the semi-supervised interpolation is the MQ. This semi-supervised interpolation framework has a closed form solution which, along with the fact that the MQ is a radial basis spline function, leads to a very fast interactive image segmentation process. Quantitative and qualitative results on standard datasets show that MQ outperforms other regression based methods (GEBS, ridge regression and logistic regression) and popular methods like graph cut, random walk and random forest.
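A minimal sketch of MQ-based semi-supervised labeling follows; the seed coordinates, the +1/-1 label encoding and the shape parameter `c` are illustrative assumptions, not details from the paper:

```python
import numpy as np

def mq_segment(seeds, labels, queries, c=1.0):
    # Multiquadric RBF interpolation with phi(r) = sqrt(r^2 + c^2);
    # the sign of the interpolated value gives the predicted label
    # (+1 object seed, -1 background seed)
    def phi(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.sqrt(d2 + c * c)
    weights = np.linalg.solve(phi(seeds, seeds), labels)   # closed form
    return phi(queries, seeds) @ weights
```

The single linear solve on the seed points is the closed-form solution mentioned in the abstract; labeling every pixel is then one matrix-vector product.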
Energy Technology Data Exchange (ETDEWEB)
Javaid, Zarrar; Unsworth, Charles P., E-mail: c.unsworth@auckland.ac.nz [Department of Engineering Science, The University of Auckland, Auckland 1010 (New Zealand); Boocock, Mark G.; McNair, Peter J. [Health and Rehabilitation Research Center, Auckland University of Technology, Auckland 1142 (New Zealand)
2016-03-15
Purpose: The aim of this work is to demonstrate a new image processing technique that can provide a “near real-time” 3D reconstruction of the articular cartilage of the human knee from MR images which is user friendly. This would serve as a point-of-care 3D visualization tool which would benefit a consultant radiologist in the visualization of the human articular cartilage. Methods: The authors introduce a novel fusion of an adaptation of the contour method known as “contour interpolation (CI)” with radial basis functions (RBFs) which they describe as “CI-RBFs.” The authors also present a spline boundary correction which further enhances volume estimation of the method. A subject cohort consisting of 17 right nonpathological knees (ten female and seven male) is assessed to validate the quality of the proposed method. The authors demonstrate how the CI-RBF method dramatically reduces the number of data points required for fitting an implicit surface to the entire cartilage, thus, significantly improving the speed of reconstruction over the comparable RBF reconstruction method of Carr. The authors compare the CI-RBF method volume estimation to a typical commercial package (3D DOCTOR), Carr’s RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Results: The authors demonstrate how the CI-RBF method significantly reduces the number of data points (p-value < 0.0001) required for fitting an implicit surface to the cartilage, by 48%, 31%, and 44% for the patellar, tibial, and femoral cartilages, respectively. Thus, significantly improving the speed of reconstruction (p-value < 0.0001) by 39%, 40%, and 44% for the patellar, tibial, and femoral cartilages over the comparable RBF model of Carr providing a near real-time reconstruction of 6.49, 8.88, and 9.43 min for the patellar, tibial, and femoral cartilages, respectively. In addition, it is demonstrated how the CI-RBF method matches the volume
Weighted cubic and biharmonic splines
Kvasov, Boris; Kim, Tae-Wan
2017-01-01
In this paper we discuss the design of algorithms for interpolating discrete data using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for automatic selection of the shape control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with the successive over-relaxation method, or by finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate the main features of this original approach.
Cubic B-spline solution for two-point boundary value problem with AOR iterative method
Suardi, M. N.; Radzuan, N. Z. F. M.; Sulaiman, J.
2017-09-01
In this study, the cubic B-spline approximation equation is derived using the cubic B-spline discretization scheme to solve two-point boundary value problems. A system of cubic B-spline approximation equations is then generated from this spline approximation equation in order to obtain the numerical solutions. To do this, the Accelerated Over-Relaxation (AOR) iterative method is used to solve the generated linear system. For comparison, the Gauss-Seidel (GS) iterative method is designated as a control method against the SOR and AOR iterative methods. Two example problems are considered to examine the efficiency of the proposed iterative methods via three parameters: number of iterations, computational time and maximum absolute error. From the numerical results obtained, it can be concluded that the AOR iterative method is slightly more efficient than the SOR iterative method.
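A generic AOR iteration for a linear system like the one generated above can be sketched as follows; the parameter values and the tridiagonal test matrix are illustrative, not the paper's:

```python
import numpy as np

def aor_solve(A, b, r=0.9, omega=1.0, tol=1e-10, max_iter=500):
    # Accelerated Over-Relaxation for A x = b, using the splitting
    # A = D - L - U (D diagonal, L/U strictly lower/upper parts).
    # r = omega recovers SOR; r = 0, omega = 1 recovers Jacobi.
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = D - r * L
    N = (1.0 - omega) * D + (omega - r) * L + omega * U
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = np.linalg.solve(M, N @ x + omega * b)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x
```

Cubic B-spline collocation on a two-point BVP typically yields a diagonally dominant banded system, for which this iteration converges; tuning r and omega is what distinguishes AOR from plain SOR.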
Interpolation decoding method with variable parameters for fractal image compression
International Nuclear Information System (INIS)
He Chuanjiang; Li Gaoping; Shen Xiaona
2007-01-01
The interpolation fractal decoding method, introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13], generates the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first steps of iteration in conventional fractal decoding; hence the constant parameter of the interpolation decoding method must be set to a small value in order to achieve better progressive decoding. However, this requires an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process in the first stages of decoding and then to accelerate it afterwards (e.g., at some chosen iteration). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal.
Numerical treatment of Hunter Saxton equation using cubic trigonometric B-spline collocation method
Hashmi, M. S.; Awais, Muhammad; Waheed, Ammarah; Ali, Qutab
2017-09-01
In this article, the authors propose a computational model based on the cubic trigonometric B-spline collocation method to solve the Hunter-Saxton equation. This nonlinear second-order partial differential equation arises in the modeling of nematic liquid crystals and describes some aspects of orientation waves. The problem is decomposed into a system of linear equations using the cubic trigonometric B-spline collocation method with quasilinearization. To show the efficiency of the proposed method, two numerical examples are tested for different values of t. The results are presented in error tables and graphs and compared with results in the literature. The results are in good agreement with the analytical solution and better than those of Arbabi, Nazari, and Davishi, Optik 127, 5255-5258 (2016). For the current problem, it is also observed that the cubic trigonometric B-spline gives better results than the cubic B-spline.
The modal surface interpolation method for damage localization
Pina Limongelli, Maria
2017-05-01
The Interpolation Method (IM) has been previously proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in the original and in the damaged configurations. If the latter circumstance fails to occur, for example when the structure is subjected to unknown inputs or the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize a concentrated damage. In order to overcome these drawbacks, a modification of the method is herein investigated. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonances the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequency values, hence the relevant ODSs are estimated with higher reliability. Several methods have been proposed to reliably estimate mode shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. In order to reduce or eliminate the drawbacks related to the estimation of the ODSs from noisy signals, this paper investigates a modified version of the method based on a damage feature calculated from the interpolation error of the modal shapes alone, rather than of all the operational shapes in the significant frequency range. The comparison between the results of the IM in its current version (with the interpolation error calculated by summing the contributions of all the operational shapes) and in the newly proposed version (with the estimation of the interpolation error limited to the modal shapes) is reported.
Input point distribution for regular stem form spline modeling
Directory of Open Access Journals (Sweden)
Karel Kuželka
2015-04-01
Full Text Available Aim of study: To optimize an interpolation method and distribution of measured diameters to represent the regular stem form of coniferous trees using a set of discrete points. Area of study: Central-Bohemian highlands, Czech Republic; a region that represents average stand conditions of production forests of Norway spruce (Picea abies [L.] Karst.) in central Europe. Material and methods: The accuracy of stem curves modeled using natural cubic splines from a set of measured diameters was evaluated for 85 closely measured stems of Norway spruce using five statistical indicators and compared to the accuracy of three additional models based on different spline types selected for their ability to represent stem curves. The optimal positions to measure diameters were identified using an aggregate objective function approach. Main results: The optimal positions of the input points vary depending on the properties of each spline type. If the optimal input points for each spline are used, then all spline types are able to give reasonable results with higher numbers of input points. The commonly used natural cubic spline was outperformed by the other spline types. The lowest errors occur when interpolating the points using the Catmull-Rom spline, which gives accurate and unbiased volume estimates, even with only five input points. Research highlights: The study contributes to a more accurate representation of stem form and therefore more accurate estimation of stem volume using data obtained from terrestrial imagery or other close-range remote sensing methods.
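A uniform Catmull-Rom segment can be evaluated directly from four consecutive measured diameters. The diameter values below are illustrative, not from the study's data.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a uniform Catmull-Rom segment between p1 and p2 for t in [0, 1].

    The curve passes through its control points (t=0 gives p1, t=1 gives p2),
    which is why it can reproduce a stem profile directly from measured diameters.
    """
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

# Diameters (cm) at four consecutive heights along a stem -- illustrative values
d = np.array([32.0, 28.5, 24.0, 17.5])
mid = catmull_rom(*d, 0.5)      # interpolated diameter between the inner points
```

Chaining such segments over all measured diameters yields the stem curve; integrating the cross-sectional area along it gives the volume estimate.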
Comparison of interpolation methods for raster images scaling
Directory of Open Access Journals (Sweden)
Trubakov A.O.
2017-03-01
Full Text Available The article is devoted to the problem of efficient scaling of raster images. We consider some negative effects related to the scaling of raster images. In addition, we analyse several methods that are used to increase the size of raster images, among them the nearest neighbor algorithm, bilinear interpolation, and bicubic interpolation. We describe our research methodology and then present the results of the comparison of the algorithms. We use two criteria: the quality of the output images and the performance of the algorithms. Based on this research, we offer some recommendations on the choice of algorithms for enlarging raster images, which is useful because there is no single universal algorithm that solves the problem efficiently.
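A minimal bilinear upscaler illustrates the middle option of the three compared: nearest neighbor replaces the fractional weights with rounding, and bicubic extends the same idea to a 4×4 neighborhood.

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Scale a 2-D grayscale image with bilinear interpolation."""
    h, w = img.shape
    # Map each output pixel back to fractional source coordinates
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # Blend the four surrounding pixels with the fractional weights
    top = (1 - wx) * img[np.ix_(y0, x0)] + wx * img[np.ix_(y0, x1)]
    bot = (1 - wx) * img[np.ix_(y1, x0)] + wx * img[np.ix_(y1, x1)]
    return (1 - wy) * top + wy * bot

img = np.arange(16, dtype=float).reshape(4, 4)   # a ramp image for checking
big = bilinear_resize(img, 7, 7)
```

On a ramp image the result is exact, since bilinear interpolation reproduces functions that are linear in each coordinate.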
Lyra, Gustavo Bastos; Correia, Tamíres Partelli; de Oliveira-Júnior, José Francisco; Zeri, Marcelo
2017-11-01
Five deterministic methods of spatial interpolation of monthly rainfall were compared over the state of Rio de Janeiro, southeast Brazil. The methods were the inverse distance weight (IDW), nearest neighbor (NRN), triangulation with linear interpolation (TLI), natural neighbor (NN), and spline tension (SPT). A set of 110 weather stations was used to test the methods. The selection of stations had two criteria: time series longer than 20 years and period of data from 1960 to 2009. The methods were evaluated using cross-validation, linear regression between observed and interpolated values, root mean square error (RMSE), coefficient of determination (r²), coefficient of variation (CV, %), and the Willmott index of agreement (d). The results from different methods are influenced by the meteorological systems and their seasonality, as well as by the interaction with the topography. The methods presented higher precision (r²) and accuracy (d, RMSE) during the summer and the transition to autumn, in comparison with the winter or spring months. The SPT had the highest precision and accuracy in relation to the other methods, in addition to having a good representation of the spatial patterns expected for rainfall over the complex terrain of the state and its high spatial variability.
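IDW, the first of the five compared methods, fits in a few lines. The station coordinates and rainfall values below are hypothetical, not the study's data.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """Inverse distance weighting: z(s0) = sum(w_i z_i) / sum(w_i), w_i = d_i^(-p).

    A coincident station dominates the weights, so the station value is
    effectively returned there; the clip only avoids division by zero.
    """
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)
    w = d ** (-power)
    return (w @ z_obs) / w.sum(axis=1)

# Monthly rainfall (mm) at five hypothetical stations, coordinates in km
xy_obs = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], dtype=float)
z_obs = np.array([120.0, 95.0, 140.0, 110.0, 125.0])
targets = np.array([[5.0, 5.0], [2.0, 3.0]])
z = idw(xy_obs, z_obs, targets)   # IDW predictions stay within the data range
```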
Optimal interpolation method for intercomparison of atmospheric measurements.
Ridolfi, Marco; Ceccherini, Simone; Carli, Bruno
2006-04-01
Intercomparison of atmospheric measurements is often a difficult task because of the different spatial response functions of the experiments considered. We propose a new method for comparison of two atmospheric profiles characterized by averaging kernels with different vertical resolutions. The method minimizes the smoothing error induced by the differences in the averaging kernels by exploiting an optimal interpolation rule to map one profile into the retrieval grid of the other. Compared with the techniques published so far, this method permits one to retain the vertical resolution of the less-resolved profile involved in the intercomparison.
Evaluation of Nonlinear Methods for Interpolation of Catchment-Scale
Coleman, M. L.; Niemann, J. D.
2008-12-01
Soil moisture acts as a key state variable in interactions between the atmosphere and land surface, strongly influencing radiation and precipitation partitioning and thus many components of the hydrologic cycle. Despite its importance as a state variable, measuring soil moisture patterns with adequate spatial resolutions over useful spatial extents remains a significant challenge due to both physical and economic constraints. For this reason, ancillary data, such as topographic attributes, have been employed as process proxies and predictor variables for soil moisture. Most methods that have been used to estimate soil moisture from ancillary variables assume that soil moisture is linearly dependent on these variables. However, unsaturated zone water transport is typically modeled as a nonlinear function of the soil moisture state. While that fact does not necessarily imply nonlinear relationships with the ancillary variables, there is some evidence suggesting nonlinear methods may be more efficient than linear methods for interpolating soil moisture from ancillary data. Therefore, this work investigates the value of nonlinear estimation techniques, namely conditional density estimation, support vector machines, and a spatial artificial neural network, for interpolating soil moisture patterns from sparse measurements and ancillary data. The set of candidate predictor variables in this work includes simple and compound terrain attributes calculated from digital elevation models and, in some cases, soil texture data. The initial task in the interpolation procedure is the selection of the most effective predictor variables. Given the possibility of nonlinear relationships, mutual information is used to quantify relationships between candidate variables and soil moisture and ultimately to select the most efficient ancillary data as predictor variables. After selecting a subset of the potential ancillary data variables for use, the nonlinear estimation techniques are
Susanti, D.; Hartini, E.; Permana, A.
2017-01-01
Growing sales competition between companies in Indonesia means that every company should have proper planning in order to win the competition with other companies. One of the things that can be done to design such a plan is to forecast car sales for the next few periods, since the inventory of cars to be sold should be proportional to the number of cars needed. One of the methods that can be used to obtain a correct forecast is Adaptive Spline Threshold Autoregression (ASTAR). Therefore, this discussion focuses on the use of the Adaptive Spline Threshold Autoregression (ASTAR) method in forecasting the volume of car sales at PT. Srikandi Diamond Motors using time series data. In this research, forecasting with the ASTAR method produces approximately correct values.
Construction of Large Period Symplectic Maps by Interpolative Methods
Energy Technology Data Exchange (ETDEWEB)
Warnock, Robert; Cai, Yunhai; /SLAC; Ellison, James A.; /New Mexico U.
2009-12-17
The goal is to construct a symplectic evolution map for a large section of an accelerator, say a full turn of a large ring or a long wiggler. We start with an accurate tracking algorithm for single particles, which is allowed to be slightly non-symplectic. By tracking many particles for a distance S one acquires sufficient data to construct the mixed-variable generator of a symplectic map for evolution over S, given in terms of interpolatory functions. Two ways to find the generator are considered: (1) Find its gradient from tracking data, then the generator itself as a line integral. (2) Compute the action integral on many orbits. A test of method (1) has been made in a difficult example: a full turn map for an electron ring with strong nonlinearity near the dynamic aperture. The method succeeds at fairly large amplitudes, but there are technical difficulties near the dynamic aperture due to oddly shaped interpolation domains. For a generally applicable algorithm we propose method (2), realized with meshless interpolation methods.
3D Interpolation Method for CT Images of the Lung
Directory of Open Access Journals (Sweden)
Noriaki Asada
2003-06-01
Full Text Available A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-section images, which are collected during pulsation of the heart. Thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. The lung, as an elastic body, exhibits a repeating deformation synchronized to the beating of the heart. There are discontinuities among neighboring CT images due to the beating of the heart if no special techniques are used in taking the CT images. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung appear. Although the outline shape of the reconstructed 3-D heart is quite unnatural, the envelope of this unnatural 3-D heart is fitted to the shape of the standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are located at the same positions as the CT images. Thus the CT images are geometrically transformed into optimal CT images that fit the standard heart best. Since correct transformation of the images is required, an area-oriented interpolation method proposed by us is used for interpolation of the transformed images. An attempt to reconstruct a 3-D lung image without discontinuity by a series of such operations is shown. Additionally, applying the same geometrical transformation to the original projection images is proposed as a more advanced method.
Basis set approach in the constrained interpolation profile method
International Nuclear Information System (INIS)
Utsumi, T.; Koga, J.; Yabe, T.; Ogata, Y.; Matsunaga, E.; Aoki, T.; Sekine, M.
2003-07-01
We propose a simple polynomial basis-set that is easily extendable to any desired higher-order accuracy. This method is based on the Constrained Interpolation Profile (CIP) method and the profile is chosen so that the subgrid scale solution approaches the real solution by the constraints from the spatial derivative of the original equation. Thus the solution even on the subgrid scale becomes consistent with the master equation. By increasing the order of the polynomial, this solution quickly converges. 3rd and 5th order polynomials are tested on the one-dimensional Schroedinger equation and are proved to give solutions a few orders of magnitude higher in accuracy than conventional methods for lower-lying eigenstates. (author)
Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations
Energy Technology Data Exchange (ETDEWEB)
Kim, Sang Dong [KyungPook National Univ., Taegu (Korea, Republic of)
1996-12-31
In this talk we discuss finite element and finite difference techniques for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions, and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points, one obtains the matrix A_N (h = 1/N).
An interpolation boundary treatment for the Lattice Boltzmann method
Deladisma, Marnico D.; Smith, Marc K.
2003-11-01
A new boundary condition for the Lattice Boltzmann method based on bounce-back and spatial interpolations is presented. The boundary condition allows for the placement of a boundary at any position between nodes and tracks the exact position of that boundary. Multi-dimensional interpolation of streaming and bounce-back particle distribution functions from surrounding boundary nodes is used to solve for new distribution values. This allows more information from surrounding nodes to be incorporated into the boundary treatment calculation. Calculations of flow within a 2D rotating annulus (with and without an obstacle placed in the flow) using the present boundary condition are compared with calculations done with the commercial CFD solver Fluent. Results show that the boundary condition is accurate and robust for these cases. The boundary condition also allows for moving boundaries and is easily extended to 3D, which facilitates the simulation of moving 3D particles. The new boundary condition will allow a Lattice Boltzmann simulation of a rotating wall vessel bioreactor with freely suspended tissue constructs whose length scale is about 1 cm.
He, Shanshan; Ou, Daojiang; Yan, Changya; Lee, Chen-Han
2015-01-01
Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortages such as numerical...
Michel, Volker
2013-01-01
Lectures on Constructive Approximation: Fourier, Spline, and Wavelet Methods on the Real Line, the Sphere, and the Ball focuses on spherical problems as they occur in the geosciences and medical imaging. It comprises the author’s lectures on classical approximation methods based on orthogonal polynomials and selected modern tools such as splines and wavelets. Methods for approximating functions on the real line are treated first, as they provide the foundations for the methods on the sphere and the ball and are useful for the analysis of time-dependent (spherical) problems. The author then examines the transfer of these spherical methods to problems on the ball, such as the modeling of the Earth’s or the brain’s interior. Specific topics covered include: * the advantages and disadvantages of Fourier, spline, and wavelet methods * theory and numerics of orthogonal polynomials on intervals, spheres, and balls * cubic splines and splines based on reproducing kernels * multiresolution analysis using wavelet...
Two Dimensional Complex Wavenumber Dispersion Analysis using B-Spline Finite Elements Method
Directory of Open Access Journals (Sweden)
Y. Mirbagheri
2016-01-01
Full Text Available Grid dispersion is one of the criteria for validating the finite element method (FEM) in simulating acoustic or elastic wave propagation. The difficulty that usually arises when using this method for the simulation of wave propagation problems is rooted in the discontinuous field, which causes the magnitude and direction of the wave speed vector to vary from one element to the adjacent one. To solve this problem and improve the response accuracy, two approaches are usually suggested: changing the integration method and changing the shape functions. Finite element isogeometric analysis (IGA) is used in this research. In IGA, B-spline or non-uniform rational B-spline (NURBS) functions are used, which improve the response accuracy, especially in one-dimensional structural dynamics problems. At the boundary of two adjacent elements, the degree of continuity of the shape functions used in IGA can be higher than zero. In this research, for the first time, a two-dimensional grid dispersion analysis of wave propagation in plane strain problems using B-spline FEM is presented. Results indicate that, for the same number of degrees of freedom, the grid dispersion of B-spline FEM is about half the grid dispersion of the classic FEM.
Directory of Open Access Journals (Sweden)
Mathieu Lepot
2017-10-01
Full Text Available A thorough review has been performed on interpolation methods to fill gaps in time series, on efficiency criteria, and on uncertainty quantification. On the one hand, there are numerous available methods: interpolation, regression, autoregressive, and machine learning methods, among others. On the other hand, there are many methods and criteria to estimate the efficiency of these methods, but uncertainties on the interpolated values are rarely calculated. Furthermore, even when uncertainties are estimated according to standard methods, the prediction uncertainty is not taken into account: a discussion is thus presented on the uncertainty estimation of interpolated/extrapolated data. Finally, some suggestions for further research and a new method are proposed.
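The simplest member of the reviewed family, linear interpolation over the gaps, already shows the fill-then-score pattern that the efficiency criteria evaluate. The series and gap positions below are illustrative.

```python
import numpy as np

def fill_gaps_linear(t, y):
    """Fill NaN gaps in a time series by linear interpolation.

    Regression, autoregressive, and machine learning methods plug into the
    same fill-and-score scheme; only the estimator changes.
    """
    y = np.asarray(y, dtype=float).copy()
    bad = np.isnan(y)
    y[bad] = np.interp(t[bad], t[~bad], y[~bad])
    return y

t = np.arange(10.0)
y_true = np.sin(0.5 * t)
y_gappy = y_true.copy()
y_gappy[[3, 4, 7]] = np.nan                 # simulated sensor dropouts
y_filled = fill_gaps_linear(t, y_gappy)
rmse = np.sqrt(np.mean((y_filled - y_true) ** 2))   # one efficiency criterion
```

The RMSE against the withheld true values is exactly the kind of criterion the review compares; as the abstract notes, the uncertainty of the filled values themselves is a separate and often neglected question.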
Wind Resource Mapping Using Landscape Roughness and Spatial Interpolation Methods
Directory of Open Access Journals (Sweden)
Samuel Van Ackere
2015-08-01
Full Text Available Energy saving, reduction of greenhouse gasses and increased use of renewables are key policies to achieve the European 2020 targets. In particular, distributed renewable energy sources, integrated with spatial planning, require novel methods to optimise supply and demand. In contrast with large-scale wind turbines, small and medium wind turbines (SMWTs) have a less extensive impact on the use of space and the power system; nevertheless, a significant spatial footprint is still present, and good spatial planning is a necessity. To optimise the location of SMWTs, detailed knowledge of the spatial distribution of the average wind speed is essential. Hence, in this article, wind measurements and roughness maps were used to create a reliable annual mean wind speed map of Flanders at 10 m above the Earth's surface. Via roughness transformation, the surface wind speed measurements were converted into meso- and macroscale wind data. The data were further processed by using seven different spatial interpolation methods in order to develop regional wind resource maps. Based on statistical analysis, it was found that the transformation into mesoscale wind, in combination with Simple Kriging, was the most adequate method to create reliable maps for decision-making on optimal production sites for SMWTs in Flanders (Belgium).
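The roughness transformation rests on the neutral logarithmic wind profile. A sketch, with illustrative roughness length and heights; the paper's meso- and macroscale procedure is more elaborate than this single formula.

```python
import numpy as np

def log_profile(u_ref, z_ref, z, z0):
    """Neutral log-law vertical extrapolation of wind speed.

    u(z) = u_ref * ln(z / z0) / ln(z_ref / z0), with z0 the roughness length.
    This is the standard roughness-based transformation between heights.
    """
    return u_ref * np.log(z / z0) / np.log(z_ref / z0)

# A 5 m/s measurement at 10 m over farmland (z0 ~ 0.03 m, an assumed value),
# extrapolated to a 50 m hub height
u50 = log_profile(u_ref=5.0, z_ref=10.0, z=50.0, z0=0.03)
```

Applying the transformation per roughness class, then spatially interpolating the transformed speeds (e.g., with Simple Kriging), mirrors the workflow described in the abstract.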
Smoothing quadratic and cubic splines
Oukropcová, Kateřina
2014-01-01
Title: Smoothing quadratic and cubic splines Author: Kateřina Oukropcová Department: Department of Numerical Mathematics Supervisor: RNDr. Václav Kučera, Ph.D., Department of Numerical Mathematics Abstract: The aim of this bachelor thesis is to study the topic of smoothing quadratic and cubic splines on uniform partitions. First, we define the basic concepts in the field of splines, next we introduce interpolating splines with a focus on their minimizing properties for odd degree and quadra...
Cyclic reduction and FACR methods for piecewise hermite bicubic orthogonal spline collocation
Bialecki, Bernard
1994-09-01
Cyclic reduction and Fourier analysis-cyclic reduction (FACR) methods are presented for the solution of the linear systems which arise when orthogonal spline collocation with piecewise Hermite bicubics is applied to boundary value problems for certain separable partial differential equations on a rectangle. On an N×N uniform partition, the cyclic reduction and Fourier analysis-cyclic reduction methods require O(N² log₂ N) and O(N² log₂ log₂ N) arithmetic operations, respectively.
Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method
Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.
1997-01-01
A numerical method based on b-spline polynomials was developed to study incompressible flows in cylindrical geometries. A b-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e. smoothness and boundedness. Because b-splines have compact support, it is also possible to remove b-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic, so-called time evolving flow, greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that high flatness levels of the radial component of velocity in the near wall region are physical; regions of high radial velocity were detected and appear to be related to high speed streaks in the boundary layer. Budgets of Reynolds stress transport equations showed close similarity with those of channel flow. However contrary to channel flow, the log layer of pipe flow is not homogeneous for the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used. Plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.
International Nuclear Information System (INIS)
Mittal, R.C.; Rohila, Rajni
2016-01-01
In this paper, we have applied a modified cubic B-spline based differential quadrature method to obtain numerical solutions of one-dimensional reaction-diffusion systems such as the linear reaction-diffusion system, the Brusselator system, the isothermal system and the Gray-Scott system. The models represented by these systems have important applications in different areas of science and engineering. The most striking and interesting part of the work is the solution patterns obtained for the Gray-Scott model, reminiscent of those often seen in nature. We have used cubic B-spline functions for space discretization to obtain a system of ordinary differential equations. This system of ODEs is solved by the highly stable SSP-RK43 method to obtain the solution at the knots. The computed results are very accurate and shown to be better than those available in the literature. The method is easy to apply and gives solutions with less computational effort.
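The space discretization rests on the values and derivatives of the cubic B-spline basis at the grid points, from which differential quadrature builds its weighting coefficients. A sketch with SciPy; the grid size is illustrative and the paper's modification of the basis is not applied here.

```python
import numpy as np
from scipy.interpolate import BSpline

n, k = 8, 3                                # interior intervals, cubic degree
# Clamped knot vector on [0, 1]: endpoint knots repeated k extra times
t = np.concatenate(([0.0] * k, np.linspace(0.0, 1.0, n + 1), [1.0] * k))
nbasis = len(t) - k - 1                    # number of basis functions (n + 3)
x = np.linspace(0.0, 1.0, n + 1)           # the collocation grid

B = np.empty((nbasis, x.size))             # basis values at the grid points
dB = np.empty_like(B)                      # first derivatives at the grid points
for i in range(nbasis):
    c = np.zeros(nbasis)
    c[i] = 1.0                             # isolate the i-th basis function
    spl = BSpline(t, c, k)
    B[i] = spl(x)
    dB[i] = spl.derivative()(x)
```

The rows of `B` and `dB` are exactly what the method assembles into its quadrature matrices; the partition-of-unity property of the basis is a quick sanity check.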
A Novel Method for Gearbox Fault Detection Based on Biorthogonal B-spline Wavelet
Directory of Open Access Journals (Sweden)
Guangbin ZHANG
2011-10-01
Full Text Available Localized defects of a gearbox tend to result in periodic impulses in the vibration signal, which contain important information for system dynamics analysis. Parameter identification of these impulses therefore provides an effective approach for gearbox fault diagnosis. The biorthogonal B-spline wavelet has the properties of compact support, high vanishing moments and symmetry, which make it suitable for signal de-noising, fast calculation, and reconstruction. Thus, a novel time-frequency distribution method based on the biorthogonal B-spline wavelet is presented for gearbox fault diagnosis. A simulation study concerning a singularity signal shows that this wavelet is effective in identifying the fault feature with the coefficients map and coefficients line. Furthermore, an integrated approach consisting of wavelet decomposition, the Hilbert transform and power spectral density is used in applications. The results indicate that this method can extract the gearbox fault characteristics and diagnose the fault patterns effectively.
Li, Xinxiu
2012-10-01
Physical processes with memory and hereditary properties can be best described by fractional differential equations due to the memory effect of fractional derivatives. For that reason reliable and efficient techniques for the solution of fractional differential equations are needed. Our aim is to generalize the wavelet collocation method to fractional differential equations using cubic B-spline wavelet. Analytical expressions of fractional derivatives in Caputo sense for cubic B-spline functions are presented. The main characteristic of the approach is that it converts such problems into a system of algebraic equations which is suitable for computer programming. It not only simplifies the problem but also speeds up the computation. Numerical results demonstrate the validity and applicability of the method to solve fractional differential equation.
Quantitative analysis of the reconstruction performance of interpolants
Lansing, Donald L.; Park, Stephen K.
1987-01-01
The analysis presented provides a quantitative measure of the reconstruction or interpolation performance of linear, shift-invariant interpolants. The performance criterion is the mean square error of the difference between the sampled and reconstructed functions. The analysis is applicable to reconstruction algorithms used in image processing and to many types of splines used in numerical analysis and computer graphics. When formulated in the frequency domain, the mean square error clearly separates the contribution of the interpolation method from the contribution of the sampled data. The equations provide a rational basis for selecting an optimal interpolant; that is, one which minimizes the mean square error. The analysis has been applied to a selection of frequently used data splines and reconstruction algorithms: parametric cubic and quintic Hermite splines, exponential and nu splines (including the special case of the cubic spline), parametric cubic convolution, Keys' fourth-order cubic, and a cubic with a discontinuous first derivative. The emphasis in this paper is on the image-dependent case in which no a priori knowledge of the frequency spectrum of the sampled function is assumed.
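An empirical version of the paper's mean-square criterion can be computed directly: sample a band-limited test signal, reconstruct between the samples with two of the compared interpolants, and measure the error on a dense grid. Grid sizes and the test function are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

x_s = np.linspace(0.0, 1.0, 17)            # sample grid (16 intervals)
x_f = np.linspace(0.0, 1.0, 1001)          # dense evaluation grid
f = lambda x: np.sin(2 * np.pi * 2 * x)    # 2 cycles: well within Nyquist

# Two reconstruction algorithms from the families the paper analyses
linear = np.interp(x_f, x_s, f(x_s))       # piecewise linear reconstruction
cubic = CubicSpline(x_s, f(x_s))(x_f)      # cubic spline reconstruction

mse_linear = np.mean((linear - f(x_f)) ** 2)   # mean square reconstruction error
mse_cubic = np.mean((cubic - f(x_f)) ** 2)
```

The frequency-domain formulation in the paper predicts the same ordering without Monte Carlo evaluation, and separates the interpolant's contribution from that of the sampled data.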
MKSOR iterative method with cubic b-spline approximation for ...
African Journals Online (AJOL)
Seidel (GS), Successive Over Relaxation (SOR) and Modified Kaudd Successive Over Relaxation (MKSOR) used to solve the generated systems of linear equations. For the purpose of comparison, the GS iterative method has been designated ...
International Nuclear Information System (INIS)
Müller, Kerstin; Schwemmer, Chris; Hornegger, Joachim; Zheng Yefeng; Wang Yang; Lauritsch, Günter; Rohkohl, Christopher; Maier, Andreas K.; Schultz, Carl; Fahrig, Rebecca
2013-01-01
Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. Results: The quantitative evaluation of all
A quadratic spline maximum entropy method for the computation of invariant densities
Directory of Open Access Journals (Sweden)
DING Jiu
2015-06-01
The numerical recovery of an invariant density of the Frobenius-Perron operator corresponding to a nonsingular transformation is addressed using quadratic spline functions. We implement a maximum entropy method to approximate the invariant density. The proposed method removes the ill-conditioning in the maximum entropy method which arises from the use of polynomials. Owing to the smoothness of the functions and a good convergence rate, the accuracy of the numerical calculation increases rapidly as the number of moment functions increases. The numerical results from the proposed method are supported by the theoretical analysis.
Steady-state solution of the PTC thermistor problem using a quadratic spline finite element method
Directory of Open Access Journals (Sweden)
Bahadir A. R.
2002-01-01
The problem of heat transfer in a Positive Temperature Coefficient (PTC) thermistor, which may form one element of an electric circuit, is solved numerically by a finite element method. The approach used is based on the Galerkin finite element method with quadratic splines as shape functions. The resulting system of ordinary differential equations is solved by the finite difference method. Comparison is made with numerical and analytical solutions, and the accuracy of the computed solutions indicates that the method is well suited for the solution of the PTC thermistor problem.
STUDY OF BLOCKING EFFECT ELIMINATION METHODS BY MEANS OF INTRAFRAME VIDEO SEQUENCE INTERPOLATION
Directory of Open Access Journals (Sweden)
I. S. Rubina
2015-01-01
The paper deals with image interpolation methods and their applicability to the elimination of artifacts related both to the dynamic properties of objects in video sequences and to the algorithms used in the encoding steps. The main drawback of existing methods is their high computational complexity, unacceptable in video processing. As part of the study, interpolation of signal samples is proposed for blocking-effect elimination at the output of transform coding. It was necessary to develop methods for improving the compression ratio and the quality of the reconstructed video data by eliminating the blocking effect on segment borders through intraframe interpolation of video sequence segments. The core of the developed methods is the application of an adaptive recursive algorithm with an adaptively sized interpolation kernel, both with and without consideration of the brightness gradient at the boundaries of objects and video sequence blocks. The theoretical part of the research uses methods of information theory (rate-distortion theory and data redundancy elimination), pattern recognition, digital signal processing, and probability theory. In the experimental part, the compression algorithms were implemented in software and compared with existing ones. The proposed methods were compared with a simple averaging algorithm and an adaptive central-sample interpolation algorithm. The algorithm based on adaptive selection of the interpolation kernel size increases the compression ratio by 30%, and its modified version increases the compression ratio by 35% in comparison with existing algorithms, while improving the quality of the reconstructed video sequence by 3% compared to the one compressed without interpolation. The findings will be
International Nuclear Information System (INIS)
Sims, C.S.; Killough, G.G.
1983-01-01
Various segments of the health physics community advocate the use of different sets of neutron fluence-to-dose equivalent conversion factors as a function of energy and different methods of interpolation between discrete points in those data sets. The major data sets and interpolation methods are used to calculate the spectrum average fluence-to-dose equivalent conversion factors for five spectra associated with the various shielded conditions of the Health Physics Research Reactor. The results obtained by use of the different data sets and interpolation methods are compared and discussed. (author)
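The sensitivity described above can be made concrete with a small numerical sketch. The conversion-factor table and spectrum below are entirely illustrative (not from any published data set); only the effect of the interpolation rule on the spectrum average is demonstrated:

```python
import numpy as np

# Illustrative (NOT published) conversion factors h(E) and a toy spectrum.
E_tab = np.array([0.01, 0.1, 1.0, 10.0])            # energy grid, MeV
h_tab = np.array([1.0e-5, 2.0e-5, 3.5e-4, 4.0e-4])  # dose equivalent per unit fluence

E = np.logspace(-2, 1, 200)     # fine evaluation grid, MeV
phi = E * np.exp(-E)            # toy fluence spectrum phi(E)

# Rule 1: linear-linear interpolation between tabulated points.
h_lin = np.interp(E, E_tab, h_tab)
# Rule 2: log-log interpolation (linear in log E vs. log h).
h_log = np.exp(np.interp(np.log(E), np.log(E_tab), np.log(h_tab)))

def spectrum_average(h, phi, E):
    """Spectrum-averaged factor <h> = integral(h*phi dE) / integral(phi dE)."""
    return np.trapz(h * phi, E) / np.trapz(phi, E)

avg_lin = spectrum_average(h_lin, phi, E)
avg_log = spectrum_average(h_log, phi, E)
# The two rules yield different spectrum averages from the same table.
```

The discrepancy between `avg_lin` and `avg_log` is exactly the kind of interpolation-rule dependence the comparison above quantifies for real data sets.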
Interpolation method for the transport theory and its application in fusion-neutronics analysis
International Nuclear Information System (INIS)
Jung, J.
1981-09-01
This report presents an interpolation method for the solution of the Boltzmann transport equation. The method is based on a flux synthesis technique using two reference-point solutions. The equation for the interpolated solution results in a Volterra integral equation which is proved to have a unique solution. As an application of the present method, tritium breeding ratio is calculated for a typical D-T fusion reactor system. The result is compared to that of a variational technique
Complex wavenumber Fourier analysis of the B-spline based finite element method
Czech Academy of Sciences Publication Activity Database
Kolman, Radek; Plešek, Jiří; Okrouhlík, Miloslav
2014-01-01
Roč. 51, č. 2 (2014), s. 348-359 ISSN 0165-2125 R&D Projects: GA ČR(CZ) GAP101/11/0288; GA ČR(CZ) GAP101/12/2315; GA ČR GPP101/10/P376; GA ČR GA101/09/1630 Institutional support: RVO:61388998 Keywords : elastic wave propagation * dispersion errors * B-spline * finite element method * isogeometric analysis Subject RIV: JR - Other Machinery Impact factor: 1.513, year: 2014 http://www.sciencedirect.com/science/article/pii/S0165212513001479
Pseudo-cubic thin-plate type Spline method for analyzing experimental data
International Nuclear Information System (INIS)
Crecy, F. de.
1993-01-01
A mathematical tool using pseudo-cubic thin-plate type splines has been developed for the analysis of experimental data points. The main purpose is to obtain, without any a priori model, a mathematical predictor with associated uncertainties, usable at any point of the multidimensional parameter space. The smoothing parameter is determined by a generalized cross-validation method. The residual standard deviation obtained is significantly smaller than that of a least-squares regression. An example of use is given with critical heat flux data, showing a significant decrease of the design criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs
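The idea of a smoothed thin-plate predictor with a data-driven smoothing parameter can be sketched with SciPy. The sketch below is an assumption-laden stand-in for the paper's method: synthetic 2-D data, SciPy's thin-plate-spline RBF interpolator, and brute-force leave-one-out cross-validation in place of generalized cross-validation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic data (assumption: stands in for experimental points).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (60, 2))                      # 2-D parameter space
y = np.sin(2 * np.pi * X[:, 0]) * X[:, 1] + 0.05 * rng.normal(size=60)

def loo_score(smoothing):
    """Mean squared leave-one-out prediction error for one smoothing value."""
    errs = []
    for i in range(len(X)):
        keep = np.ones(len(X), dtype=bool)
        keep[i] = False
        f = RBFInterpolator(X[keep], y[keep],
                            kernel='thin_plate_spline', smoothing=smoothing)
        errs.append((f(X[i:i + 1])[0] - y[i]) ** 2)
    return float(np.mean(errs))

grid = [1e-4, 1e-3, 1e-2, 1e-1, 1.0]
best = min(grid, key=loo_score)          # data-driven smoothing parameter
predictor = RBFInterpolator(X, y, kernel='thin_plate_spline', smoothing=best)
```

The resulting `predictor` can then be evaluated at any point of the parameter space, which mirrors the role of the predictor described in the abstract.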
Reinhardt, Katja; Samimi, Cyrus
2018-01-01
While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still shows large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the database indispensable for climatological research. Over the last years, new hybrid methods of machine learning and geostatistics have been developed which provide innovative prospects in spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia, with a special focus on applying new hybrid methods to the spatial interpolation of wind data. This study is the first to evaluate and compare the performance of several of these hybrid methods. The overall aim is to determine whether an optimal interpolation method exists which can be applied equally to all pressure levels, or whether different interpolation methods have to be used for different pressure levels. Deterministic (inverse distance weighting) and geostatistical (ordinary kriging) interpolation methods were explored, which take into account only the initial values of u and v. In addition, more complex methods that consider additional variables were applied: generalized additive models, support vector machines and neural networks, as single methods and as hybrids, as well as regression-kriging. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights. At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines, and for 850 hPa it is followed by the different types of support vector machine and
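Of the methods compared, inverse distance weighting is the simplest to state: each prediction is a distance-weighted mean of the station values. A minimal sketch with toy station values (not the study's data):

```python
import numpy as np

def idw(xy_obs, v_obs, xy_new, power=2.0, eps=1e-12):
    """Inverse distance weighting of scattered observations.
    xy_obs: (n, 2) station coordinates; v_obs: (n,) observed values
    (e.g. one wind component); xy_new: (m, 2) prediction points."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power       # closer stations get larger weights
    return (w * v_obs).sum(axis=1) / w.sum(axis=1)

# Toy example: four stations at the unit-square corners.
obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
u = np.array([1.0, 2.0, 3.0, 4.0])
centre = idw(obs, u, np.array([[0.5, 0.5]]))
# centre → [2.5]: the centre is equidistant from all four stations.
```

Kriging and the hybrid methods in the study replace these purely geometric weights with weights derived from a fitted spatial covariance model and auxiliary predictors.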
Interpolation methods for creating a scatter radiation exposure map
Energy Technology Data Exchange (ETDEWEB)
Gonçalves, Elicardo A. de S., E-mail: elicardo.goncalves@ifrj.edu.br [Instituto Federal do Rio de Janeiro (IFRJ), Paracambi, RJ (Brazil); Gomes, Celio S.; Lopes, Ricardo T. [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Oliveira, Luis F. de; Anjos, Marcelino J. dos; Oliveira, Davi F. [Universidade do Estado do Rio de Janeiro (UFRJ), RJ (Brazil). Instituto de Física
2017-07-01
A well-known way to better understand radiation scattering during radiography is to map the exposure over the space around the source and sample. Such a map is built by measuring exposure at regularly spaced points, i.e., measurement locations are chosen by taking regular steps from a starting point along the x, y and z axes, or along radial and angular coordinates. However, it is not always possible to maintain the regularity of the steps throughout the entire space, and there may be regions of difficult access where it is impaired. This work applies interpolation techniques that can handle irregular steps and compares their results and limits. Interpolation was first performed in angular coordinates and tested with some points missing; a Delaunay tessellation interpolation was then performed on the same data for comparison. Computational and graphical processing was done with the GNU Octave software and its image-processing package. Real data were acquired in a bunker where a 6 MeV betatron can be used to produce radiation scattering. (author)
Interpolation methods for creating a scatter radiation exposure map
International Nuclear Information System (INIS)
Gonçalves, Elicardo A. de S.; Gomes, Celio S.; Lopes, Ricardo T.; Oliveira, Luis F. de; Anjos, Marcelino J. dos; Oliveira, Davi F.
2017-01-01
A well-known way to better understand radiation scattering during radiography is to map the exposure over the space around the source and sample. Such a map is built by measuring exposure at regularly spaced points, i.e., measurement locations are chosen by taking regular steps from a starting point along the x, y and z axes, or along radial and angular coordinates. However, it is not always possible to maintain the regularity of the steps throughout the entire space, and there may be regions of difficult access where it is impaired. This work applies interpolation techniques that can handle irregular steps and compares their results and limits. Interpolation was first performed in angular coordinates and tested with some points missing; a Delaunay tessellation interpolation was then performed on the same data for comparison. Computational and graphical processing was done with the GNU Octave software and its image-processing package. Real data were acquired in a bunker where a 6 MeV betatron can be used to produce radiation scattering. (author)
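For irregularly spaced measurements of this kind, SciPy's `griddata` performs exactly this Delaunay-based interpolation. A minimal sketch with synthetic exposure values (toy data, not the bunker measurements):

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic scattered "exposure" measurements at irregular positions.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, (200, 2))               # measurement positions
expo = np.hypot(pts[:, 0] - 0.5, pts[:, 1] - 0.5)   # toy exposure field

gx, gy = np.mgrid[0:1:50j, 0:1:50j]                 # regular display grid
# method='linear' builds a Delaunay triangulation of the scattered points
# and interpolates linearly inside each triangle; grid points outside the
# convex hull of the data come back as NaN.
emap = griddata(pts, expo, (gx, gy), method='linear')
```

The NaN cells outside the convex hull make visible one of the "limits" the abstract mentions: interpolation from irregular points says nothing beyond the measured region.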
Fang, Ming; Bowin, Carl
1992-01-01
To construct Venus' gravity disturbance field (or gravity anomaly) from the spacecraft-observer line of sight (LOS) acceleration perturbation data, both a global and a local approach can be used. The global approach (e.g., spherical harmonic coefficients) and the local approach (e.g., the integral operator method) based on geodetic techniques are generally not the same, so they must be used separately for mapping long-wavelength and short-wavelength features. Harmonic splines, as an interpolation and extrapolation technique, are intrinsically suited to both global and local mapping of a potential field. Theoretically, they preserve the information of the potential field up to the bound set by the sampling theorem, regardless of whether the mapping is global or local, and they are not affected by truncation errors. The improvement of harmonic spline methodology for global mapping is reported. New basis functions, a singular value decomposition (SVD) based modification to Parker & Shure's numerical procedure, and preliminary results are presented.
A spectral/B-spline method for the Navier-Stokes equations in unbounded domains
International Nuclear Information System (INIS)
Dufresne, L.; Dumas, G.
2003-01-01
The numerical method presented in this paper aims at solving the incompressible Navier-Stokes equations in unbounded domains. The problem is formulated in cylindrical coordinates and the method is based on a Galerkin approximation scheme that makes use of vector expansions that exactly satisfy the continuity constraint. More specifically, the divergence-free basis vector functions are constructed with Fourier expansions in the θ and z directions while mapped B-splines are used in the semi-infinite radial direction. Special care has been taken to account for the particular analytical behaviors at both end points r=0 and r→∞. A modal reduction algorithm has also been implemented in the azimuthal direction, allowing for a relaxation of the CFL constraint on the timestep size and a possibly significant reduction of the number of DOF. The time marching is carried out using a mixed quasi-third order scheme. Besides the advantages of a divergence-free formulation and a quasi-spectral convergence, the local character of the B-splines allows for a great flexibility in node positioning while keeping narrow bandwidth matrices. Numerical tests show that the present method compares advantageously with other similar methodologies using purely global expansions
A Parallel Strategy for High-speed Interpolation of CNC Using Data Space Constraint Method
Directory of Open Access Journals (Sweden)
Shuan-qiang Yang
2013-12-01
A high-speed interpolation scheme using parallel computing is proposed in this paper. The interpolation method is divided into two tasks, namely a rough task executed on the PC and a fine task executed on the I/O card. During the interpolation procedure, double buffers are constructed to exchange the interpolation data between the two tasks. The data space constraint method is then adopted to ensure reliable and continuous data communication between the two buffers. The proposed scheme can therefore be realized on common operating systems without real-time capability, while still achieving high-speed and high-precision motion control. Finally, an experiment conducted on a self-developed CNC platform verifies the proposed method.
Efficient charge assignment and back interpolation in multigrid methods for molecular dynamics.
Banerjee, Sanjay; Board, John A
2005-07-15
The assignment of atomic charges to a regular computational grid and the interpolation of forces from the grid back to the original atomic positions are crucial steps in a multigrid approach to the calculation of molecular forces. For purposes of grid assignment, atomic charges are modeled as truncated Gaussian distributions. The charge assignment and back interpolation methods are currently bottlenecks, and take up to one-third the execution time of the multigrid method each. Here, we propose alternative approaches to both charge assignment and back interpolation where convolution is used both to map Gaussian representations of atomic charges onto the grid and to map the forces computed at grid points back to atomic positions. These approaches achieve the same force accuracy with reduced run time. The proposed charge assignment and back interpolation methods scale better than baseline multigrid computations with both problem size and number of processors. (c) 2005 Wiley Periodicals, Inc.
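The convolution idea can be sketched in one dimension. The grid, kernel width and truncation below are illustrative assumptions, not the paper's implementation: point charges are deposited on the nearest node and convolved with a normalized truncated Gaussian, so total charge is conserved, and the same kernel weights map a grid-defined quantity back to the particle positions:

```python
import numpy as np

L, N = 10.0, 128                       # box length and grid size (toy values)
h = L / N
positions = np.array([2.3, 5.1, 7.8])  # particle coordinates (illustrative)
charges = np.array([1.0, -1.0, 0.5])

# Step 1: deposit each charge on its nearest grid node.
grid = np.zeros(N)
idx = np.clip(np.rint(positions / h).astype(int), 0, N - 1)
np.add.at(grid, idx, charges)

# Step 2: convolve with a normalized truncated Gaussian kernel, so each
# point charge becomes a smooth distribution and total charge is conserved.
sigma = 2 * h                          # kernel width (assumption)
r = np.arange(-4, 5) * h               # support truncated at two sigma
kern = np.exp(-r**2 / (2 * sigma**2))
kern /= kern.sum()
rho = np.convolve(grid, kern, mode='same')

# Back interpolation: map a grid-defined "force" to the particles with the
# same kernel weights (here a toy force from the smoothed density).
force_grid = -np.gradient(rho, h)
force_at_atoms = np.convolve(force_grid, kern, mode='same')[idx]
```

Using one convolution for assignment and its transpose for back interpolation is what keeps the scheme symmetric, which is the property the proposed method exploits to cut the run time of both bottleneck steps.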
The estimation of time-varying risks in asset pricing modelling using B-Spline method
Nurjannah; Solimun; Rinaldo, Adji
2017-12-01
Asset pricing modelling has been extensively studied in the past few decades to explore the risk-return relationship. The asset pricing literature typically assumed a static risk-return relationship. However, several studies found anomalies in asset pricing modelling which captured the presence of risk instability, and dynamic models have been proposed to offer a better description. The main problem highlighted in the dynamic model literature is that the set of conditioning information is unobservable, so some assumptions have to be made; the estimation thus requires additional assumptions about the dynamics of risk. To overcome this problem, nonparametric estimators can be used as an alternative for estimating risk. The flexibility of the nonparametric setting avoids the misspecification problem that arises from selecting a functional form. This paper investigates the estimation of time-varying asset pricing models using B-splines, a nonparametric approach. The advantages of the spline method are its computational speed and simplicity, as well as the clarity of controlling curvature directly. Three popular asset pricing models are investigated: the CAPM (Capital Asset Pricing Model), the Fama-French 3-factor model and the Carhart 4-factor model. The results suggest that the estimated risks are time-varying and not stable over time, which confirms the risk instability anomaly. The effect is more pronounced in Carhart's 4-factor model.
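A minimal sketch of the idea (synthetic returns and an assumed knot placement, not the paper's data or specification): expand the time-varying beta in a cubic B-spline basis and estimate the basis coefficients by least squares in a one-factor, CAPM-style regression:

```python
import numpy as np
from scipy.interpolate import BSpline

# Synthetic returns with a known, slowly varying beta (illustrative data).
rng = np.random.default_rng(2)
T = 500
t = np.linspace(0.0, 1.0, T)
beta_true = 0.8 + 0.6 * np.sin(2 * np.pi * t)
r_m = rng.normal(0.0, 0.02, T)                      # market excess return
r_a = beta_true * r_m + rng.normal(0.0, 0.01, T)    # asset excess return

# Cubic B-spline basis for beta(t): clamped ends, 9 interior knots (assumed).
k = 3
knots = np.concatenate(([0.0] * (k + 1), np.linspace(0.1, 0.9, 9),
                        [1.0] * (k + 1)))
n_basis = len(knots) - k - 1
B = np.column_stack([BSpline(knots, np.eye(n_basis)[j], k)(t)
                     for j in range(n_basis)])

# One-factor (CAPM-style) model with time-varying beta:
#   r_a(t) = beta(t) * r_m(t) + e,   beta(t) = sum_j c_j B_j(t)
design = B * r_m[:, None]
c, *_ = np.linalg.lstsq(design, r_a, rcond=None)
beta_hat = B @ c                                    # estimated beta path
```

Because beta enters the regression linearly through the basis, the fit remains ordinary least squares; the multi-factor models in the paper simply add more such basis-expanded factor columns.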
Directory of Open Access Journals (Sweden)
Qing He
2018-01-01
In this paper, the particle size distribution is reconstructed from finite moments using a converted spline-based method, in which the size of the linear system to be solved is reduced from 4m × 4m to (m + 3) × (m + 3) for (m + 1) nodes by using cubic splines, compared with the original method. The results are first verified against a reference. Then, coupled with the Taylor-series expansion moment method, the evolution of the particle size distribution undergoing Brownian coagulation and its asymptotic behavior are investigated.
Energy Technology Data Exchange (ETDEWEB)
Zainudin, Mohd Lutfi, E-mail: mdlutfi07@gmail.com [School of Quantitative Sciences, UUMCAS, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia); Institut Matematik Kejuruteraan (IMK), Universiti Malaysia Perlis, 02600 Arau, Perlis (Malaysia); Saaban, Azizan, E-mail: azizan.s@uum.edu.my [School of Quantitative Sciences, UUMCAS, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia); Bakar, Mohd Nazari Abu, E-mail: mohdnazari@perlis.uitm.edu.my [Faculty of Applied Science, Universiti Teknologi Mara, 02600 Arau, Perlis (Malaysia)
2015-12-11
Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records the dispersed radiation values, and these data are very useful for experimental work and for the development of solar devices. In addition, complete observational data are needed for modelling and designing solar radiation system applications. Unfortunately, gaps in the solar radiation record frequently occur due to several technical problems, mainly caused by the monitoring device. To deal with this, missing values are estimated so that absent values can be substituted with imputed data. This paper evaluates several piecewise interpolation techniques, such as linear, spline, cubic and nearest neighbour, for handling missing values in hourly solar radiation data, and proposes, as an extension, investigating the potential use of the cubic Bezier technique and the cubic Said-Ball method as estimators. The results show that the cubic Bezier and Said-Ball methods perform best compared with the other piecewise imputation techniques.
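The comparison can be mimicked on a toy record (an idealized clear-sky curve, not real pyranometer data): withhold a few hours and refill them with SciPy's piecewise interpolants, then score each method against the withheld truth:

```python
import numpy as np
from scipy.interpolate import interp1d

# Idealized clear-sky irradiance curve (toy data, not pyranometer records).
hours = np.arange(24)
irr = np.clip(900.0 * np.sin(np.pi * (hours - 6) / 12.0), 0.0, None)

missing = np.array([9, 10, 14])            # hours lost to a logger fault
known = np.setdiff1d(hours, missing)

filled = {}
for kind in ('nearest', 'linear', 'cubic'):
    f = interp1d(known, irr[known], kind=kind)
    filled[kind] = f(missing)              # imputed values for the gaps

# Maximum absolute imputation error of each method.
err = {k: float(np.abs(v - irr[missing]).max()) for k, v in filled.items()}
```

On this smooth toy curve the piecewise-linear and cubic fills beat nearest-neighbour imputation, which is the same kind of ranking exercise the paper performs for the Bezier and Said-Ball estimators.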
On the Quality of Velocity Interpolation Schemes for Marker-in-Cell Method and Staggered Grids
Pusok, Adina E.; Kaus, Boris J. P.; Popov, Anton A.
2017-03-01
The marker-in-cell method is generally considered a flexible and robust method to model the advection of heterogeneous non-diffusive properties (i.e., rock type or composition) in geodynamic problems. In this method, Lagrangian points carrying compositional information are advected with the ambient velocity field on an Eulerian grid. However, velocity interpolation from grid points to marker locations is often performed without considering the divergence of the velocity field at the interpolated locations (i.e., non-conservative). Such interpolation schemes can induce non-physical clustering of markers when strong velocity gradients are present (Journal of Computational Physics 166:218-252, 2001) and this may, eventually, result in empty grid cells, a serious numerical violation of the marker-in-cell method. To remedy this at low computational costs, Jenny et al. (Journal of Computational Physics 166:218-252, 2001) and Meyer and Jenny (Proceedings in Applied Mathematics and Mechanics 4:466-467, 2004) proposed a simple, conservative velocity interpolation scheme for 2-D staggered grids, while Wang et al. (Geochemistry, Geophysics, Geosystems 16(6):2015-2023, 2015) extended the formulation to 3-D finite element methods. Here, we adapt this formulation for 3-D staggered grids (correction interpolation) and we report on the quality of various velocity interpolation methods for 2-D and 3-D staggered grids. We test the interpolation schemes in combination with different advection schemes on incompressible Stokes problems with strong velocity gradients, which are discretized using a finite difference method. Our results suggest that a conservative formulation reduces the dispersion and clustering of markers, minimizing the need for unphysical marker control in geodynamic models.
Limit Stress Spline Models for GRP Composites | Ihueze | Nigerian ...
African Journals Online (AJOL)
Spline functions were established on the assumption of three intervals and fitting of quadratic and cubic splines to critical stress-strain responses data. Quadratic ... of data points. Spline model is therefore recommended as it evaluates the function at subintervals, eliminating the error associated with wide range interpolation.
B-spline based finite element method in one-dimensional discontinuous elastic wave propagation
Czech Academy of Sciences Publication Activity Database
Kolman, Radek; Okrouhlík, Miloslav; Berezovski, A.; Gabriel, Dušan; Kopačka, Ján; Plešek, Jiří
2017-01-01
Roč. 46, June (2017), s. 382-395 ISSN 0307-904X R&D Projects: GA ČR(CZ) GAP101/12/2315; GA MŠk(CZ) EF15_003/0000493 Grant - others:AV ČR(CZ) DAAD-16-12; AV ČR(CZ) ETA-15-03 Program:Bilaterální spolupráce; Bilaterální spolupráce Institutional support: RVO:61388998 Keywords : discontinuous elastic wave propagation * B-spline finite element method * isogeometric analysis * implicit and explicit time integration * dispersion * spurious oscillations Subject RIV: BI - Acoustics OBOR OECD: Acoustics Impact factor: 2.350, year: 2016 http://www.sciencedirect.com/science/article/pii/S0307904X17300835
Directory of Open Access Journals (Sweden)
Wei Zeng
2015-04-01
Conventional splines offer powerful means for modeling surfaces and volumes in three-dimensional Euclidean space. A one-dimensional quaternion spline has been applied for animation purpose, where the splines are defined to model a one-dimensional submanifold in the three-dimensional Lie group. Given two surfaces, all of the diffeomorphisms between them form an infinite dimensional manifold, the so-called diffeomorphism space. In this work, we propose a novel scheme to model finite dimensional submanifolds in the diffeomorphism space by generalizing conventional splines. According to quasiconformal geometry theorem, each diffeomorphism determines a Beltrami differential on the source surface. Inversely, the diffeomorphism is determined by its Beltrami differential with normalization conditions. Therefore, the diffeomorphism space has one-to-one correspondence to the space of a special differential form. The convex combination of Beltrami differentials is still a Beltrami differential. Therefore, the conventional spline scheme can be generalized to the Beltrami differential space and, consequently, to the diffeomorphism space. Our experiments demonstrate the efficiency and efficacy of diffeomorphism splines. The diffeomorphism spline has many potential applications, such as surface registration, tracking and animation.
An Energy Conservative Ray-Tracing Method With a Time Interpolation of the Force Field
Energy Technology Data Exchange (ETDEWEB)
Yao, Jin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-02-10
A new algorithm that constructs a force field continuously interpolated in time is proposed to resolve existing difficulties in numerical ray-tracing methods. The new method has improved accuracy, with the same degree of algebraic complexity as Kaiser's method.
Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics
Groves, Curtis; Ilie, Marcel; Schallhorn, Paul
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. The errors in CFD can be approximated via Richardson extrapolation, a method based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson extrapolation or another uncertainty method to approximate errors.
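The grid-refinement idea can be made concrete. Under the standard assumption that the discretization error behaves like C·h^p, three solutions on systematically refined grids with a constant refinement ratio r determine both the observed order p and an extrapolated value:

```python
import numpy as np

def richardson(f1, f2, f3, r):
    """Richardson extrapolation from coarse (f1), medium (f2) and fine (f3)
    grid solutions with constant refinement ratio r, assuming the error
    behaves like C * h**p."""
    p = np.log((f1 - f2) / (f2 - f3)) / np.log(r)   # observed order
    f_exact = f3 + (f3 - f2) / (r**p - 1.0)         # extrapolated value
    return p, f_exact

# Manufactured example: f(h) = 1 + 0.5*h**2 sampled at h = 0.4, 0.2, 0.1.
p, f_ext = richardson(1 + 0.5 * 0.4**2, 1 + 0.5 * 0.2**2, 1 + 0.5 * 0.1**2,
                      r=2.0)
# p → 2 (the manufactured order) and f_ext → 1 (the exact value).
```

On unstructured grids the three solutions do not live on nested nodes, which is why an interpolation scheme between the grids is needed before these formulas can be applied; that is the gap the study addresses.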
On analysis-based two-step interpolation methods for randomly sampled seismic data
Yang, Pengliang; Gao, Jinghuai; Chen, Wenchao
2013-02-01
Interpolating the missing traces of regularly or irregularly sampled seismic record is an exceedingly important issue in the geophysical community. Many modern acquisition and reconstruction methods are designed to exploit the transform domain sparsity of the few randomly recorded but informative seismic data using thresholding techniques. In this paper, to regularize randomly sampled seismic data, we introduce two accelerated, analysis-based two-step interpolation algorithms, the analysis-based FISTA (fast iterative shrinkage-thresholding algorithm) and the FPOCS (fast projection onto convex sets) algorithm from the IST (iterative shrinkage-thresholding) algorithm and the POCS (projection onto convex sets) algorithm. A MATLAB package is developed for the implementation of these thresholding-related interpolation methods. Based on this package, we compare the reconstruction performance of these algorithms, using synthetic and real seismic data. Combined with several thresholding strategies, the accelerated convergence of the proposed methods is also highlighted.
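A minimal 1-D POCS sketch of the two alternating projections, sparsity promotion by Fourier-domain thresholding and consistency with the observed samples. The trace, sampling rate and threshold schedule are illustrative assumptions, not the paper's package:

```python
import numpy as np

# Synthetic 1-D "trace" that is sparse in the Fourier domain (toy data).
rng = np.random.default_rng(3)
n = 256
t = np.arange(n)
x = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)

mask = rng.random(n) < 0.5        # only ~50% of samples were recorded
y = np.where(mask, x, 0.0)

# POCS: alternate a sparsity projection (hard Fourier thresholding) with a
# data-consistency projection (re-insert the observed samples).
x_rec = y.copy()
for _ in range(100):
    X = np.fft.fft(x_rec)
    thr = np.quantile(np.abs(X), 0.9)   # keep the largest 10% (assumed schedule)
    X[np.abs(X) < thr] = 0.0
    x_rec = np.real(np.fft.ifft(X))
    x_rec[mask] = x[mask]

err = np.linalg.norm(x_rec - x) / np.linalg.norm(x)  # relative error
```

The accelerated FPOCS and analysis-based FISTA variants in the paper keep this same projection pair but add momentum-style updates to speed up convergence.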
A study of interpolation method in diagnosis of carpal tunnel syndrome
Directory of Open Access Journals (Sweden)
Alireza Ashraf
2013-01-01
The low correlation between patients' signs and symptoms of carpal tunnel syndrome (CTS) and the results of electrodiagnostic tests makes the diagnosis challenging in mild cases. Interpolation is a mathematical method for finding the median nerve conduction velocity (NCV) exactly at the carpal tunnel site; therefore, it may be helpful in diagnosing CTS in patients with equivocal test results. Aim: The aim of this study is to evaluate the interpolation method as a CTS diagnostic test. Settings and Design: Patients with two or more clinical symptoms and signs of CTS in the median nerve territory, with 3.5 ms ≤ distal median sensory latency < 4.6 ms, who came to our electrodiagnostic clinics, as well as age-matched healthy control subjects, were recruited into the study. Materials and Methods: Median compound motor action potential and median sensory nerve action potential latencies were measured with a MEDLEC SYNERGY VIASIS electromyography system, and conduction velocities were calculated by both the routine method and the interpolation technique. Statistical Analysis Used: Chi-square and Student's t-test were used for comparing group differences. Cut-off points were calculated using the receiver operating characteristic curve. Results: A sensitivity of 88%, specificity of 67%, positive predictive value (PPV) of 70.8% and negative predictive value (NPV) of 84.7% were obtained for median motor NCV, and a sensitivity of 98.3%, specificity of 91.7%, PPV of 91.9% and NPV of 98.2% were obtained for median sensory NCV with the interpolation technique. Conclusions: The median motor interpolation method is a good technique, but it has lower sensitivity and specificity than the median sensory interpolation method.
Directory of Open Access Journals (Sweden)
Kresno Wikan Sadono
2016-12-01
Differential equations are widely used to describe a variety of phenomena in science and engineering. Many complex problems in everyday life can be modelled with differential equations and solved by numerical methods. One class of numerical methods, meshfree or meshless methods, has been developed recently; it requires no element generation on the domain. This research combines a meshless method, the radial basis point interpolation method (RPIM), with the discontinuous Galerkin method (DGM) for time integration; the combined method is called RPIM-DGM. RPIM-DGM is applied to the advection equation in one dimension. The RPIM uses the multiquadric function (MQ) as basis function, and the time integration is derived for both linear-DGM and quadratic-DGM. The simulation results show that the method approximates the analytical solution well; with more nodes and a smaller time increment, the numerical results become more accurate. The results also show that, for a given time increment and number of nodes, time integration with quadratic-DGM improves accuracy compared with linear-DGM. [Title: Numerical solution of the advection equation with the radial basis point interpolation method and the discontinuous Galerkin method for time integration]
The effect of interpolation methods in temperature and salinity trends in the Western Mediterranean
Directory of Open Access Journals (Sweden)
M. VARGAS-YANEZ
2012-04-01
Temperature and salinity data in the historical record are scarce and unevenly distributed in space and time, and the estimation of linear trends is sensitive to different factors. In the case of the Western Mediterranean, previous works have studied the sensitivity of these trends to the use of bathythermograph data, to the averaging methods, and to the way in which gaps in time series are dealt with. In this work, a new factor is analysed: the effect of data interpolation. Temperature and salinity time series are generated by averaging existing data over certain geographical areas and also by means of interpolation. Linear trends from both types of time series are compared. There are some differences between the two estimations for some layers and geographical areas, while in other cases the results are consistent. Results which depend neither on the use of interpolated or non-interpolated data nor on the data analysis methods can be considered robust. Results influenced by the interpolation process, or by the factors analysed in previous sensitivity tests, are not considered robust.
Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images
DEFF Research Database (Denmark)
Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen
2018-01-01
When performing a line scan using optical coherence tomography (OCT), the distance between the successive scan lines is often large compared to the resolution along each scan line. If two sets of such line scans are acquired orthogonal to each other, intensity values are known along the lines...... of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve...... scans, acquired such that the lines of the second scan are orthogonal to the first....
Implementation of D-Spline-Based Incremental Performance Parameter Estimation Method with ppOpen-AT
Directory of Open Access Journals (Sweden)
Teruo Tanaka
2014-01-01
Full Text Available In automatic performance tuning (AT), a primary aim is to optimize performance parameters so that they suit certain computational environments in ordinary mathematical libraries. An important issue for AT is to reduce the estimation time required for optimizing performance parameters. To reduce the estimation time, we previously proposed the Incremental Performance Parameter Estimation (IPPE) method. This method estimates optimal performance parameters by inserting suitable sampling points based on computational results for a fitting function. As the fitting function, we introduced d-Spline, which is highly adaptable and requires little estimation time. In this paper, we report the implementation of the IPPE method with ppOpen-AT, a scripting language (a set of directives) with features that reduce the workload of developers of mathematical libraries that have AT features. To confirm the effectiveness of the IPPE method for runtime-phase AT, we applied the method to sparse matrix–vector multiplication (SpMV), in which the block size of the blocked compressed row storage (BCRS) sparse matrix structure was used as the performance parameter. The experimental results show that the cost of AT using the IPPE method in the runtime phase was negligibly small. Moreover, using the obtained optimal value, the execution time of the SpMV library routine was reduced by 44% when comparing compressed row storage with BCRS (block size 8).
Directory of Open Access Journals (Sweden)
Shanshan He
2015-10-01
Full Text Available Piecewise linear (G01) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting, which approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be used directly for tool-path B-spline fitting because they have shortcomings such as numerical instability, the lack of a chord-error constraint, and no assurance of a usable result. Progressive and Iterative Approximation for Least Squares (LSPIA) is an efficient data-fitting method that solves the numerical instability problem. However, it does not consider chord errors and needs more work to guarantee results suitable for commercial applications. In this paper, we use the LSPIA method incorporating an energy term (ELSPIA) to avoid numerical instability, and lower chord errors by using a stretching energy term. We implement several algorithmic improvements, including (1) an improved technique for initial control point determination over the Dominant Point Method, (2) an algorithm that updates foot-point parameters as needed, (3) analysis of the degrees of freedom of control points so that new control points are inserted only when needed, and (4) chord-error refinement using a similar ELSPIA method with the above enhancements. The proposed approach can generate a shape-preserving B-spline curve. Experiments with data analysis and machining tests are presented for verification of quality and efficiency. Comparisons with other known solutions are included to evaluate the worthiness of the proposed solution.
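As a rough illustration of the progressive-iterative correction at the core of LSPIA, the sketch below fits a curve to noisy samples. For brevity it uses a Bézier (Bernstein) basis rather than a B-spline with a knot vector, and it omits the chord-error constraint and the energy term of ELSPIA; the step size mu = 2/λmax(AᵀA) is the standard choice that guarantees convergence to the least-squares solution. All data here are synthetic.

```python
import numpy as np
from math import comb

def bernstein_basis(n, t):
    """Collocation matrix of the degree-n Bernstein basis at parameters t."""
    t = np.asarray(t)
    return np.stack([comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)], axis=1)

def lspia(points, degree=5, iters=3000):
    """Progressive-iterative least-squares (LSPIA) fit of a Bezier curve."""
    m = len(points)
    t = np.linspace(0.0, 1.0, m)               # chord-length params would be better
    A = bernstein_basis(degree, t)             # m x (degree+1) collocation matrix
    mu = 2.0 / np.linalg.eigvalsh(A.T @ A).max()   # step size ensuring convergence
    P = points[np.linspace(0, m - 1, degree + 1).astype(int)]  # initial controls
    for _ in range(iters):
        P = P + mu * A.T @ (points - A @ P)    # progressive correction step
    return P, A

# noisy-free samples of a smooth planar path
ts = np.linspace(0, 1, 50)
pts = np.c_[ts, np.sin(2 * np.pi * ts)]
P, A = lspia(pts)
err = np.abs(A @ P - pts).max()                # max deviation of the fitted curve
```

Each iteration moves every control point along the pull of its residuals, so no linear system is solved explicitly, which is what makes LSPIA numerically stable for large point sets.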
Hittmeir, Sabine; Philipp, Anne; Seibert, Petra
2017-04-01
In discretised form, an extensive variable usually represents an integral over a 3-dimensional (x,y,z) grid cell. In the case of vertical fluxes, gridded values represent integrals over a horizontal (x,y) grid face. In meteorological models, fluxes (precipitation, turbulent fluxes, etc.) are usually written out as temporally integrated values, thus effectively forming 3D (x,y,t) integrals. Lagrangian transport models require interpolation of all relevant variables to the 4D location of each computational particle. Trivial interpolation algorithms usually implicitly assume the integral value to be a point value valid at the grid centre; if the integral were reconstructed from the interpolated point values, it would in general not be correct. If nonlinear interpolation methods are used, non-negativity cannot easily be ensured. This problem became obvious with respect to the interpolation of precipitation for the calculation of wet deposition in FLEXPART (http://flexpart.eu), which uses ECMWF model output or other gridded input data. The presently implemented method consists of a special preprocessing in the input preparation software and subsequent linear interpolation in the model. The interpolated values are positive, but the criterion of cell-wise conservation of the integral property is violated; the method is also not very accurate, as it smooths the field. A new interpolation algorithm was developed which introduces additional supporting grid points in each time interval, with linear interpolation applied in FLEXPART between them. It preserves the integral precipitation in each time interval, guarantees the continuity of the time series, and maintains non-negativity. The function values of the remapping algorithm at these subgrid points constitute the degrees of freedom, which can be prescribed in various ways. Combining the advantages of different approaches leads to a final algorithm respecting all the required conditions.
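The three constraints listed above (cell-wise conservation of the integral, continuity, non-negativity) can be illustrated with a deliberately simple 1-D remap. This is not the FLEXPART algorithm itself, only a sketch of the idea: each cell gets a midpoint node chosen so that the trapezoidal area reproduces the cell total, while shared boundary nodes (taken as the minimum of the adjacent cell means, which keeps everything non-negative) enforce continuity.

```python
import numpy as np

def conservative_remap(totals, h=1.0):
    """Given per-cell integrals `totals` over cells of width h, build
    piecewise-linear node values (one boundary node per cell edge plus one
    midpoint node per cell) that conserve each cell's integral, stay
    non-negative, and are continuous across cells."""
    means = np.asarray(totals, float) / h
    ext = np.r_[means[0], means, means[-1]]     # pad so edges have two neighbours
    b = np.minimum(ext[:-1], ext[1:])           # boundary values <= both neighbours
    m = (4 * means - b[:-1] - b[1:]) / 2        # midpoint chosen to match the area
    return b, m

totals = np.array([0.0, 2.0, 5.0, 1.0, 0.0])    # e.g. precipitation totals per interval
b, m = conservative_remap(totals)
# trapezoidal area of cell i: h * (b_i + 2*m_i + b_{i+1}) / 4
areas = (b[:-1] + 2 * m + b[1:]) / 4
```

Because b_i never exceeds the cell mean, the midpoint value m_i = (4·mean − b_i − b_{i+1})/2 is always ≥ mean ≥ 0, and the area identity holds exactly by construction.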
A multiparametric method of interpolation using WOA05 applied to anthropogenic CO2 in the Atlantic
Directory of Open Access Journals (Sweden)
Anton Velo
2010-11-01
Full Text Available This paper describes the development of a multiparametric interpolation method and its application to anthropogenic carbon (CANT) in the Atlantic, calculated by two estimation methods using the CARINA database. The multiparametric interpolation proposed uses potential temperature (θ), salinity, and conservative ‘NO’ and ‘PO’ as conservative parameters for the gridding, and the World Ocean Atlas (WOA05) as a reference for the grid structure and the indicated parameters. We thus complement CARINA data with the WOA05 database in an attempt to obtain better gridded values by keeping the physical-biogeochemical sea structures. The algorithms developed here also have the prerequisite of being simple and easy to implement. To test the improvements achieved, a comparison between the proposed multiparametric method and a pure spatial interpolation for an independent parameter (O2) was made. As an application case study, CANT estimations by two methods (φCTº and TrOCA) were performed on the CARINA database and then gridded by both interpolation methods (spatial and multiparametric). Finally, a calculation of CANT inventories for the whole Atlantic Ocean was performed with the gridded values, using ETOPO2v2 as the sea bottom. The inventories were between 55.1 and 55.2 Pg-C with the φCTº method and between 57.9 and 57.6 Pg-C with the TrOCA method.
Interpolation of meteorological data by kriging method for use in forestry
Directory of Open Access Journals (Sweden)
Ivetić Vladan
2010-01-01
Full Text Available Interpolation is a suitable method of computing the values of a spatial variable at locations where measurement is impossible, based on data obtained by measuring the same variable at predetermined locations (e.g. weather stations). In this paper, temperature and rainfall values at 39 weather stations in Serbia and neighbouring countries were interpolated for research purposes in forestry. The study results are presented in the form of an interactive map of Serbia, which allows fast and simple determination of the analyzed variable at any point within its territory, as illustrated by the example of 27 forest sites.
Modeling Seismic Wave Propagation Using Time-Dependent Cauchy-Navier Splines
Kammann, P.
2005-12-01
Our intention is the modeling of seismic wave propagation from displacement measurements by seismographs at the Earth's surface. The elastic behaviour of the Earth is usually described by the Cauchy-Navier equation. A system of fundamental solutions for the Fourier-transformed Cauchy-Navier equation is given by the Hansen vectors L, M and N. We apply an inverse Fourier transform to obtain an orthonormal function system depending on time and space. By means of this system we construct certain splines, which are then used for interpolating the given data. Compared to polynomial interpolation, splines have the advantage that they minimize some curvature measure and are, therefore, smoother. First, we test this method on a synthetic wave function. Afterwards, we apply it to realistic earthquake data. (P. Kammann, Modelling Seismic Wave Propagation Using Time-Dependent Cauchy-Navier Splines, Diploma Thesis, Geomathematics Group, Department of Mathematics, University of Kaiserslautern, 2005)
The Interpolation Method for Estimating the Above-Ground Biomass Using Terrestrial-Based Inventory
Directory of Open Access Journals (Sweden)
I Nengah Surati Jaya
2014-09-01
Full Text Available This paper examined several methods for interpolating biomass in logged-over dry land forest using terrestrial-based forest inventory in Labanan, East Kalimantan and Lamandau, Kotawaringin Barat, Central Kalimantan. The plot distances examined were 1,000−1,050 m for Labanan and 1,000−899 m for Lamandau. The main objective of this study was to find the interpolation method giving the most accurate prediction of the spatial distribution of forest biomass for dry land forest. Two main interpolation approaches were examined: (1) a deterministic approach using the IDW method and (2) a geostatistical approach using Kriging with spherical, circular, linear, exponential, and Gaussian models. The results at both sites consistently showed that the IDW method was better than the Kriging method for estimating the spatial distribution of biomass. Validation using a chi-square test showed that the IDW interpolation provided accurate biomass estimates. In terms of the percentage of mean deviation (MD(%)), the IDW with power parameter (p) of 2 gave relatively low values, i.e., only 15% for Labanan, East Kalimantan Province and 17% for Lamandau, Kotawaringin Barat, Central Kalimantan Province. In general, the IDW interpolation method provided better results than Kriging, which gave MD(%) of about 27% and 21% for the Lamandau and Labanan sites, respectively.
Keywords: deterministic, geostatistics, IDW, Kriging, above-ground biomass
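The IDW interpolation with power parameter p = 2 used in this comparison is a weighted average with weights proportional to the inverse squared distance. A minimal numpy sketch with synthetic plot coordinates and biomass values (all numbers are made up for illustration, not the study's data):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_tgt, p=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at each target point (power parameter p)."""
    d = np.linalg.norm(xy_tgt[:, None, :] - xy_obs[None, :, :], axis=2)
    w = (d + eps) ** -p                  # eps avoids division by zero at data sites
    return (w @ z_obs) / w.sum(axis=1)

rng = np.random.default_rng(0)
plots = rng.uniform(0, 10, (30, 2))                        # hypothetical plot coordinates (km)
biomass = 120 + 6 * plots[:, 0] + rng.normal(0, 5, 30)     # synthetic biomass (t/ha)
targets = np.array([[2.0, 5.0], [8.0, 5.0]])               # unsampled locations
pred = idw(plots, biomass, targets, p=2)
at_site = idw(plots, biomass, plots[:1], p=2)[0]           # reproduces the observation
```

Because the weights are non-negative and sum to one, IDW predictions always stay within the range of the observed values, which is one reason it behaves robustly on sparse inventory data.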
Wu, Wei; Tang, Xiao-Ping; Ma, Xue-Qing; Liu, Hong-Bin
2016-08-01
Soil temperature variability data provide valuable information for understanding land-surface ecosystem processes and climate change. This study developed and analyzed a spatial dataset of monthly mean soil temperature at a depth of 10 cm over a complex topographical region in southwestern China. The records were measured at 83 stations during the period 1961-2000. Nine approaches were compared for interpolating soil temperature. The accuracy indicators were root mean square error (RMSE), modelling efficiency (ME), and coefficient of residual mass (CRM). The results indicated that thin plate spline with latitude, longitude, and elevation gave the best performance, with RMSE varying between 0.425 and 0.592 °C, ME between 0.895 and 0.947, and CRM between -0.007 and 0.001. A spatial database was developed based on the best model. The dataset showed that the largest seasonal changes in soil temperature over the region occurred from autumn to winter. The northern and eastern areas, with hilly and low-middle mountain terrain, experienced larger seasonal changes.
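The three accuracy indicators named above are simple to compute; a sketch with made-up observed and simulated soil temperatures (values are illustrative only, and the CRM sign convention shown, positive for underestimation, is one common choice):

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error."""
    return np.sqrt(np.mean((sim - obs) ** 2))

def model_efficiency(obs, sim):
    """Nash-Sutcliffe modelling efficiency: 1 is perfect, <= 0 no better than the mean."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def crm(obs, sim):
    """Coefficient of residual mass: 0 = unbiased, > 0 = overall underestimation."""
    return (obs.sum() - sim.sum()) / obs.sum()

obs = np.array([12.1, 14.3, 18.7, 22.4, 25.0])   # hypothetical observed temperatures (degC)
sim = np.array([12.4, 14.0, 18.2, 22.9, 24.6])   # hypothetical model output (degC)
err_rmse = rmse(obs, sim)
eff = model_efficiency(obs, sim)
resid_mass = crm(obs, sim)
```

RMSE measures the typical error magnitude, ME the skill relative to predicting the mean, and CRM the overall bias, so the three together separate scatter from systematic offset.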
Application Of Prony's Method To Data On Viscoelasticity
Rodriguez, Pedro I.
1988-01-01
Prony coefficients found by computer program, without trial and error. Computational method and computer program developed to exploit full potential of Prony's interpolation method in analysis of experimental data on relaxation moduli of viscoelastic material. Prony interpolation curve chosen to give least-squares best fit to "B-spline" interpolation of experimental data.
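The essence of avoiding trial and error is that, once the relaxation times are fixed, the Prony series coefficients follow from a single linear least-squares solve. A minimal numpy sketch on synthetic relaxation-modulus data (this is the generic technique, not the program described above):

```python
import numpy as np

def prony_fit(t, E, taus):
    """Least-squares Prony series fit E(t) ~ E_inf + sum_i a_i * exp(-t / tau_i)
    for a fixed grid of relaxation times taus (a linear problem in E_inf, a_i)."""
    A = np.c_[np.ones_like(t), np.exp(-t[:, None] / taus)]   # design matrix
    coef, *_ = np.linalg.lstsq(A, E, rcond=None)
    return coef[0], coef[1:]            # long-term modulus, Prony weights

t = np.linspace(0, 10, 200)
# synthetic relaxation modulus with known terms (so the fit can be checked)
E = 1.0 + 2.0 * np.exp(-t / 0.5) + 0.7 * np.exp(-t / 3.0)
E_inf, a = prony_fit(t, E, taus=np.array([0.5, 3.0]))
```

Because the exponents are fixed, the problem is linear and `lstsq` recovers the coefficients directly; choosing the tau grid (e.g. one per decade of time) is the only modelling decision left.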
Quadrotor system identification using the multivariate multiplex b-spline
Visser, T.; De Visser, C.C.; Van Kampen, E.J.
2015-01-01
A novel method for aircraft system identification is presented that is based on a new multivariate spline type: the multivariate multiplex B-spline. The multivariate multiplex B-spline is a generalization of the recently introduced tensor-simplex B-spline. Multivariate multiplex splines obtain
Smooth Phase Interpolated Keying
Borah, Deva K.
2007-01-01
Smooth phase interpolated keying (SPIK) is an improved method of computing smooth phase-modulation waveforms for radio communication systems that convey digital information. SPIK is applicable to a variety of phase-shift-keying (PSK) modulation schemes, including quaternary PSK (QPSK), octonary PSK (8PSK), and 16PSK. In comparison with a related prior method, SPIK offers advantages of better performance and less complexity of implementation. In a PSK scheme, the underlying information waveform that one seeks to convey consists of discrete rectangular steps, but the spectral width of such a waveform is excessive for practical radio communication. Therefore, the problem is to smooth the step phase waveform in such a manner as to maintain power and bandwidth efficiency without incurring an unacceptably large error rate and without introducing undesired variations in the amplitude of the affected radio signal. Although the ideal constellation of PSK phasor points does not cause amplitude variations, filtering of the modulation waveform (in which, typically, a rectangular pulse is converted to a square-root raised cosine pulse) causes amplitude fluctuations. If a power-efficient nonlinear amplifier is used in the radio communication system, the fluctuating-amplitude signal can undergo significant spectral regrowth, thus compromising the bandwidth efficiency of the system. In the related prior method, one seeks to solve the problem in a procedure that comprises two major steps: phase-value generation and phase interpolation. SPIK follows the two-step approach of the related prior method, but the details of the steps are different. In the phase-value-generation step, the phase values of symbols in the PSK constellation are determined by a phase function that is said to be maximally smooth and that is chosen to minimize the spectral spread of the modulated signal. In this step, the constellation is divided into two groups by assigning, to information symbols, phase values
A spectral/B-spline method for the Navier-Stokes equations in unbounded domains
Dufresne, L
2003-01-01
The numerical method presented in this paper aims at solving the incompressible Navier-Stokes equations in unbounded domains. The problem is formulated in cylindrical coordinates and the method is based on a Galerkin approximation scheme that makes use of vector expansions that exactly satisfy the continuity constraint. More specifically, the divergence-free basis vector functions are constructed with Fourier expansions in the theta and z directions while mapped B-splines are used in the semi-infinite radial direction. Special care has been taken to account for the particular analytical behaviors at both end points r=0 and r-> infinity. A modal reduction algorithm has also been implemented in the azimuthal direction, allowing for a relaxation of the CFL constraint on the timestep size and a possibly significant reduction of the number of DOF. The time marching is carried out using a mixed quasi-third order scheme. Besides the advantages of a divergence-free formulation and a quasi-spectral convergence, the lo...
EXAMINATION OF THE VISUAL ACUITY ON THE LCD OPTOTYPE WITH WHOLE-LINE AND INTERPOLATION METHOD
Zajíček Tomáš; Veselý Petr; Synek Svatopluk
2012-01-01
The goal of this work is to show the possibility of using an LCD optotype in common optometric practice, and to compare two commonly used methods for measuring visual acuity. 69 respondents took part in the measurements. The respondents were divided into two groups according to the measured LCD optotype. Visual acuity was measured using the whole-line method on modified Snellen charts as well as the interpolation method on ETDRS charts. Measurements were taken on the S...
C2-rational cubic spline involving tension parameters
Indian Academy of Sciences (India)
preferred which preserves some of the characteristics of the function to be interpolated. In order to tackle such ... Shape preserving properties of the rational (cubic/quadratic) spline interpolant have been studied ... tension parameters which is used to interpolate the given monotonic data is described in. [6]. Shape preserving ...
Zhu, Zhongxia; Janunts, Edgar; Eppig, Timo; Sauer, Tomas; Langenbucher, Achim
2010-01-01
The aim of this study is to represent the corneal anterior surface by utilizing radius and height data extracted from a TMS-2N topographic system with three different mathematical approaches and to simulate the visual performance. An iteratively re-weighted bi-cubic spline method is introduced for the local representation of the corneal surface. For comparison, two standard global representation approaches are used: the general quadratic function and the higher-order Taylor polynomial approach. First, these methods were applied in simulations using three corneal models. Then, two real eyes were investigated: one with regular astigmatism, and one which had undergone refractive surgery. A ray-tracing program was developed to evaluate the imaging performance of these examples with each surface representation strategy at the best focus plane. A 6 mm pupil size was chosen for the simulation. The fitting error (deviation) of the presented methods was compared. The accuracy of the topography representation was worst with the quadratic function and best with the bi-cubic spline. The quadratic function cannot precisely describe the irregular corneal shape, and to achieve a sub-micron fitting precision the order selection of the Taylor polynomial must adapt to the corneal shape, whereas the bi-cubic spline shows more stable performance. Considering the visual performance, the more precise the corneal representation is, the worse the simulated visual performance is. The re-weighted bi-cubic spline method is a reasonable and stable method for representing the anterior corneal surface in measurements using a Placido-ring-pattern-based corneal topographer. Copyright © 2010. Published by Elsevier GmbH.
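A plain bicubic spline surface fit (not the paper's iteratively re-weighted variant) can be sketched with scipy on a synthetic spherical "corneal" height map; the grid, the 7.8 mm radius, and the sampling are all assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# synthetic corneal height samples on a regular grid (hypothetical geometry)
x = np.linspace(-3, 3, 25)                # mm
y = np.linspace(-3, 3, 25)                # mm
X, Y = np.meshgrid(x, y, indexing='ij')
R = 7.8                                   # assumed radius of curvature, mm
Z = R - np.sqrt(R**2 - X**2 - Y**2)       # spherical sagitta (height) map

# bicubic interpolating spline (kx = ky = 3, smoothing s = 0)
spl = RectBivariateSpline(x, y, Z, kx=3, ky=3, s=0)

# deviation from the analytic surface at an off-grid point
zx, zy = 1.234, -0.567
fit_err = abs(spl(zx, zy)[0, 0] - (R - np.sqrt(R**2 - zx**2 - zy**2)))
node_err = abs(spl(x[5], y[7])[0, 0] - Z[5, 7])   # exact at grid nodes (s = 0)
```

With s = 0 the spline interpolates the samples exactly, and for a smooth surface the off-grid error decays like the fourth power of the grid spacing, which is why bicubic splines fit regular topographer data so tightly.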
Directory of Open Access Journals (Sweden)
H. S. Shukla
2015-01-01
Full Text Available In this paper, a modified cubic B-spline differential quadrature method (MCB-DQM) is employed for the numerical simulation of the two-space-dimensional nonlinear sine-Gordon equation with appropriate initial and boundary conditions. The modified cubic B-spline works as a basis function in the differential quadrature method to compute the weighting coefficients. Accordingly, the two-dimensional sine-Gordon equation is transformed into a system of second-order ordinary differential equations (ODEs). The resultant system of ODEs is solved by employing an optimal five-stage, fourth-order strong stability preserving Runge–Kutta scheme (SSP-RK54). Numerical simulation is discussed for both damped and undamped cases. Computational results are found to be in good agreement with the exact solution and other numerical results available in the literature.
National Research Council Canada - National Science Library
Ingel, R
1999-01-01
... (which require derivative information) interpolation functions as well as standard Lagrangian functions, which can be linear, quadratic or cubic, have been used to construct the interpolation windows...
Analysis of velocity planning interpolation algorithm based on NURBS curve
Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng
2017-04-01
To reduce the interpolation time and the maximum interpolation error caused by velocity planning in NURBS (Non-Uniform Rational B-Spline) interpolation, this paper proposes a velocity-planning interpolation algorithm based on the NURBS curve. First, a second-order Taylor expansion is applied to the parameter of the NURBS curve representation. Then, the velocity plan is combined with the NURBS curve interpolation. Finally, simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
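The second-order Taylor parameter update at the heart of such interpolators, u_{k+1} = u_k + VT/|C'(u_k)| − (VT)²(C'·C'')/(2|C'(u_k)|⁴), can be sketched on a cubic Bézier segment standing in for a NURBS span (the control points and feedrate below are hypothetical):

```python
import numpy as np

# cubic Bezier segment as a stand-in for a NURBS span (hypothetical control points)
P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])

def C(u):    # position on the curve
    b = np.array([(1-u)**3, 3*u*(1-u)**2, 3*u**2*(1-u), u**3])
    return b @ P

def dC(u):   # first derivative w.r.t. the parameter
    b = np.array([-3*(1-u)**2, 3*(1-u)**2 - 6*u*(1-u), 6*u*(1-u) - 3*u**2, 3*u**2])
    return b @ P

def ddC(u):  # second derivative w.r.t. the parameter
    b = np.array([6*(1-u), -12*(1-u) + 6*u, 6*(1-u) - 12*u, 6*u])
    return b @ P

def taylor2_step(u, V, Ts):
    """Second-order Taylor update of the parameter for feedrate V and period Ts."""
    d1, d2 = dC(u), ddC(u)
    s1 = np.linalg.norm(d1)
    return u + V*Ts/s1 - (V*Ts)**2 * (d1 @ d2) / (2 * s1**4)

u, V, Ts = 0.0, 10.0, 1e-3        # feedrate 10 units/s, 1 ms interpolation period
pts = []
while u < 1.0:
    pts.append(C(u))
    u = taylor2_step(u, V, Ts)
chord = np.linalg.norm(np.diff(np.array(pts), axis=0), axis=1)
```

The second-order correction cancels the first-order error from the varying parametrisation speed, so the chord length per period stays very close to the commanded V·Ts, which is exactly the feedrate-fluctuation problem the interpolator is designed to suppress.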
Restoring the missing features of the corrupted speech using linear interpolation methods
Rassem, Taha H.; Makbol, Nasrin M.; Hasan, Ali Muttaleb; Zaki, Siti Syazni Mohd; Girija, P. N.
2017-10-01
One of the main challenges in Automatic Speech Recognition (ASR) is noise. The performance of an ASR system degrades significantly if the speech is corrupted by noise. In the spectrogram representation of a speech signal, deleting low Signal-to-Noise Ratio (SNR) elements leaves an incomplete spectrogram. In this case, the speech recognizer should either modify the spectrogram to restore the missing elements, or restore the missing elements caused by deleting low-SNR elements before performing the recognition. This can be done using different spectrogram reconstruction methods. In this paper, the geometrical spectrogram reconstruction methods suggested by some researchers are implemented as a toolbox. In these geometrical reconstruction methods, linear interpolation along time or along frequency is used to predict the missing elements between adjacent observed elements in the spectrogram. Moreover, a new linear interpolation method using time and frequency together is presented. The CMU Sphinx III software is used in the experiments to test the performance of the linear interpolation reconstruction method. The experiments are done under different conditions, such as different window lengths and different utterance lengths. A speech corpus consisting of 20 males and 20 females, each with two different utterances, is used in the experiments. As a result, 80% recognition accuracy is achieved at a 25% SNR.
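The along-time variant of the reconstruction can be sketched in a few lines: for each frequency bin, the missing (low-SNR) elements are filled by linear interpolation between the adjacent reliable elements. The spectrogram and reliability mask below are synthetic stand-ins; the paper also considers along-frequency and joint time-frequency variants.

```python
import numpy as np

def restore_along_time(spec, mask):
    """Linearly interpolate missing spectrogram bins along the time axis.
    spec: (freq, time) magnitudes; mask: True where the bin is reliable."""
    out = spec.copy()
    t = np.arange(spec.shape[1])
    for f in range(spec.shape[0]):
        known = mask[f]
        if known.any() and not known.all():
            # np.interp clamps to the nearest known value outside the range
            out[f, ~known] = np.interp(t[~known], t[known], spec[f, known])
    return out

rng = np.random.default_rng(1)
spec = np.abs(rng.normal(size=(4, 10)))            # toy magnitude spectrogram
mask = rng.uniform(size=spec.shape) > 0.3          # True = high-SNR (kept) bin
restored = restore_along_time(spec, mask)
```

Reliable bins are left untouched; only the deleted bins are replaced, which is the defining property of missing-feature reconstruction.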
An Online Method for Interpolating Linear Parametric Reduced-Order Models
Amsallem, David
2011-01-01
A two-step online method is proposed for interpolating projection-based linear parametric reduced-order models (ROMs) in order to construct a new ROM for a new set of parameter values. The first step of this method transforms each precomputed ROM into a consistent set of generalized coordinates. The second step interpolates the associated linear operators on their appropriate matrix manifold. Real-time performance is achieved by precomputing inner products between the reduced-order bases underlying the precomputed ROMs. The proposed method is illustrated by applications in mechanical and aeronautical engineering. In particular, its robustness is demonstrated by its ability to handle the case where the sampled parameter set values exhibit a mode veering phenomenon. © 2011 Society for Industrial and Applied Mathematics.
Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Zhao, Qi; Wang, Lei; Wan, Xiongfeng
2018-03-01
Existing wavefront reconstruction methods are usually low in resolution, restricted by the structural characteristics of the Shack-Hartmann wavefront sensor (SH WFS) and the deformable mirror (DM) in the adaptive optics (AO) system, resulting in weak homodyne detection efficiency for free space optical (FSO) communication. In order to solve this problem, we first validate the feasibility of using a liquid crystal spatial light modulator (LC SLM) in an AO system. Then, a wavefront reconstruction method based on wavelet fractal interpolation is proposed after a self-similarity analysis of the wavefront distortion caused by atmospheric turbulence. Fast wavelet decomposition is applied for a multiresolution analysis of the wavefront phase spectrum, during which soft-threshold denoising is carried out. The resolution of the estimated wavefront phase is then improved by fractal interpolation. Finally, fast wavelet reconstruction recovers the wavefront phase. Simulation results reflect the superiority of our method in homodyne detection: compared with the minimum variance estimation (MVE) method based on interpolation techniques, the proposed method obtains superior homodyne detection efficiency with lower computational complexity. Our research findings have theoretical significance for the design of coherent FSO communication systems.
An edge-directed interpolation method for fetal spine MR images.
Yu, Shaode; Zhang, Rui; Wu, Shibin; Hu, Jiani; Xie, Yaoqin
2013-10-10
Fetal spinal magnetic resonance imaging (MRI) is a prenatal routine for proper assessment of fetal development, especially when suspected spinal malformations occur while ultrasound fails to provide details. Limited by hardware, fetal spine MR images suffer from low resolution. High-resolution MR images can directly enhance readability and improve diagnosis accuracy. Image interpolation for higher resolution is required in clinical situations, while many methods fail to preserve edge structures. Edges carry heavy structural messages of objects in visual scenes, which doctors use to detect suspicions, classify malformations and make a correct diagnosis. Effective interpolation with well-preserved edge structures is still challenging. In this paper, we propose an edge-directed interpolation (EDI) method and apply it to a group of fetal spine MR images to evaluate its feasibility and performance. The method takes edge messages from a Canny edge detector to guide further pixel modification. First, low-resolution (LR) images of the fetal spine are interpolated into high-resolution (HR) images by the targeted factor with the bilinear method. Then edge information from the LR and HR images is put into a twofold strategy to sharpen or soften edge structures. Finally an HR image with well-preserved edge structures is generated. The HR images obtained from the proposed method are validated and compared with those from four other EDI methods. Performance is evaluated with six metrics, and subjective analysis of visual quality is based on regions of interest (ROI). All five EDI methods are able to generate HR images with enriched details. In the quantitative analysis of the six metrics, the proposed method outperforms the other four in signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), structure similarity index (SSIM), feature similarity index (FSIM) and mutual information (MI), with seconds-level time consumption (TC). Visual analysis of ROI shows that the proposed method maintains
Slemp, Wesley C. H.; Kapania, Rakesh K.; Tessler, Alexander
2010-01-01
Computation of interlaminar stresses from the higher-order shear and normal deformable beam theory and the refined zigzag theory was performed using the Sinc method based on Interpolation of Highest Derivative. The Sinc method based on Interpolation of Highest Derivative was proposed as an efficient method for determining through-the-thickness variations of interlaminar stresses from one- and two-dimensional analysis by integration of the equilibrium equations of three-dimensional elasticity. However, the use of traditional equivalent single layer theories often results in inaccuracies near the boundaries and when the lamina have extremely large differences in material properties. Interlaminar stresses in symmetric cross-ply laminated beams were obtained by solving the higher-order shear and normal deformable beam theory and the refined zigzag theory with the Sinc method based on Interpolation of Highest Derivative. Interlaminar stresses and bending stresses from the present approach were compared with a detailed finite element solution obtained by ABAQUS/Standard. The results illustrate the ease with which the Sinc method based on Interpolation of Highest Derivative can be used to obtain the through-the-thickness distributions of interlaminar stresses from the beam theories. Moreover, the results indicate that the refined zigzag theory is a substantial improvement over the Timoshenko beam theory due to the piecewise continuous displacement field which more accurately represents interlaminar discontinuities in the strain field. The higher-order shear and normal deformable beam theory more accurately captures the interlaminar stresses at the ends of the beam because it allows transverse normal strain. However, the continuous nature of the displacement field requires a large number of monomial terms before the interlaminar stresses are computed as accurately as the refined zigzag theory.
Directory of Open Access Journals (Sweden)
Ly, S.
2013-01-01
Full Text Available Watershed management and hydrological modeling require precipitation data, often measured using rain gauges or weather stations. Hydrological models often require a preliminary spatial interpolation as part of the modeling process. The success of spatial interpolation varies according to the type of model chosen, its mode of geographical management and the resolution used. The quality of a result is determined by the quality of the continuous spatial rainfall field, which follows from the interpolation method used. The objective of this article is to review the existing methods for interpolation of rainfall data that are usually required in hydrological modeling. We review the basis for the application of certain common methods and geostatistical approaches used in rainfall interpolation. Previous studies have highlighted the need for new research to investigate ways of improving the quality of rainfall data and, ultimately, the quality of hydrological modeling.
Interpolation of vector fields from human cardiac DT-MRI
Yang, F.; Zhu, Y. M.; Rapacchi, S.; Luo, J. H.; Robini, M.; Croisille, P.
2011-03-01
There has recently been increased interest in developing tensor data processing methods for the new medical imaging modality referred to as diffusion tensor magnetic resonance imaging (DT-MRI). This paper proposes a method for interpolating the primary vector fields from human cardiac DT-MRI, with the particularity of achieving interpolation and denoising simultaneously. The method consists of localizing the noise-corrupted vectors using the local statistical properties of vector fields, removing the noise-corrupted vectors and reconstructing them by using the thin plate spline (TPS) model, and finally applying global TPS interpolation to increase the resolution in the spatial domain. Experiments on 17 human hearts show that the proposed method allows us to obtain higher resolution while reducing noise, preserving details and improving direction coherence (DC) of vector fields as well as fiber tracking. Moreover, the proposed method perfectly reconstructs azimuth and elevation angle maps.
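The thin plate spline step of such a pipeline can be sketched in plain numpy: solve the standard TPS system (radial kernel r² log r plus an affine part) so the spline interpolates the reliable vectors exactly, then evaluate it anywhere to raise the resolution. The 2-D vector field below is synthetic, not DT-MRI data, and the outlier-removal stage is omitted.

```python
import numpy as np

def _phi(r):
    """Thin plate spline radial kernel phi(r) = r^2 log r (defined as 0 at r = 0)."""
    with np.errstate(divide='ignore', invalid='ignore'):
        return np.where(r > 0, r * r * np.log(r), 0.0)

def tps_fit(x, v):
    """Solve for kernel weights w and affine part c so s(x_i) = v_i exactly."""
    n = len(x)
    K = _phi(np.linalg.norm(x[:, None] - x[None, :], axis=2))
    P = np.c_[np.ones(n), x]                       # affine polynomial block
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.r_[v, np.zeros((3, v.shape[1]))]
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]

def tps_eval(x_new, x, w, c):
    """Evaluate s(x) = sum_i w_i phi(|x - x_i|) + c0 + c1*x + c2*y."""
    return _phi(np.linalg.norm(x_new[:, None] - x[None, :], axis=2)) @ w + c[0] + x_new @ c[1:]

rng = np.random.default_rng(3)
sites = rng.uniform(-1, 1, (80, 2))                                  # scattered sample sites
field = np.c_[np.cos(np.pi * sites[:, 0]), np.sin(np.pi * sites[:, 1])]  # toy vector field
w, c = tps_fit(sites, field)
recon = tps_eval(sites, sites, w, c)                                 # exact at the sites
new = rng.uniform(-0.8, 0.8, (20, 2))
err = np.abs(tps_eval(new, sites, w, c)
             - np.c_[np.cos(np.pi * new[:, 0]), np.sin(np.pi * new[:, 1])]).max()
```

The side conditions (the zero block and the P.T rows) make the spline minimise bending energy, which is the smoothness property the abstract relies on when increasing the spatial resolution.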
Interpolation of vector fields from human cardiac DT-MRI
International Nuclear Information System (INIS)
Yang, F; Zhu, Y M; Rapacchi, S; Robini, M; Croisille, P; Luo, J H
2011-01-01
There has recently been increased interest in developing tensor data processing methods for the new medical imaging modality referred to as diffusion tensor magnetic resonance imaging (DT-MRI). This paper proposes a method for interpolating the primary vector fields from human cardiac DT-MRI, with the particularity of achieving interpolation and denoising simultaneously. The method consists of localizing the noise-corrupted vectors using the local statistical properties of vector fields, removing the noise-corrupted vectors and reconstructing them by using the thin plate spline (TPS) model, and finally applying global TPS interpolation to increase the resolution in the spatial domain. Experiments on 17 human hearts show that the proposed method allows us to obtain higher resolution while reducing noise, preserving details and improving direction coherence (DC) of vector fields as well as fiber tracking. Moreover, the proposed method perfectly reconstructs azimuth and elevation angle maps.
Estimation of missing rainfall data using spatial interpolation and imputation methods
Radi, Noor Fadhilah Ahmad; Zakaria, Roslinazairimah; Azman, Muhammad Az-zuhri
2015-02-01
This study aims to estimate missing rainfall data by dividing the analysis into three percentages of missingness, namely 5%, 10% and 20%, to represent various cases of missing data. In practice, spatial interpolation methods are the first choice for estimating missing data. These methods include the normal ratio (NR), arithmetic average (AA), coefficient of correlation (CC) and inverse distance (ID) weighting methods. The methods consider the distances between the target and the neighbouring stations as well as the correlations between them. An alternative approach for handling missing data is imputation, the process of replacing missing data with substituted values. A once-common approach is single imputation, which allows parameter estimation. However, single imputation ignores the variability of the estimate, which leads to underestimated standard errors and confidence intervals. To overcome this problem, multiple imputation is used, in which each missing value is estimated with a distribution of imputations that reflects the uncertainty about the missing data. In this study, spatial interpolation methods and multiple imputation are compared for estimating missing rainfall data. The performance of the estimation methods is assessed using the similarity index (S-index), mean absolute error (MAE) and coefficient of correlation (R).
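Of the spatial methods listed, inverse distance weighting is the simplest to sketch. The helper below is a minimal illustration of the idea, assuming Euclidean station coordinates; names and the power parameter are illustrative, not from the paper:

```python
import numpy as np

def idw_estimate(target_xy, station_xy, station_values, power=2):
    """Inverse distance (ID) weighting estimate of a missing record at
    target_xy from neighbouring station values (a minimal sketch)."""
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    w = 1.0 / d**power                  # nearer stations weigh more
    return float(np.sum(w * station_values) / np.sum(w))

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
rainfall = np.array([12.0, 8.0, 10.0])          # mm, synthetic
estimate = idw_estimate(np.array([2.0, 2.0]), stations, rainfall)
```

The estimate is a convex combination of the station values, so it always lies within their range and is pulled toward the nearest station.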
Directory of Open Access Journals (Sweden)
Tsugio Fukuchi
2014-06-01
Full Text Available The finite difference method (FDM based on Cartesian coordinate systems can be applied to numerical analyses over any complex domain. A complex domain is usually taken to mean that the geometry of an immersed body in a fluid is complex; here, it means simply an analytical domain of arbitrary configuration. In such an approach, we do not need to treat the outer and inner boundaries differently in numerical calculations; both are treated in the same way. Using a method that adopts algebraic polynomial interpolations in the calculation around near-wall elements, all the calculations over irregular domains reduce to those over regular domains. Discretization of the space differential in the FDM is usually derived using the Taylor series expansion; however, if we use the polynomial interpolation systematically, exceptional advantages are gained in deriving high-order differences. In using the polynomial interpolations, we can numerically solve the Poisson equation freely over any complex domain. Only a particular type of partial differential equation, Poisson's equations, is treated; however, the arguments put forward have wider generality in numerical calculations using the FDM.
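The claim that polynomial interpolation yields difference formulas more systematically than Taylor expansion can be made concrete: weights for an m-th derivative follow from requiring exactness on all polynomials up to the interpolation degree. This is a standard construction, sketched here under that assumption (not the paper's code):

```python
import math
import numpy as np

def fd_weights(xs, x0, m):
    """Finite-difference weights for the m-th derivative at x0, obtained by
    differentiating the polynomial interpolant through the nodes xs:
    enforce sum_i w_i p(x_i) = p^(m)(x0) for p = 1, (x-x0), (x-x0)^2, ..."""
    n = len(xs)
    A = np.vander(xs - x0, n, increasing=True).T   # row m: (x_i - x0)^m
    b = np.zeros(n)
    b[m] = math.factorial(m)                       # m-th derivative of (x-x0)^m
    return np.linalg.solve(A, b)

# The classic 3-point second-derivative stencil is recovered automatically
w = fd_weights(np.array([-1.0, 0.0, 1.0]), 0.0, 2)   # -> [1, -2, 1]
```

The same call with more nodes, or with one-sided node sets near a wall, produces the high-order and boundary stencils the abstract alludes to.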
An efficient approach to numerical study of the coupled-BBM system with B-spline collocation method
Directory of Open Access Journals (Sweden)
Khalid Ali
2016-11-01
Full Text Available In the present paper, a numerical method is proposed for the numerical solution of a coupled-BBM system with appropriate initial and boundary conditions, using a collocation method with cubic trigonometric B-splines on uniform mesh points. The method is shown to be unconditionally stable using the von Neumann technique. To test accuracy, the error norms L2 and L∞ are computed. Furthermore, the interactions of two and three solitary waves are used to discuss the behavior of the solitary waves after the interaction. These results show that the technique introduced here is easy to apply. The nonlinear term is linearized.
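Collocation methods like the one above need the B-spline basis values at the mesh points. The Cox-de Boor recursion evaluates any B-spline basis function from the knot vector; this generic sketch uses polynomial (not trigonometric) B-splines for simplicity:

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: value at x of the i-th B-spline of order k
    (degree k-1) on knot vector t (a generic sketch, not the paper's code)."""
    if k == 1:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + k - 1] != t[i]:
        left = (x - t[i]) / (t[i + k - 1] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0
    if t[i + k] != t[i + 1]:
        right = (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

# Cubic (order 4) basis on a uniform knot vector
t = np.arange(12.0)
vals = [bspline_basis(i, 4, t, 5.5) for i in range(8)]
```

On the interior of the knot span the cubic basis functions are non-negative and sum to one, which is what makes the collocation matrix well behaved.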
Vnukov, A. A.; Shershnev, M. B.
2018-01-01
The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of algorithms and to study the relationship between system performance, algorithm execution time and the degree of parallelization of computations. Three methods of interpolation were studied, formalized and adapted to scale images. The result of the work is a program for scaling images by different methods. Comparison of the quality of scaling by different methods is given.
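As one representative of the interpolation-based scaling methods the abstract mentions (not necessarily one of the three studied), bilinear interpolation can be sketched with plain NumPy; the function name and test image are illustrative:

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Scale a 2-D grayscale image with bilinear interpolation (a sketch)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)        # source coordinates of output rows
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None]; fx = (xs - x0)[None, :]   # fractional offsets
    top = img[np.ix_(y0, x0)] * (1 - fx) + img[np.ix_(y0, x1)] * fx
    bot = img[np.ix_(y1, x0)] * (1 - fx) + img[np.ix_(y1, x1)] * fx
    return top * (1 - fy) + bot * fy

img = np.array([[0.0, 2.0], [4.0, 6.0]])
out = bilinear_resize(img, 3, 3)
```

Because every output row reuses the same horizontal weights, the row loop parallelizes trivially, which is the degree-of-parallelization angle studied in the abstract.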
Directory of Open Access Journals (Sweden)
Annalisa Di Piazza
2015-04-01
Full Text Available An exhaustive comparison among different spatial interpolation algorithms was carried out in order to derive annual and monthly air temperature maps for Sicily (Italy). Deterministic, data-driven and geostatistical algorithms were used, in some cases adding the elevation information and other physiographic variables to improve the performance of the interpolation techniques and the reconstruction of the air temperature field. The dataset is given by air temperature data coming from 84 stations spread around the island of Sicily. The interpolation algorithms were optimized by using a subset of the available dataset, while the remaining subset was used to validate the results in terms of the accuracy and bias of the estimates. Validation results indicate that univariate methods, which neglect the information from physiographic variables, yield the largest errors, while performance improves when such parameters are taken into account. The best results at the annual scale have been obtained using the ordinary kriging of residuals from linear regression and the artificial neural network algorithm, while, at the monthly scale, a Fourier-series algorithm has been used to downscale mean annual temperature to reproduce monthly values in the annual cycle.
Energy Technology Data Exchange (ETDEWEB)
Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y. [Universiti Teknologi Malaysia, Johor Bahru (Malaysia); Pullepu, Babuji [S R M University, Chennai (India)
2015-05-15
Free vibration of symmetric angle-ply laminated truncated conical shells is analyzed to determine the effects on the frequency parameter and angular frequencies of different boundary conditions, ply angles, material properties and other parameters. The governing equations of motion for the truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines, resulting in a generalized eigenvalue problem. Parametric studies have been made and discussed.
The interpolation method of stochastic functions and the stochastic variational principle
International Nuclear Information System (INIS)
Liu Xianbin; Chen Qiu
1993-01-01
-order stochastic finite element equations are not very reasonable. On the other hand, the Galerkin method is promising; along with this method, the projection principle has been advanced to solve stochastic operator equations. In the Galerkin method, by projecting the stochastic solution functions into a subspace of the solution function space, the treatment of the stochasticity of the structural physical properties and the loads is reasonable. However, the construction or selection of the subspace of the solution function space, which is a Hilbert space of stochastic functions, is difficult, and furthermore there is no reasonable rule to measure whether the approximation of the subspace to the solution function space is fine or not. In the stochastic finite element method, the discretization of stochastic functions in space and time is very important; the discretization schemes to date comprise the Local Average Theory, the Interpolation Method and the Orthogonal Expansion Method. Although the Local Average Theory has already been successful for stationary random fields, it is not suitable for non-stationary ones. For general stochastic functions, whether stationary or not, the interpolation method is available. In the present paper, the authors show that the error between the true solution function and its approximation, its projection in the subspace, depends continuously on the errors between the stochastic functions and their interpolation functions, and the latter depend continuously on the scales of the discrete elements; the conclusion follows that the interpolation method of stochastic functions is convergent. That is to say, the approximate solution functions converge to the true solution functions as the scales of the discrete elements become smaller and smaller.
Using the Interpolation method, a basis of subspace of the solution function space is constructed in this paper, and by means of combining the projection principle and
Mohammadi, Seyedeh Atefeh; Azadi, Majid; Rahmani, Morteza
2017-08-01
All numerical weather prediction (NWP) models inherently have substantial biases, especially in the forecast of near-surface weather variables. Statistical methods can be used to remove the systematic error based on historical bias data at observation stations. However, many end users of weather forecasts need bias corrected forecasts at locations that scarcely have any historical bias data. To circumvent this limitation, the bias of surface temperature forecasts on a regular grid covering Iran is removed by using the information available at observation stations in the vicinity of any given grid point. To this end, the running mean error method is first used to correct the forecasts at observation stations; then four interpolation methods, including inverse distance squared weighting with constant lapse rate (IDSW-CLR), Kriging with constant lapse rate (Kriging-CLR), gradient inverse distance squared with linear lapse rate (GIDS-LR), and gradient inverse distance squared with lapse rate determined by classification and regression tree (GIDS-CART), are employed to interpolate the bias corrected forecasts at neighboring observation stations to any given location. The results show that all four interpolation methods do reduce the model error significantly, but Kriging-CLR performs better than the other methods. For Kriging-CLR, root mean square error (RMSE) and mean absolute error (MAE) were decreased by 26% and 29%, respectively, as compared to the raw forecasts. It is also found that, after applying any of the proposed methods, unlike the raw forecasts, the bias corrected forecasts do not show spatial or temporal dependency.
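The running-mean-error step at a station can be sketched as follows: subtract the mean forecast error over the previous few days from today's forecast. This is our reading of the idea; the function name and window length are illustrative, not from the paper:

```python
import numpy as np

def running_mean_error_correct(forecasts, observations, window=7):
    """Remove systematic bias from a forecast series by subtracting the mean
    error over the previous `window` steps (a sketch of the idea)."""
    corrected = forecasts.astype(float).copy()
    for t in range(len(forecasts)):
        lo = max(0, t - window)
        if t > lo:                      # need at least one past error
            corrected[t] -= np.mean(forecasts[lo:t] - observations[lo:t])
    return corrected

obs = np.arange(10.0)                   # synthetic observations
fcst = obs + 2.0                        # forecasts with a constant +2 bias
corrected = running_mean_error_correct(fcst, obs, window=3)
```

For a constant bias the correction is exact after the first step; for a slowly drifting bias the window length trades responsiveness against noise.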
Galerkin method for unsplit 3-D Dirac equation using atomically/kinetically balanced B-spline basis
Energy Technology Data Exchange (ETDEWEB)
Fillion-Gourdeau, F., E-mail: filliong@CRM.UMontreal.ca [Université du Québec, INRS – Énergie, Matériaux et Télécommunications, Varennes, J3X 1S2 (Canada); Centre de Recherches Mathématiques, Université de Montréal, Montréal, H3T 1J4 (Canada); Lorin, E., E-mail: elorin@math.carleton.ca [School of Mathematics and Statistics, Carleton University, Ottawa, K1S 5B6 (Canada); Centre de Recherches Mathématiques, Université de Montréal, Montréal, H3T 1J4 (Canada); Bandrauk, A.D., E-mail: andre.bandrauk@usherbrooke.ca [Laboratoire de Chimie Théorique, Faculté des Sciences, Université de Sherbrooke, Sherbrooke, J1K 2R1 (Canada); Centre de Recherches Mathématiques, Université de Montréal, Montréal, H3T 1J4 (Canada)
2016-02-15
A Galerkin method is developed to solve the time-dependent Dirac equation in prolate spheroidal coordinates for an electron–molecular two-center system. The initial state is evaluated from a variational principle using a kinetic/atomic balanced basis, which allows for an efficient and accurate determination of the Dirac spectrum and eigenfunctions. B-spline basis functions are used to obtain high accuracy. This numerical method is used to compute the energy spectrum of the two-center problem and then the evolution of eigenstate wavefunctions in an external electromagnetic field.
Galerkin method for unsplit 3-D Dirac equation using atomically/kinetically balanced B-spline basis
International Nuclear Information System (INIS)
Fillion-Gourdeau, F.; Lorin, E.; Bandrauk, A.D.
2016-01-01
A Galerkin method is developed to solve the time-dependent Dirac equation in prolate spheroidal coordinates for an electron–molecular two-center system. The initial state is evaluated from a variational principle using a kinetic/atomic balanced basis, which allows for an efficient and accurate determination of the Dirac spectrum and eigenfunctions. B-spline basis functions are used to obtain high accuracy. This numerical method is used to compute the energy spectrum of the two-center problem and then the evolution of eigenstate wavefunctions in an external electromagnetic field.
Lagrange polynomial interpolation method applied in the calculation of the J(ξ,β) function
Energy Technology Data Exchange (ETDEWEB)
Fraga, Vinicius Munhoz; Palma, Daniel Artur Pinheiro [Centro Federal de Educacao Tecnologica de Quimica de Nilopolis, RJ (Brazil)]. E-mails: munhoz.vf@gmail.com; dpalma@cefeteq.br; Martinez, Aquilino Senra [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE) (COPPE). Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br
2008-07-01
The explicit dependence of the Doppler broadening function creates difficulties in obtaining an analytical expression for the J function. The objective of this paper is to present a method for the quick and accurate calculation of the J function, based on recent advances in the calculation of the Doppler broadening function and on a systematic analysis of its integrand. The proposed methodology, of a semi-analytical nature, uses the Lagrange polynomial interpolation method and the Frobenius formulation in the calculation of the Doppler broadening function. The results have proven satisfactory from the standpoint of accuracy and processing time. (author)
Lagrange polynomial interpolation method applied in the calculation of the J(ξ,β) function
International Nuclear Information System (INIS)
Fraga, Vinicius Munhoz; Palma, Daniel Artur Pinheiro; Martinez, Aquilino Senra
2008-01-01
The explicit dependence of the Doppler broadening function creates difficulties in obtaining an analytical expression for the J function. The objective of this paper is to present a method for the quick and accurate calculation of the J function, based on recent advances in the calculation of the Doppler broadening function and on a systematic analysis of its integrand. The proposed methodology, of a semi-analytical nature, uses the Lagrange polynomial interpolation method and the Frobenius formulation in the calculation of the Doppler broadening function. The results have proven satisfactory from the standpoint of accuracy and processing time. (author)
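The Lagrange interpolation at the heart of the method is standard; a minimal evaluation routine (generic, without the paper's Frobenius machinery) looks like this:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0                              # i-th cardinal basis polynomial
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# Interpolating x^2 through three nodes reproduces it exactly everywhere
value = lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 3.0)   # -> 9.0
```

Because the interpolant is exact for polynomials up to the node count minus one, accuracy hinges on choosing nodes where the integrand is well resolved, which is what the systematic analysis of the integrand supplies.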
A Galerkin Solution for Burgers' Equation Using Cubic B-Spline Finite Elements
Soliman, A. A.
2012-01-01
Numerical solutions for Burgers' equation based on the Galerkin method using cubic B-splines as both weight and interpolation functions are set up. It is shown that this method is capable of solving Burgers' equation accurately for values of viscosity ranging from very small to large. Three standard problems are used to validate the proposed algorithm. A linear stability analysis shows that a numerical scheme based on a Crank-Nicolson approximation in time is unconditionally stable.
A Galerkin Solution for Burgers' Equation Using Cubic B-Spline Finite Elements
Directory of Open Access Journals (Sweden)
A. A. Soliman
2012-01-01
Full Text Available Numerical solutions for Burgers' equation based on the Galerkin method using cubic B-splines as both weight and interpolation functions are set up. It is shown that this method is capable of solving Burgers' equation accurately for values of viscosity ranging from very small to large. Three standard problems are used to validate the proposed algorithm. A linear stability analysis shows that a numerical scheme based on a Crank-Nicolson approximation in time is unconditionally stable.
Directory of Open Access Journals (Sweden)
Jigisha U. Pandya
2012-01-01
Full Text Available The behavior of the non-linear coupled systems arising in axially symmetric hydromagnetic flow between two horizontal plates in a rotating system is analyzed, where the lower plate is a stretching sheet and the upper is a porous solid plate. The equations of conservation of mass and momentum are transformed into a system of coupled nonlinear ordinary differential equations. These equations for the velocity field are solved numerically by using the quintic spline collocation method. To solve the nonlinear equations, the quasilinearization technique has been used. The numerical results are presented through graphs, in which the effects of viscosity, through-flow, magnetic flux, and rotational velocity on the velocity field are discussed.
Conformal Interpolating Algorithm Based on Cubic NURBS in Aspheric Ultra-Precision Machining
International Nuclear Information System (INIS)
Li, C G; Zhang, Q R; Cao, C G; Zhao, S L
2006-01-01
Numerical control machining and on-line compensation for aspheric surfaces are key techniques in ultra-precision machining. In this paper, a conformal cubic NURBS interpolating curve is applied to fit the characteristic curve of an aspheric surface. Its algorithm and process are also proposed and simulated with Matlab 7.0 software. To evaluate the performance of the conformal cubic NURBS interpolation, we compare it with linear interpolation. The results verify that this method can ensure the smoothness of the interpolating spline curve and preserve the original shape characteristics. The surface quality interpolated by cubic NURBS is higher than that by lines. The algorithm helps increase the surface form precision of workpieces in ultra-precision machining
Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord
2017-04-01
This article establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus performed here by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solution of a linear algebraic system. The choice of reference temperatures (Tj) is then optimized so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin, Tmax]. The performance of these kernel reconstruction methods is then assessed against previous temperature interpolation methods by testing them on isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
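The L2-optimal coefficients reduce to a Gram (normal-equation) system: project the target kernel onto the span of the reference kernels. The sketch below uses Gaussian stand-ins for the Doppler kernels, so the temperatures and widths are purely illustrative:

```python
import numpy as np

# Stand-in "kernels" at reference temperatures: Gaussians whose width grows with T
x = np.linspace(-10.0, 10.0, 2001)

def kernel(T):
    return np.exp(-x**2 / (2 * T)) / np.sqrt(2 * np.pi * T)

T_refs = [1.0, 4.0]
K = np.array([kernel(T) for T in T_refs])   # rows: reference kernels
target = kernel(2.5)                        # kernel at the query temperature

# L2-optimal linear combination: solve the Gram system G a = b
G = K @ K.T
b = K @ target
a = np.linalg.solve(G, b)
approx = a @ K
```

Since the projection minimizes the L2 error over all coefficient choices, its error is never worse than approximating by zero; the article's contribution is then choosing the (Tj) so that this residual stays small in the L∞ sense across the whole range.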
A new method for reducing DNL in nuclear ADCs using an interpolation technique
International Nuclear Information System (INIS)
Vaidya, P.P.; Gopalakrishnan, K.R.; Pethe, V.A.; Anjaneyulu, T.
1986-01-01
The paper describes a new method for reducing the DNL associated with nuclear ADCs. The method, named the ''interpolation technique'', is utilized to derive the quantisation steps corresponding to the last n bits of the digital code by dividing the quantisation steps due to the higher significant bits of the DAC using a chain of resistors. Using comparators, these quantisation steps are compared with the analog voltage to be digitized, which is applied as a voltage shift at both ends of this chain. The output states of the comparators define the n-bit code. The errors due to offset voltages and bias currents of the comparators are statistically neutralized by changing the polarity of the quantisation steps as well as the polarity of the analog voltage (corresponding to the last n bits) for alternate A/D conversions. The effect of averaging on the channel profile can be minimized. A 12-bit ADC was constructed using this technique which gives a DNL of less than ±1% over most of the channels for a conversion time of nearly 4.5 μs. Gatti's sliding scale technique can be implemented for further reduction of the DNL. The interpolation technique has a promising potential of improving the resolution of existing 12-bit ADCs to 16 bits without degrading the percentage DNL significantly. (orig.)
International Nuclear Information System (INIS)
Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng
2017-01-01
The B-spline curve has been widely used in the reconstruction of measurement data. Error-bounded reconstruction of sampling points can be achieved by knot addition method (KAM) based B-spline curve fitting. In KAM, the selection pattern of the initial knot vector determines the number of knots ultimately necessary. This paper provides a novel initial knot selection method to condense the knot vector required for error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features, which include the chord length (arc length) and bending degree (curvature) contained in the discrete sampling points. Firstly, the sampling points are fitted into an approximate B-spline curve Gs with an intensively uniform knot vector to substitute for the description of the features of the sampling points. The feature integral of Gs is built as a monotone increasing function in analytic form. Then, the initial knots are selected according to constant increments of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and a measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of knots ultimately required. (paper)
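Inverting a monotone feature integral at constant increments can be sketched with chord length alone (the paper's integral also weights by curvature, so this is a simplified stand-in with illustrative names):

```python
import numpy as np

def knots_by_feature_integral(points, n_knots):
    """Place initial knots at equal increments of the cumulative chord length,
    a simplified stand-in for the paper's feature integral."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    F = np.concatenate([[0.0], np.cumsum(seg)])     # monotone feature integral
    targets = np.linspace(0.0, F[-1], n_knots)      # constant increments
    u = np.linspace(0.0, 1.0, len(points))          # parameter of each sample
    return np.interp(targets, F, u)                 # invert F by interpolation

pts = np.column_stack([np.linspace(0.0, 1.0, 11), np.zeros(11)])
knots = knots_by_feature_integral(pts, 5)
```

Where the curve accumulates feature (longer or more bent segments), F rises faster and knots cluster; on the uniform straight-line data above they come out evenly spaced.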
Denisenko, M. V.; Klenov, N. V.; Satanin, A. M.
2018-01-01
In this article the dynamics of qubit states is investigated based on the solution of the time-dependent Schrödinger equation. Using the Magnus method we obtain an explicit interpolation representation for the propagator, which allows us to find the wave function at an arbitrary time. To illustrate the effectiveness of the approach, the level populations of a single qubit and of two coupled qubits have been calculated by applying the Magnus propagator, and the results have been compared with the numerical solution of the Schrödinger equation. As a measure of the approximation of the wave function, we calculate the fidelity, which indicates how close the exact and approximate evolution operators are when acting on the initial state. We discuss the possibility of extending the developed methods to multi-qubit systems, where high-speed methods for calculating the evolution operators are particularly relevant.
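A first-order Magnus propagator for a single qubit can be sketched in a few lines; this is a minimal illustration of the Magnus idea (hbar = 1), not the paper's higher-order interpolation representation, and the drive parameters are illustrative:

```python
import numpy as np

def magnus1_propagator(H_of_t, t0, t1, steps=200):
    """First-order Magnus propagator U = exp(-i * integral of H(t) dt).
    The integral uses the midpoint rule; the matrix exponential of the
    Hermitian result is taken via eigendecomposition."""
    dt = (t1 - t0) / steps
    A = sum(H_of_t(t0 + (i + 0.5) * dt) for i in range(steps)) * dt
    w, V = np.linalg.eigh(A)                     # A is Hermitian
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

# Constant Rabi drive H = (Omega/2) * sigma_x with Omega = pi/2 over t in [0, 1]
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
U = magnus1_propagator(lambda t: (np.pi / 4) * sigma_x, 0.0, 1.0)
p_excited = abs(U[1, 0])**2          # population transferred from |0> to |1>
```

For a time-independent Hamiltonian the first Magnus term is already exact, so the computed transfer matches the analytic sin²(Ω t/2) = 0.5.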
Roy, Subrata P.
2014-01-28
The method of moments with interpolative closure (MOMIC) for soot formation and growth provides a detailed modeling framework maintaining a good balance in generality, accuracy, robustness, and computational efficiency. This study presents several computational issues in the development and implementation of the MOMIC-based soot modeling for direct numerical simulations (DNS). The issues of concern include a wide dynamic range of numbers, choice of normalization, high effective Schmidt number of soot particles, and realizability of the soot particle size distribution function (PSDF). These problems are not unique to DNS, but they are often exacerbated by the high-order numerical schemes used in DNS. Four specific issues are discussed in this article: the treatment of soot diffusion, choice of interpolation scheme for MOMIC, an approach to deal with strongly oxidizing environments, and realizability of the PSDF. General, robust, and stable approaches are sought to address these issues, minimizing the use of ad hoc treatments such as clipping. The solutions proposed and demonstrated here are being applied to generate new physical insight into complex turbulence-chemistry-soot-radiation interactions in turbulent reacting flows using DNS. © 2014 Copyright Taylor and Francis Group, LLC.
C2-rational cubic spline involving tension parameters
Indian Academy of Sciences (India)
In the present paper, a piecewise rational cubic spline function involving tension parameters is considered which produces a monotonic interpolant to a given monotonic data set. It is observed that under certain conditions the interpolant preserves the convexity property of the data set. The existence and uniqueness of a ...
Color management with a hammer: the B-spline fitter
Bell, Ian E.; Liu, Bonny H. P.
2003-01-01
To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
Dahiya, Sumita; Mittal, Ramesh Chandra
2017-07-01
This paper employs a differential quadrature scheme for solving non-linear partial differential equations. The differential quadrature method (DQM), along with a modified cubic B-spline basis, has been adopted to deal with a three-dimensional non-linear Brusselator system, an enzyme kinetics problem of Michaelis-Menten type and Burgers' equation. The method has been tested efficiently on three-dimensional equations. A simple algorithm and minimal computational effort are two of the major achievements of the scheme. Moreover, this methodology produces numerical solutions not only at the knot points but also at every point in the domain under consideration. Stability analysis has been done. The scheme provides convergent approximate solutions and handles different cases, and it is particularly beneficial for higher dimensional non-linear PDEs with irregularities in initial data or initial-boundary conditions that are discontinuous in nature, because of its capability of damping spurious oscillations induced by high frequency components of solutions.
Calibration of Pyrometers by Using Extrapolation and Interpolation Methods at NIM
Lu, X.; Yuan, Z.; Wang, J.; Bai, C.; Wang, T.; Dong, W.
2018-01-01
High-temperature fixed points (HTFPs) have been thoroughly investigated, and the performance of variable temperature blackbodies (VTBBs) has also improved rapidly. These two are beginning to be used in the calibration of pyrometers; however, tungsten strip lamps (STSL) still play a role in the dissemination of the high-temperature scale in China. International Temperature Scale of 1990 values of the HTFPs and the lamps were assigned with a primary standard pyrometer (PSP) and were traced to the primary standard of the high-temperature scale at the National Institute of Metrology. In this paper, two pyrometers calibrated by using extrapolation and interpolation methods are reported. The calibration values were compared against the STSL values and the PSP values on the HTBB, and their uncertainties are calculated as well. Because the stability of the HTFPs is better than that of the lamps, the calibration chains based on the lamps are starting to be replaced by HTFPs and VTBBs in China.
Identification method for digital image forgery and filtering region through interpolation.
Hwang, Min Gu; Har, Dong Hwan
2014-09-01
Because of the rapidly increasing use of digital composite images, recent studies have sought to identify digital forgery and filtering regions. This research has shown that interpolation, which is used to edit digital images, is an effective clue for analyzing digital images for composite regions. Interpolation is widely used to adjust the size of the image of a composite target, making the composite image seem natural by rotating or deforming it. As a result, many algorithms have been developed to identify composite regions by detecting a trace of interpolation. However, many limitations have been found in the detection maps developed to identify composite regions. In this study, we analyze the pixel patterns of non-interpolation and interpolation regions. We propose a detection map algorithm to separate the two regions. To identify composite regions, we have developed an improved algorithm using a minimum filter, a Laplacian operation and a maximum filter. Finally, filtering regions that used the interpolation operation are analyzed using the proposed algorithm. © 2014 American Academy of Forensic Sciences.
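A minimum -> Laplacian -> maximum filter chain like the one described can be sketched with plain NumPy. This is our reading of the pipeline, not the authors' exact algorithm; the 3x3 window size and the Laplacian kernel are assumptions:

```python
import numpy as np

def filt3(img, func):
    """Apply a 3x3 sliding-window reduction; edge pixels are left unchanged."""
    out = img.astype(float).copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = func(img[y - 1:y + 2, x - 1:x + 2])
    return out

def detection_map(img):
    """Sketch of a min -> Laplacian -> max pipeline: the minimum filter
    suppresses isolated bright noise, the Laplacian responds to the local
    pixel variation that interpolation smooths away, and the maximum filter
    consolidates the response into contiguous regions."""
    m = filt3(img, np.min)
    lap = np.abs(filt3(m, lambda w: 8 * w[1, 1] - (w.sum() - w[1, 1])))
    return filt3(lap, np.max)

flat = np.full((8, 8), 5.0)       # perfectly smooth patch: zero interior response
dm = detection_map(flat)
```

Smoothly interpolated regions yield a weak Laplacian response while untouched camera noise yields a strong one, which is the contrast a detection map thresholds.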
International Nuclear Information System (INIS)
Reyes Lopez, Y.; Yervilla Herrera, H.; Viamontes Esquivel, A.; Recarey Morfa, C. A.
2009-01-01
In the following paper we develop a new method to interpolate large volumes of scattered data, focused mainly on the results of applying mesh-free methods, point methods and particle methods. In it, we use local radial basis functions as the interpolating functions. We also use an octree as the data structure that accelerates the localization of the data that influence the interpolated value at a new point, speeding up the application of scientific visualization techniques to generate images from the large data volumes arising from the application of mesh-free, point and particle methods in the resolution of diverse physical-mathematical models. As an example, the results obtained after applying this method using the local interpolation functions of Shepard are shown. (Author) 22 refs
Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang
2018-02-20
We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" that exists in the time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of peak position of the cross-correlation and, therefore, improve the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. The strain of 3 μϵ within the spatial resolution of 1 cm at the position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 μϵ.
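One common way to refine the peak position of a sampled cross-correlation, complementary to the zero-padding interpolation described above, is three-point parabolic interpolation around the discrete maximum. A minimal Python sketch with invented data (not the paper's OFDR signals):

```python
# Three-point parabolic interpolation of a correlation peak.
# Given samples y[k-1], y[k], y[k+1] around the discrete maximum k,
# fit a parabola and return the sub-sample position of its vertex.
def parabolic_peak(y, k):
    a, b, c = y[k - 1], y[k], y[k + 1]
    denom = a - 2.0 * b + c
    if denom == 0.0:            # flat triple: no refinement possible
        return float(k)
    delta = 0.5 * (a - c) / denom   # vertex offset in [-0.5, 0.5]
    return k + delta

# A correlation-like sequence whose true peak lies between samples 2 and 3.
corr = [0.1, 0.6, 0.9, 0.9, 0.6, 0.1]
peak = parabolic_peak(corr, 2)
```

Because the sequence is symmetric about 2.5, the refined peak lands exactly midway between the two equal samples.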
A node-based smoothed point interpolation method for dynamic analysis of rotating flexible beams
Du, C. F.; Zhang, D. G.; Li, L.; Liu, G. R.
2017-10-01
We propose a mesh-free method, called the node-based smoothed point interpolation method (NS-PIM), for dynamic analysis of rotating beams. A gradient smoothing technique is used, and the consistency requirements on the displacement functions are further weakened. In static problems, beams with three types of boundary conditions are analyzed, and the results are compared with the exact solutions, which shows the effectiveness of the method and that it provides an upper-bound solution for the deflection; in other words, the NS-PIM softens the system. The NS-PIM is then extended to a rigid-flexible coupled system dynamics problem: a rotating flexible cantilever beam that accounts for both transverse and longitudinal deformations. The rigid-flexible coupled dynamic equations of the system are derived using Lagrange's equations of the second kind. Simulation results of the NS-PIM are compared with those obtained using the finite element method (FEM) and the assumed-mode method. It is found that, compared with FEM, the NS-PIM is more robust against ill-conditioning under the same calculation conditions.
Stein, A.
1991-01-01
The theory and practical application of techniques of statistical interpolation are studied in this thesis, and new developments in multivariate spatial interpolation and the design of sampling plans are discussed. Several applications to studies in soil science are
Occlusion-Aware View Interpolation
Directory of Open Access Journals (Sweden)
Ince Serdar
2008-01-01
Full Text Available Abstract View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in the presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in the view to be computed are classified in terms of their visibility in the input images. Then, the disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, the luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.
Dai, K. Y.; Liu, G. R.; Lim, K. M.; Han, X.; Du, S. Y.
A meshfree model is presented for the static and dynamic analyses of functionally graded material (FGM) plates based on the radial point interpolation method (PIM). In the present method, the mid-plane of an FGM plate is represented by a set of distributed nodes while the material properties in its thickness direction are computed analytically to take into account their continuous variations from one surface to another. Several examples are successfully analyzed for static deflections, natural frequencies and dynamic responses of FGM plates with different volume fraction exponents and boundary conditions. The convergence rate and accuracy are studied and compared with the finite element method (FEM). The effects of the constituent fraction exponent on static deflection as well as natural frequency are also investigated in detail using different FGM models. Based on the current material gradient, it is found that as the volume fraction exponent increases, the mechanical characteristics of the FGM plate approach those of the pure metal plate blended in the FGM.
National Research Council Canada - National Science Library
Ingel, R
1999-01-01
.... Projection operators are employed for the model reduction or condensation process. Interpolation is then introduced over a user defined frequency window, which can have real and imaginary boundaries and be quite large. Hermitian...
Synthesis of freeform refractive surfaces forming various radiation patterns using interpolation
Voznesenskaya, Anna; Mazur, Iana; Krizskiy, Pavel
2017-09-01
Optical freeform surfaces are very popular today in such fields as lighting systems, sensors, photovoltaic concentrators, and others. Such surfaces make it possible to obtain systems of a new quality with fewer optical components while ensuring high consumer characteristics: small size, low weight, and high optical transmittance. This article presents methods for synthesizing a refractive surface for a given source and radiation patterns of various shapes, using computer simulation and cubic spline interpolation.
Directory of Open Access Journals (Sweden)
Hosein Ghaffarzadeh
Full Text Available Abstract This paper investigates the numerical modeling of flexural wave propagation in Euler-Bernoulli beams using the Hermite-type radial point interpolation method (HRPIM) within a damage-quantification approach. As a meshfree technique, HRPIM employs radial basis functions (RBFs) and their derivatives for shape-function construction. The performance of the multiquadric (MQ) RBF in assessing the reflection ratio was evaluated, and HRPIM signals were compared with the theoretical and finite element responses. The results show that MQ is a suitable RBF for HRPIM and wave propagation, although the range of proper shape parameters is notable. The number of field nodes is the main parameter for accurate wave-propagation modeling using HRPIM, and the size of the support domain should stay below an upper bound in order to prevent large errors. With regard to the number of quadrature points, the minimum number needed for a stable solution is adequate; adding more points in the damage region does not necessarily lead to more accurate responses. It is concluded that pure HRPIM, without any polynomial terms, is acceptable, and that a few polynomial terms improve the accuracy, whereas more terms make the problem unstable and inaccurate.
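The multiquadric RBF interpolation underlying HRPIM shape functions can be sketched in one dimension. This is an illustrative toy with invented nodes, solving the small dense system by Gaussian elimination; it is not the paper's beam model, and the shape parameter c = 1 is arbitrary.

```python
# Multiquadric (MQ) radial basis interpolation in 1-D.
# The interpolant must reproduce the nodal values exactly.
def solve(A, b):
    """Gaussian elimination with partial pivoting (A and b modified in place)."""
    n = len(A)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            fac = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= fac * A[col][c]
            b[r] -= fac * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def mq(r, c=1.0):
    """Multiquadric kernel phi(r) = sqrt(r^2 + c^2)."""
    return (r * r + c * c) ** 0.5

def rbf_interpolant(xs, ys):
    A = [[mq(xi - xj) for xj in xs] for xi in xs]
    w = solve(A, ys[:])
    return lambda x: sum(wi * mq(x - xi) for wi, xi in zip(w, xs))

interp = rbf_interpolant([0.0, 1.0, 2.0], [1.0, 3.0, 2.0])
```

By construction the weights solve the collocation system, so `interp` passes through all three data points.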
Directory of Open Access Journals (Sweden)
S. Safarpour
2017-09-01
Full Text Available Air pollution is a growing problem arising from domestic heating, high-density vehicle traffic, electricity production, and expanding commercial and industrial activities, all increasing in parallel with urban population. Monitoring and forecasting of air-quality parameters are important because of their health impact. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light-extinction coefficient over a vertical atmospheric column of unit cross-section, which represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. Seasonal AOD values at 550 nm derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard NASA's Terra satellite, for the 10-year period 2000-2010, were used to test seven different spatial interpolation methods in the present study. The accuracy of the estimations was assessed through visual analysis as well as independent validation based on basic statistics, such as the root mean square error (RMSE) and the correlation coefficient. Based on the RMSE and R values of predictions made using measured values from 2000 to 2010, radial basis functions (RBFs) yielded the best results for spring, summer, and winter, and ordinary kriging yielded the best results for fall.
Safarpour, S.; Abdullah, K.; Lim, H. S.; Dadras, M.
2017-09-01
Air pollution is a growing problem arising from domestic heating, high-density vehicle traffic, electricity production, and expanding commercial and industrial activities, all increasing in parallel with urban population. Monitoring and forecasting of air-quality parameters are important because of their health impact. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light-extinction coefficient over a vertical atmospheric column of unit cross-section, which represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. Seasonal AOD values at 550 nm derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard NASA's Terra satellite, for the 10-year period 2000-2010, were used to test seven different spatial interpolation methods in the present study. The accuracy of the estimations was assessed through visual analysis as well as independent validation based on basic statistics, such as the root mean square error (RMSE) and the correlation coefficient. Based on the RMSE and R values of predictions made using measured values from 2000 to 2010, radial basis functions (RBFs) yielded the best results for spring, summer, and winter, and ordinary kriging yielded the best results for fall.
A vertical parallax reduction method for stereoscopic video based on adaptive interpolation
Li, Qingyu; Zhao, Yan
2016-10-01
The existence of vertical parallax is the main factor affecting the viewing comfort of stereo video, and visual fatigue is gaining widespread attention with the booming development of 3D stereoscopic video technology. In order to reduce the vertical parallax without affecting the horizontal parallax, a self-adaptive image-scaling algorithm is proposed that uses edge characteristics efficiently. In addition, the nonlinear Levenberg-Marquardt (L-M) algorithm is introduced in this paper to improve the accuracy of the transformation matrix. Firstly, the self-adaptive scaling algorithm is used to interpolate the original image; when a pixel of the original image lies in an edge area, the interpolation is performed adaptively along the edge direction obtained by the Sobel operator. Secondly, the SIFT algorithm, which is invariant to scaling, rotation, and affine transformation, is used to detect matching feature points in the binocular images. Then, according to the coordinates of the matching points, the transformation matrix that reduces the vertical parallax is calculated using the Levenberg-Marquardt algorithm. Finally, the transformation matrix is applied to the target image to calculate the new coordinates of each pixel of the view image. The experimental results show that, compared with a method that reduces vertical parallax by using a linear algorithm to calculate a two-dimensional projective transformation, the proposed method clearly improves the vertical-parallax reduction while keeping the horizontal parallax more similar to that of the original image. Therefore, the proposed method optimizes the vertical-parallax reduction.
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation for differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and a Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
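The trapezoidal discretization-based estimator can be sketched for the simplest ODE, dx/dt = -theta*x. This toy Python example uses noise-free states and an invented theta; in real use the states would first be estimated from noisy data with a penalized spline, as the abstract describes.

```python
# Trapezoidal-rule estimating equation for the ODE  dx/dt = -theta * x.
# Plug the state values into the trapezoidal formula
#   x[i+1] - x[i] = -theta * h/2 * (x[i] + x[i+1])
# and solve for theta by ordinary least squares.
import math

theta_true, h, n = 0.5, 0.1, 50
t = [i * h for i in range(n)]
x = [math.exp(-theta_true * ti) for ti in t]   # "observed" states

d = [x[i + 1] - x[i] for i in range(n - 1)]            # left-hand sides
s = [0.5 * h * (x[i] + x[i + 1]) for i in range(n - 1)]  # regressors
theta_hat = -sum(di * si for di, si in zip(d, s)) / sum(si * si for si in s)
```

The trapezoidal rule's O(h^2) truncation error keeps the estimate within about one part in 10^4 of the true value here, illustrating the accuracy/cost balance the article discusses.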
Xu, A; Zhang, Y; Ran, T; Liu, H; Lu, S; Xu, J; Xiong, X; Jiang, Y; Lu, T; Chen, Y
2015-01-01
Bruton's tyrosine kinase (BTK) plays a crucial role in B-cell activation and development, and has emerged as a new molecular target for the treatment of autoimmune diseases and B-cell malignancies. In this study, two- and three-dimensional quantitative structure-activity relationship (2D and 3D-QSAR) analyses were performed on a series of pyridine and pyrimidine-based BTK inhibitors by means of genetic algorithm optimized multivariate adaptive regression spline (GA-MARS) and comparative molecular similarity index analysis (CoMSIA) methods. Here, we propose a modified MARS algorithm to develop 2D-QSAR models. The top ranked models showed satisfactory statistical results (2D-QSAR: Q(2) = 0.884, r(2) = 0.929, r(2)pred = 0.878; 3D-QSAR: q(2) = 0.616, r(2) = 0.987, r(2)pred = 0.905). Key descriptors selected by 2D-QSAR were in good agreement with the conclusions of 3D-QSAR, and the 3D-CoMSIA contour maps facilitated interpretation of the structure-activity relationship. A new molecular database was generated by molecular fragment replacement (MFR) and further evaluated with GA-MARS and CoMSIA prediction. Twenty-five pyridine and pyrimidine derivatives as novel potential BTK inhibitors were finally selected for further study. These results also demonstrated that our method can be a very efficient tool for the discovery of novel potent BTK inhibitors.
Spline fitting for multi-set data
International Nuclear Information System (INIS)
Zhou Hongmo; Liu Renqiu; Liu Tingjin
1987-01-01
A spline-fit method and program for multi-set data have been developed. Improvements add new capabilities: splines of any order as the basis, knot optimization, and accurate calculation of the error of the fitted value. The program has been used for practical evaluation of nuclear data
Interpolation functors and interpolation spaces
Brudnyi, Yu A
1991-01-01
The theory of interpolation spaces has its origin in the classical work of Riesz and Marcinkiewicz but had its first flowering in the years around 1960 with the pioneering work of Aronszajn, Calderón, Gagliardo, Krein, Lions and a few others. It is interesting to note that what originally triggered off this avalanche were concrete problems in the theory of elliptic boundary value problems related to the scale of Sobolev spaces. Later on, applications were found in many other areas of mathematics: harmonic analysis, approximation theory, theoretical numerical analysis, geometry of Banach spaces, nonlinear functional analysis, etc. Besides this the theory has a considerable internal beauty and must by now be regarded as an independent branch of analysis, with its own problems and methods. Further development in the 1970s and 1980s included the solution by the authors of this book of one of the outstanding questions in the theory of the real method, the K-divisibility problem. In a way, this book harvests the r...
Directory of Open Access Journals (Sweden)
Mahacine Amrani
2008-06-01
Full Text Available Several methods are currently used to optimize the edges and contours of geophysical data maps. A resistivity map was expected to allow the electrical-resistivity signal to be imaged in 2D for a Moroccan resistivity survey in the phosphate-mining domain. Anomalous zones of phosphate-deposit “disturbances” correspond to resistivity anomalies. The resistivity measurements were taken at 5151 discrete locations. Much geophysical spatial analysis requires a continuous data set, and this study is designed to create that surface. This paper identifies the best spatial interpolation method for creating continuous data from the Moroccan resistivity data of the phosphate “disturbance” zones. The approach has been used with much success for reducing noise in stationary geophysical data such as resistivity data, and the interpolation-filtering methods applied to modeling the surface of phosphate “disturbances” were found to be consistently useful.
Density Deconvolution With EPI Splines
2015-09-01
[Front-matter excerpt: the report's table of contents lists a comparison of deconvolution methods, high-fidelity and low-fidelity simulation output for a hydrofoil concept, epi-spline estimates, and notes on computation time.]
Material-Point Method Analysis of Bending in Elastic Beams
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2007-01-01
The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical...... cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...... that the smoother field representation using the cubic splines yields a physically more realistic behaviour for impact problems than the traditional linear interpolation....
Material-point Method Analysis of Bending in Elastic Beams
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical...... cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...... that the smoother field representation using the cubic splines yields a physically more realistic behaviour for impact problems than the traditional linear interpolation....
Error bounds for two even degree tridiagonal splines
Directory of Open Access Journals (Sweden)
Gary W. Howell
1990-01-01
Full Text Available We study a C^1 parabolic and a C^2 quartic spline which are determined by the solution of a tridiagonal system and which interpolate subinterval midpoints. In contrast to the cubic C^2 spline, both of these algorithms converge to any continuous function as the length of the largest subinterval goes to zero, regardless of mesh ratios. For parabolic splines, this convergence property was discovered by Marsden [1974]. The quartic spline introduced here achieves this convergence by choosing the second derivative to be zero at the breakpoints. Many of Marsden's bounds are substantially tightened here. We show that for functions with two or fewer continuous derivatives the quartic spline gives yet better bounds. Several of the bounds given here are optimal.
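The cubic C^2 spline used for contrast above is likewise determined by a tridiagonal system. A minimal Python sketch for uniform knots with natural end conditions, solved with the Thomas algorithm (illustrative only; the knot values are invented):

```python
# Natural cubic C^2 spline on uniform knots: solve the classical
# tridiagonal system for the knot second derivatives M.
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def natural_cubic_spline(y, h):
    """Return knot second derivatives M (natural ends: M[0] = M[-1] = 0)."""
    n = len(y) - 1
    # Interior equations: M[i-1] + 4 M[i] + M[i+1] = 6/h^2 (y[i-1] - 2 y[i] + y[i+1])
    rhs = [6.0 / (h * h) * (y[i - 1] - 2 * y[i] + y[i + 1]) for i in range(1, n)]
    m_inner = thomas_solve([1.0] * (n - 1), [4.0] * (n - 1), [1.0] * (n - 1), rhs)
    return [0.0] + m_inner + [0.0]

# A linear data set: every second derivative must come out zero.
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # y = 2x + 1 at x = 0..4, h = 1
M = natural_cubic_spline(ys, 1.0)
```

Linear data make every right-hand side vanish, so the solver must return all-zero second derivatives, a quick sanity check on the tridiagonal solve.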
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.
2015-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model׳s parameters and transformed according to a specific
The research on NURBS adaptive interpolation technology
Zhang, Wanjun; Gao, Shanping; Zhang, Sujia; Zhang, Feng
2017-04-01
NURBS interpolation faces several problems: long interpolation times, complicated calculations, and a step error along the NURBS curve that is not easily controlled. To address them, this paper proposes and simulates an adaptive interpolation algorithm for NURBS curves, in which the interpolator adaptively calculates the successive points (xi, yi, zi). Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems and that the algorithm is correct and consistent with NURBS curve interpolation requirements.
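The adaptive idea can be illustrated on a generic parametric curve: halve the parameter step until the chord error is within tolerance. A toy Python sketch in which a unit circle stands in for a NURBS curve (the tolerance and initial step are invented, and a real interpolator would also bound feedrate and acceleration):

```python
# Adaptive parameter stepping for a parametric-curve interpolator:
# each step du is halved until the mid-span chord error is within
# tolerance -- a simplified stand-in for NURBS adaptive interpolation.
import math

def curve(u):
    """Test curve: unit circle (an exactly NURBS-representable shape)."""
    return (math.cos(u), math.sin(u))

def chord_error(p0, p1, pm):
    """Distance from the curve midpoint pm to the chord p0-p1."""
    (x0, y0), (x1, y1), (xm, ym) = p0, p1, pm
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0.0:
        return 0.0
    return abs(dx * (ym - y0) - dy * (xm - x0)) / length

def adaptive_interpolate(u_end, tol=1e-4, du0=0.5):
    us, u = [0.0], 0.0
    while u_end - u > 1e-12:
        du = min(du0, u_end - u)
        while chord_error(curve(u), curve(u + du), curve(u + du / 2)) > tol:
            du /= 2.0               # refine until the step error is in spec
        u += du
        us.append(u)
    return us

params = adaptive_interpolate(math.pi / 2)
```

Every accepted step satisfies the chord-error bound by construction, which is exactly the "step error not easily controlled" issue the abstract targets.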
Fatigue Crack Detection at Gearbox Spline Component using Acoustic Emission Method
2014-10-02
analytical understanding of gearmesh stiffness change with the tooth crack (Chaari et al. 2009, Chen and Shao 2011). Debris monitoring does not require...that the AE method is not sensitive to gear wear while the method detects the tooth crack earlier than the vibration method. Typical parameters...11-22. Chaari, F., Fakhfakh, T. and Haddar, M. (2009). “Analytical Modelling of Spur Gear Tooth Crack and Influence on Gearmesh Stiffness
Feature displacement interpolation
DEFF Research Database (Denmark)
Nielsen, Mads; Andresen, Per Rønsholt
1998-01-01
Given a sparse set of feature matches, we want to compute an interpolated dense displacement map. The application may be stereo disparity computation, flow computation, or non-rigid medical registration. Estimation of missing image data may also be phrased in this framework. Since the features...... often are very sparse, the interpolation model becomes crucial. We show that a maximum likelihood estimation based on the covariance properties (Kriging) shows properties more expedient than methods such as Gaussian interpolation or Tikhonov regularization, also including scale......-selection. The computational complexities are identical. We apply the maximum likelihood interpolation to growth analysis of the mandibular bone. Here, the features used are the crest-lines of the object surface....
Calculating SPRT Interpolation Error
Filipe, E.; Gentil, S.; Lóio, I.; Bosma, R.; Peruzzi, A.
2018-02-01
Interpolation error is a major source of uncertainty in the calibration of standard platinum resistance thermometers (SPRTs) in the subranges of the International Temperature Scale of 1990 (ITS-90). This interpolation error arises because the interpolation equations prescribed by the ITS-90 cannot perfectly accommodate all of the SPRTs' natural variations in resistance-temperature behavior, which generates different forms of non-uniqueness. This paper investigates the type 3 non-uniqueness for fourteen SPRTs of five different manufacturers calibrated over the water-zinc subrange and demonstrates the use of the method of divided differences for calculating the interpolation error. The calculated maximum standard deviation of 0.25 mK (near 100°C) is similar to that observed in previous studies.
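The method of divided differences mentioned above can be sketched in Python. This is a toy check with f(x) = x^3 on invented nodes, not the SPRT calibration data: every third-order divided difference of x^3 equals 1 (the leading coefficient), so the error of the quadratic interpolant at x is exactly (x - x0)(x - x1)(x - x2) times that difference.

```python
# Newton divided differences and the interpolation error they quantify.
def divided_differences(xs, ys):
    """Return the full table; table[k][0] is f[x0, ..., xk]."""
    n = len(xs)
    table = [ys[:]]
    for order in range(1, n):
        prev = table[-1]
        row = [(prev[i + 1] - prev[i]) / (xs[i + order] - xs[i])
               for i in range(n - order)]
        table.append(row)
    return table

def newton_eval(xs, table, x):
    """Evaluate the Newton-form interpolant at x."""
    result, prod = 0.0, 1.0
    for order in range(len(xs)):
        result += table[order][0] * prod
        prod *= (x - xs[order])
    return result

f = lambda v: v ** 3
nodes = [0.0, 1.0, 2.0]
tab = divided_differences(nodes, [f(v) for v in nodes])
x = 1.5
err = f(x) - newton_eval(nodes, tab, x)
predicted = (x - 0.0) * (x - 1.0) * (x - 2.0)   # third divided diff of x**3 is 1
```

The measured and predicted errors agree exactly, which is the mechanism the paper exploits to quantify ITS-90 interpolation error.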
Spline Variational Theory for Composite Bolted Joints
National Research Council Canada - National Science Library
Iarve, E
1997-01-01
.... Two approaches were implemented. A conventional mesh overlay method in the crack region to satisfy the crack face boundary conditions and a novel spline basis partitioning method were compared...
Kaplan, A.; Smerdon, J. E.; Evans, M. N.
2010-12-01
Current-generation climate field reconstruction (CFR) methods, which are used to estimate, e.g., surface temperature values (t) on a predetermined grid from a synchronously available vector of proxy values (p), seek solutions assuming that a linear transform (B) connects deviations of these variables from their respective means t_m and p_m: t - t_m = B(p - p_m). The transform operator B here would be a standard linear regression matrix B = C_tp C_pp^(-1) (with C_tp and C_pp being the cross-covariance matrix of t and p and the covariance matrix of p, respectively) if only these matrices could be robustly calculated from the available data. As things usually stand, however, instrumental data sets of t available for computing its cross-covariance with p can never provide more than 100-150 annual samples. On the other hand, due to the relatively low signal-to-noise ratio of individual proxy records, the proxy assemblies used in global reconstructions normally include on the order of 100 records or more. Hence various methods for regularizing the inversion of C_pp are used: ridge regression, truncated total least squares, canonical correlation analysis, local regression, etc. Suppose, however, that the target climate field is Gaussian with a known covariance C: t ~ N(t_m, C), while the proxy vector is obtained from it by a known linear transform H (a "proxy forward model"), subject to a Gaussian error: p = Ht + e, e ~ N(0, R). In this case C_pp = H C H^T + R and C_tp = C H^T, so that the regression solution given above becomes an optimal interpolation (OI) solution t_hat = C H^T (H C H^T + R)^(-1) p with error covariance Q = C - C H^T (H C H^T + R)^(-1) H C. Moreover, the posterior distribution of t conditional on p is [t|p] ~ N(t_hat, Q). If available climate records were very long, the distinction between the sample regression estimate and the better-structured OI solution would be immaterial: the covariances estimated from the available sample would produce a result approaching the OI solution. However, under the reality
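In the scalar case the OI estimate t_hat = C H^T (H C H^T + R)^(-1) p reduces to a single weight. A toy Python sketch with invented numbers for C, H, and R (all matrices collapse to scalars here):

```python
# Scalar optimal interpolation:
#   t_hat = C*H / (H*H*C + R) * p,   Q = C - (C*H / (H*H*C + R)) * H * C
C, H, R = 4.0, 0.5, 1.0        # prior variance, proxy forward operator, noise variance
p = 1.2                        # observed proxy anomaly

gain = C * H / (H * H * C + R)  # the OI weight applied to the proxy
t_hat = gain * p                # posterior mean of the climate variable
Q = C - gain * H * C            # posterior (error) variance
```

With these numbers the gain is exactly 1, and the posterior variance Q = 2 is half the prior variance C = 4: the proxy observation has removed half of the prior uncertainty.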
Nonlinear registration using B-spline feature approximation and image similarity
Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il
2001-07-01
Warping methods are broadly classified into image-matching methods based on similar pixel-intensity distributions and feature-matching methods using distinct anatomical features. Feature-based methods may fail to match local variation between two images, although they match features well globally, while similarity-based methods can produce false matches corresponding to local minima of the underlying energy functions. To avoid the local-minimum problem, we propose a nonlinear deformable registration method that combines the global information of feature matching with the local information of image matching. To define the features, the gray matter and white matter of brain tissue are segmented by the fuzzy C-means (FCM) algorithm. A B-spline approximation technique is used for feature matching: a multi-resolution B-spline approximation method that modifies multilevel B-spline interpolation by locally changing the resolution of the control lattice in proportion to the distance between the features of the two images. Mutual information is used as the similarity measure, and the deformation fields are locally refined until the similarity is maximized. In tests on two 3D T1-weighted MRIs, this method maintained the accuracy of conventional image-matching methods without the local-minimum problem.
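B-spline basis functions, the building blocks of the multilevel approximation described above, can be evaluated with the Cox-de Boor recursion. A short Python sketch on a uniform integer knot vector invented for illustration (not the registration code itself):

```python
# Cox-de Boor recursion for B-spline basis functions.
def bspline_basis(i, k, u, knots):
    """Degree-k basis function N_{i,k}(u) on the given knot vector."""
    if k == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k] != knots[i]:
        left = ((u - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, u, knots))
    right = 0.0
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - u) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, u, knots))
    return left + right

# On a uniform knot vector, the cubic basis functions form a
# partition of unity in the valid interior span.
knots = list(range(8))          # knots 0, 1, ..., 7
u = 3.5                         # a point inside the span [3, 4)
total = sum(bspline_basis(i, 3, u, knots) for i in range(4))
```

The partition-of-unity property (the four overlapping cubic basis functions summing to 1) is what lets a B-spline control lattice represent a smooth deformation field.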
Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong
2016-01-01
Based on geo-statistical theory and the ArcGIS geo-statistical module, data from 30 groundwater-level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted, global polynomial, local polynomial, tension spline, ordinary kriging, simple kriging, and universal kriging interpolation) were used to interpolate the groundwater level between 2001 and 2013. Cross-validation, absolute error, and the coefficient of determination (R(2)) were applied to evaluate the accuracy of the different methods. The results show that the simple kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Because of changes in land use, the groundwater level also varies in time: the average decline rate between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas, so the decline rate of the groundwater level in residential, industrial, and river areas is relatively high, while the decrease in farmland area and the development of water-saving irrigation reduce agricultural water use, making the decline rate of the groundwater level in agricultural areas insignificant.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Energy Technology Data Exchange (ETDEWEB)
Baak, M., E-mail: max.baak@cern.ch [CERN, CH-1211 Geneva 23 (Switzerland); Gadatsch, S., E-mail: stefan.gadatsch@nikhef.nl [Nikhef, PO Box 41882, 1009 DB Amsterdam (Netherlands); Harrington, R. [School of Physics and Astronomy, University of Edinburgh, Mayfield Road, Edinburgh, EH9 3JZ, Scotland (United Kingdom); Verkerke, W. [Nikhef, PO Box 41882, 1009 DB Amsterdam (Netherlands)
2015-01-21
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties.
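In one dimension, the moment-morphing idea reduces to linearly interpolating each template's mean and width in the model parameter and shifting/scaling the templates to those target moments before combining them. A toy Python sketch with invented three-point "templates" (the actual method handles multi-dimensional histograms and non-linear parameter dependence):

```python
# 1-D sketch of moment morphing between two templates t0 (at m0) and
# t1 (at m1): interpolate mean and width linearly in m, transform each
# template to the target moments, and combine with weights (1-f) and f.
def moments(xs):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var ** 0.5

def morph(t0, t1, m0, m1, m):
    f = (m - m0) / (m1 - m0)                  # interpolation fraction
    mu0, s0 = moments(t0)
    mu1, s1 = moments(t1)
    mu = (1 - f) * mu0 + f * mu1              # linearly morphed mean
    s = (1 - f) * s0 + f * s1                 # linearly morphed width
    def transform(xs, mu_i, s_i):
        return [mu + (x - mu_i) * s / s_i for x in xs]
    return transform(t0, mu0, s0), transform(t1, mu1, s1), f

t0 = [-1.0, 0.0, 1.0]          # template at m0: mean 0
t1 = [3.0, 5.0, 7.0]           # template at m1: mean 5, twice the width
a, b, f = morph(t0, t1, 0.0, 1.0, 0.5)
```

After transformation both templates carry exactly the interpolated moments, so any weighted combination of them does too.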
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
International Nuclear Information System (INIS)
Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.
2015-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, Max; Harrington, Robert; Verkerke, Wouter
2014-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, Max; Harrington, Robert; Verkerke, Wouter
2015-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.
APLIKASI SPLINE ESTIMATOR TERBOBOT (Application of the Weighted Spline Estimator)
Directory of Open Access Journals (Sweden)
I Nyoman Budiantara
2001-01-01
Full Text Available We consider the nonparametric regression model Zj = X(tj) + ej, j = 1, 2, …, n, where X(tj) is the regression curve. The random errors ej are independent and normally distributed with zero mean and variance s2/bj, bj > 0. The estimate of X is obtained by minimizing a weighted least squares criterion; the solution of this optimization is a weighted polynomial spline. We then give an application of the weighted spline estimator in nonparametric regression. [Abstract in Bahasa Indonesia, translated:] Given the nonparametric regression model Zj = X(tj) + ej, j = 1, 2, …, n, with X(tj) the regression curve and ej random errors assumed to be normally distributed with zero mean and variance s2/bj, bj > 0. The estimate of the regression curve X minimizing a weighted penalized least squares criterion is a weighted natural polynomial spline. An application of the weighted spline estimator in nonparametric regression is then given. Keywords: weighted spline, nonparametric regression, penalized least squares.
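The weighting scheme above (variance s²/bj, hence weight bj for observation j) can be illustrated with an ordinary weighted polynomial fit. This is a sketch of the weighting idea only, not the paper's penalized natural spline estimator; the test curve, noise scale, and polynomial degree are assumptions.

```python
import numpy as np

# Errors have variance s^2 / b_j, so observation j gets weight b_j.
# np.polyfit expects w_j proportional to 1/sigma_j, i.e. sqrt(b_j).
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
b = rng.uniform(0.5, 4.0, t.size)               # known precision factors b_j > 0
x_true = np.sin(2 * np.pi * t)                  # regression curve X(t) (assumed)
z = x_true + rng.normal(0, 1, t.size) * 0.3 / np.sqrt(b)
coef = np.polyfit(t, z, deg=5, w=np.sqrt(b))    # weighted polynomial fit
fit = np.polyval(coef, t)
```

Replacing the global polynomial basis with a spline basis and adding a roughness penalty turns this into the weighted (penalized) spline estimator the abstract describes.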
Guan, Jinge; Ren, Wei; Cheng, Yaoyu
2018-04-01
We demonstrate an efficient polarization-difference imaging system in turbid conditions by using the Stokes vector of light. The interaction of scattered light with the polarizer is analyzed by the Stokes-Mueller formalism. An interpolation method is proposed to replace the mechanical rotation of the polarization axis of the analyzer theoretically, and its performance is verified by the experiment at different turbidity levels. We show that compared with direct imaging, the Stokes vector based imaging method can effectively reduce the effect of light scattering and enhance the image contrast.
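The interpolation that replaces mechanical rotation of the analyzer rests on the standard Stokes relation I(θ) = (S0 + S1 cos 2θ + S2 sin 2θ)/2 for an ideal linear polarizer. A minimal sketch (not the authors' implementation; the example intensities are invented):

```python
import numpy as np

def stokes_from_measurements(I0, I45, I90):
    """Linear Stokes components from intensities behind an ideal analyzer
    at 0, 45 and 90 degrees."""
    S0 = I0 + I90
    S1 = I0 - I90
    S2 = 2 * I45 - S0
    return S0, S1, S2

def intensity_at(theta, S0, S1, S2):
    """Intensity behind an ideal analyzer at an arbitrary angle theta (radians):
    I(theta) = (S0 + S1*cos(2*theta) + S2*sin(2*theta)) / 2."""
    return 0.5 * (S0 + S1 * np.cos(2 * theta) + S2 * np.sin(2 * theta))

# example: partially linearly polarized light (made-up intensities)
S = stokes_from_measurements(I0=0.8, I45=0.5, I90=0.2)
I30 = intensity_at(np.deg2rad(30), *S)          # ≈ 0.65
```

Three fixed measurements thus suffice to synthesize the analyzer response at any angle, which is the basis for replacing the mechanical rotation.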
Nikkhoo, M.; Goli, M.; Najafi Alamdari, M.; Naeimi, M.
2008-05-01
Two-dimensional spectral analysis of spatial data is a handy tool for representing such data in the frequency domain in all earth science disciplines. Conventional methods of spectral analysis (i.e., the Fourier method) need an equally spaced data set, which is rarely available in practice. In this paper we develop the least-squares spectral analysis in two dimensions. The method was originally proposed by Vaníček (1969) for one-dimensional irregularly sampled data. Applying it to two-dimensional irregularly sampled data likewise yields an undistorted power spectrum, since no interpolation is involved in its computation. As a case study, the two-dimensional spectrum of GPS leveling data over North America was computed, as well as the spectrum of geoid undulations derived from the EIGEN-GL04C model. The derived spectra of the two data sets agree very well at long and medium wavelengths. We also computed the power spectrum of gravity anomalies over North America and compared it with spectra derived from interpolated data (by different methods); the spectral behavior of these methods is discussed as well.
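The one-dimensional version of least-squares spectral analysis is short enough to sketch: at each trial frequency, fit a sine-cosine pair by least squares and record the fraction of variance it explains. This is a simplified illustration (the irregular sampling times, noise level, and frequency grid are assumptions; the paper's two-dimensional extension is not shown):

```python
import numpy as np

def lssa_power(t, y, freqs):
    """Least-squares spectrum for irregularly sampled data: at each trial
    frequency, fit a*cos + b*sin by least squares and report the fraction
    of the series' variance it explains (no interpolation involved)."""
    y = y - y.mean()
    power = []
    for f in freqs:
        A = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        power.append(np.sum((A @ coef) ** 2) / np.sum(y ** 2))
    return np.array(power)

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 10, 200))            # irregular sampling times
y = np.sin(2 * np.pi * 1.5 * t) + 0.2 * rng.normal(size=t.size)
freqs = np.linspace(0.1, 3.0, 60)
f_peak = freqs[np.argmax(lssa_power(t, y, freqs))]   # peak near 1.5
```

Because the fit is done directly on the irregular samples, no gridding or interpolation step distorts the spectrum.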
Directory of Open Access Journals (Sweden)
Iván P. Vizcaíno
2016-11-01
Full Text Available Water quality measurements in rivers are usually performed at intervals of days or months in monitoring campaigns, but little attention has been paid to the spatial and temporal dynamics of those measurements. In this work, we scrutinize the scope and limitations of state-of-the-art interpolation methods for estimating the spatio-temporal dynamics (in terms of trends and structures) of relevant variables for water quality analysis usually taken in rivers. We used a database with several water quality measurements from the Machángara River between 2002 and 2007, provided by the Metropolitan Water Company of Quito, Ecuador. This database included flow rate, temperature, dissolved oxygen, and chemical oxygen demand, among other variables. For visualization purposes, the absence of measurements at intermediate points in an irregular spatio-temporal sampling grid was addressed by using deterministic and stochastic interpolation methods, namely Delaunay and k-Nearest Neighbors (kNN). For data-driven model diagnosis, a study on model residuals was performed comparing the quality of both kinds of approaches. For most variables, a value of k = 15 yielded a reasonable fit when the Mahalanobis distance was used, and water quality variables were better estimated when using the kNN method. The use of kNN provided the best estimation capabilities in the presence of atypical samples in the spatio-temporal dynamics in terms of leave-one-out absolute error, and it was better for variables with slow-changing dynamics, though its performance degraded for variables with fast-changing dynamics. The proposed spatio-temporal analysis of water quality measurements provides relevant and useful information, hence complementing and extending the classical statistical analysis in this field, and our results encourage the search for new methods overcoming the limitations of the analyzed traditional interpolators.
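A kNN interpolator of the kind compared above fits in a few lines. This sketch uses plain Euclidean distance and a synthetic slow-changing field; the study itself used k = 15 with the Mahalanobis distance on real river measurements, so both the distance metric and the data here are assumptions.

```python
import numpy as np

def knn_interpolate(pts, vals, query, k=15):
    """kNN interpolation: inverse-distance-weighted average of the k nearest
    samples. (Euclidean distance here; the study used Mahalanobis.)"""
    d = np.linalg.norm(pts - query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[idx], 1e-12)
    return np.sum(w * vals[idx]) / np.sum(w)

# synthetic spatio-temporal samples: (position-along-river, time) pairs
rng = np.random.default_rng(3)
pts = rng.uniform(0, 1, size=(100, 2))
vals = 10 + 5 * pts[:, 0] - 2 * pts[:, 1]       # a slow-changing "water quality" field
est = knn_interpolate(pts, vals, np.array([0.5, 0.5]), k=15)
```

Swapping the Euclidean norm for a Mahalanobis distance only requires whitening the coordinates with the inverse covariance of the sampling grid before computing `d`.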
Biomechanical Analysis with Cubic Spline Functions
McLaughlin, Thomas M.; And Others
1977-01-01
Results of experimentation suggest that the cubic spline is a convenient and consistent method for providing an accurate description of displacement-time data and for obtaining the corresponding time derivatives. (MJB)
Directory of Open Access Journals (Sweden)
Mauricio Castro Franco
2017-07-01
Full Text Available Context: Interpolating soil properties at field scale in the Colombian piedmont eastern plains is challenging due to the highly complex and variable nature of some processes and the effects of soil, land use, and management. While interpolation techniques are being adapted to include auxiliary information on these effects, soil data are often difficult to predict using conventional techniques of spatial interpolation. Method: In this paper, we evaluated and compared six spatial interpolation techniques: Inverse Distance Weighting (IDW), Spline, Ordinary Kriging (KO), Universal Kriging (UK), Cokriging (Ckg), and Residual Maximum Likelihood-Empirical Best Linear Unbiased Predictor (REML-EBLUP), using a conditioned Latin hypercube as the sampling strategy. The ancillary information used in Ckg and REML-EBLUP consisted of indices calculated from a digital elevation model (DEM). The random forest algorithm was used to select the most important terrain index for each soil property. Cross-validation error metrics were used to validate the interpolations. Results: The results support the underlying assumption that the conditioned Latin hypercube adequately captured the full distribution of the ancillary variables under the conditions of the Colombian piedmont eastern plains. They also suggest that Ckg and REML-EBLUP perform best in the prediction of most of the evaluated soil properties. Conclusions: Mixed interpolation techniques using auxiliary soil information and terrain indices provided a significant improvement in the prediction of soil properties in comparison with the other techniques.
DEFF Research Database (Denmark)
Senjean, Bruno; Knecht, Stefan; Jensen, Hans Jørgen Aa
2015-01-01
Gross-Oliveira-Kohn density-functional theory (GOK-DFT) for ensembles is, in principle, very attractive but has been hard to use in practice. A practical model based on GOK-DFT for the calculation of electronic excitation energies is discussed. The model relies on two modifications of GOK-DFT: use...... equiensembles. It is shown that such a linear interpolation method (LIM) can be rationalized and that it effectively introduces weight dependence effects. As proof of principle, the LIM has been applied to He, Be, and H2 in both equilibrium and stretched geometries as well as the stretched HeH+ molecule. Very...
Comparison of interpolation methods for sparse data: Application to wind and concentration fields
International Nuclear Information System (INIS)
Goodin, W.R.; McRae, G.J.; Seinfeld, J.H.
1979-01-01
In order to produce gridded fields of pollutant concentration data and surface wind data for use in an air quality model, a number of techniques for interpolating sparse data values are compared. The techniques are compared using three data sets. One is an idealized concentration distribution for which the exact solution is known, the second is a potential flow field, while the third consists of surface ozone concentrations measured in the Los Angeles Basin on a particular day. The results of the study indicate that fitting a second-degree polynomial to each subregion (triangle) in the plane, with each data point weighted according to its distance from the subregion, provides a good compromise between accuracy and computational cost
Philipp, Anne; Hittmeir, Sabine; Seibert, Petra
2017-04-01
The distribution of wet deposition as calculated with Lagrangian particle transport models, e.g. FLEXPART (http://flexpart.eu), is governed by the intensity distribution of precipitation. Usually, meteorological input is taken from Eulerian weather forecast models, e.g. those of ECMWF (European Centre for Medium-Range Weather Forecasts), which provide precipitation data integrated over the time between two output times and over a grid cell. Simple linear interpolation would implicitly treat this integral value as a point value valid at the grid centre and in the middle of the time interval, and thus underestimate peaks and overestimate local minima. In FLEXPART, a separate pre-processor is used to extract the meteorological input data from the ECMWF archive and prepare them for use in the model. Currently, a relatively simple method prepares the precipitation fields in a way that is consistent with the linear interpolation as applied in FLEXPART. This method is designed to conserve the original amount of precipitation. However, it leads to undesired temporal smoothing of the precipitation time series, which even produces nonzero precipitation in dry intervals bordering a precipitation period. A new interpolation algorithm (currently in one dimension) was developed which introduces additional supporting grid points in each time interval (see the companion contribution by Hittmeir, Philipp and Seibert). The quality of the algorithm is first being tested by comparing 1-hourly values derived with the new algorithm from 3- (or 6-)hourly precipitation with the 1-hourly ECMWF model output. As ECMWF provides both large-scale and convective precipitation data, the evaluation will be carried out separately for each, as well as for different seasons and climatic zones.
Energy Technology Data Exchange (ETDEWEB)
Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael
2000-04-11
search algorithm to find and delete redundant knots based on the estimation of a weight associated with each basis vector. The overall algorithm iterates by inserting and deleting knots, and ends up with far fewer knots than pixels to represent the object, while keeping the estimation error within a certain tolerance. Thus, an efficient reconstruction can be obtained which significantly reduces the complexity of the problem. In this thesis, the adaptive B-spline method is applied to a cross-well tomography problem arising in the detection of underground pollution plumes. Cross-well tomography is applied by placing arrays of electromagnetic transmitters and receivers along the boundaries of the region of interest. Using an inverse scattering approach, a linear inverse model is set up, and the adaptive B-spline method described above is applied. The simulation results show that the B-spline method reduces the dimensional complexity by 90% compared with a pixel-based method, and decreases the time complexity by 50% without significantly degrading the estimation.
Directory of Open Access Journals (Sweden)
J. R. Santillan
2016-09-01
Full Text Available In this paper, we investigated how survey configuration and the type of interpolation method affect the accuracy of river flow simulations that utilize a LiDAR DTM integrated with an interpolated river bed as its main source of topographic information. Aside from determining the accuracy of the individually generated river bed topographies, we also assessed the overall accuracy of the river flow simulations in terms of maximum flood depth and extent. Four survey configurations consisting of river bed elevation data points arranged as cross-sections (XS), zig-zag (ZZ), river banks-centerline (RBCL), and river banks-centerline-zig-zag (RBCLZZ), and two interpolation methods (Inverse Distance Weighting and Ordinary Kriging) were considered. Major results show that the choice of survey configuration, rather than the interpolation method, has a significant effect on the accuracy of interpolated river bed surfaces, and subsequently on the accuracy of river flow simulations. The RMSEs of the interpolated surfaces and the model results vary from one configuration to another, and depend on how evenly each configuration collects river bed elevation data points. The large RMSEs for the RBCL configuration and the low RMSEs for the XS configuration confirm that as the data points become more evenly spaced and cover more portions of the river, the resulting interpolated surface, and the river flow simulation in which it is used, also become more accurate. The XS configuration with Ordinary Kriging (OK) as the interpolation method provided the best river bed interpolation and river flow simulation results. The RBCL configuration, regardless of the interpolation algorithm used, resulted in the least accurate river bed surfaces and simulation results. Based on the accuracy analysis, collecting river bed data points in the XS configuration and applying the OK method to interpolate the river bed topography are the best ways to produce satisfactory river flow simulation outputs.
Directory of Open Access Journals (Sweden)
Shaofeng Wang
2017-05-01
Full Text Available Mineral reserve estimation and mining design depend on precise modeling of the mineralized deposit. A multi-step interpolation algorithm, including a 1D biharmonic spline estimator for interpolating floor altitudes; 2D nearest neighbor, linear, natural neighbor, cubic, biharmonic spline, inverse distance weighted, simple kriging, and ordinary kriging interpolations for the grade distribution on the two vertical sections at the roadways; and 3D linear interpolation for the grade distribution between sections, was proposed to build a 3D grade distribution model of the mineralized seam in a longwall mining panel with a U-shaped layout having two roadways at both sides. Compared to field data from exploratory boreholes, the multi-step interpolation using the natural neighbor method shows optimal stability and the smallest difference between interpolated and field data. Using this method, 97,576 m3 of bauxite, in which the mass fraction of Al2O3 (Wa) and the mass ratio of Al2O3 to SiO2 (Wa/s) are 61.68% and 27.72, respectively, was delimited from the 189,260 m3 mineralized deposit in the 1102 longwall mining panel in the Wachangping mine, Southwest China. The mean absolute errors, root mean squared errors, and relative standard deviations of the errors between interpolated data and exploratory grade data at six boreholes are 2.544, 2.674, and 32.37% for Wa; and 1.761, 1.974, and 67.37% for Wa/s, respectively. The proposed method can be used for characterizing the grade distribution in a mineralized seam between two roadways at both sides of a longwall mining panel.
Straight-sided Spline Optimization
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2011-01-01
Spline connection of shaft and hub is commonly applied when large torque capacity is needed together with the possibility of disassembly. The designs of these splines are generally controlled by different standards. In view of the common use of splines, it seems that few papers deal with splines ...
Interpolation effects in tabulated interatomic potentials
Wen, M.; Whalen, S. M.; Elliott, R. S.; Tadmor, E. B.
2015-10-01
Empirical interatomic potentials are widely used in atomistic simulations due to their ability to compute the total energy and interatomic forces quickly relative to more accurate quantum calculations. The functional forms in these potentials are sometimes stored in a tabulated format, as a collection of data points (argument-value pairs), and a suitable interpolation (often spline-based) is used to obtain the function value at an arbitrary point. We explore the effect of these interpolations on the potential predictions by calculating the quasi-harmonic thermal expansion and finite-temperature elastic constant of a one-dimensional chain compared with molecular dynamics simulations. Our results show that some predictions are affected by the choice of interpolation regardless of the number of tabulated data points. Our results clearly indicate that the interpolation must be considered part of the potential definition, especially for lattice dynamics properties that depend on higher-order derivatives of the potential. This is facilitated by the Knowledgebase of Interatomic Models (KIM) project, in which both the tabulated data (‘parameterized model’) and the code that interpolates them to compute energy and forces (‘model driver’) are stored and given unique citeable identifiers. We have developed cubic and quintic spline model drivers for pair functional type models (EAM, FS, EMT) and uploaded them to the OpenKIM repository (https://openkim.org).
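The tabulation-plus-spline workflow described above can be sketched directly: tabulate a pair potential, build a cubic spline, and obtain the force from the spline's analytic derivative. This is an illustrative sketch (the Lennard-Jones form, grid, and evaluation radius are assumptions; the paper's quintic-spline model drivers are not reproduced):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Tabulate a Lennard-Jones pair potential at discrete points, then recover
# energy and force (negative derivative) from a cubic spline, the way a
# tabulated potential file is used in an MD code.
def lj(r, eps=1.0, sigma=1.0):
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

r_tab = np.linspace(0.9, 3.0, 200)              # tabulated data points
spline = CubicSpline(r_tab, lj(r_tab))
force = -float(spline.derivative()(1.3))        # spline-based force at r = 1.3
force_ref = -(lj(1.3 + 1e-6) - lj(1.3 - 1e-6)) / 2e-6
```

Because a cubic spline has only a piecewise-linear second derivative, properties depending on higher derivatives (e.g. thermal expansion) are exactly where the choice of interpolant shows up, which is the paper's point.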
Directory of Open Access Journals (Sweden)
M. Angulo-Martínez
2009-10-01
Full Text Available Rainfall erosivity is a major causal factor of soil erosion, and it is included in many prediction models. Maps of rainfall erosivity indices are required for assessing soil erosion at the regional scale. In this study a comparison is made between several techniques for mapping two rainfall erosivity indices: (i) the RUSLE R factor and (ii) the average EI30 index of the erosive events, over the Ebro basin (NE Spain). A spatially dense precipitation database with a high temporal resolution (15 min) was used. Global, local, and geostatistical interpolation techniques, as well as mixed methods, were employed to produce maps of the rainfall erosivity indices. To determine the reliability of the maps, several goodness-of-fit and error statistics were computed using a cross-validation scheme, together with the uncertainty of the predictions, modeled by Gaussian geostatistical simulation. All methods were able to capture the general spatial pattern of both erosivity indices. The semivariogram analysis revealed that spatial autocorrelation was only effective at distances of ~15 km around the observatories. Therefore, local interpolation techniques tended to perform better overall according to the validation statistics. All models showed high uncertainty, caused by the high variability of rainfall erosivity indices in both time and space, which stresses the importance of having long data series with a dense spatial coverage.
Designing interactively with elastic splines
DEFF Research Database (Denmark)
Brander, David; Bærentzen, Jakob Andreas; Fisker, Ann-Sofie
2018-01-01
We present an algorithm for designing interactively with C1 elastic splines. The idea is to design the elastic spline using a C1 cubic polynomial spline in which each polynomial segment is so close to satisfying the Euler-Lagrange equation for elastic curves that the visual difference becomes negligible. Using a database of cubic Bézier curves we are able to interactively modify the cubic spline such that it remains visually close to an elastic spline.
Flexible regression models with cubic splines.
Durrleman, S; Simon, R
1989-05-01
We describe the use of cubic splines in regression models to represent the relationship between the response variable and a vector of covariates. This simple method can help prevent the problems that result from inappropriate linearity assumptions. We compare restricted cubic spline regression to non-parametric procedures for characterizing the relationship between age and survival in the Stanford Heart Transplant data. We also provide an illustrative example in cancer therapeutics.
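A cubic regression spline of the kind discussed above can be fitted with an ordinary least-squares solver once the basis is built. The sketch below uses the plain truncated-power basis; note that Durrleman and Simon advocate *restricted* cubic splines, which additionally constrain the tails to be linear, and that refinement (as well as the knot locations and test data) is omitted here as an assumption of the example.

```python
import numpy as np

def cubic_spline_basis(x, knots):
    """Truncated-power basis for an (unrestricted) cubic regression spline:
    1, x, x^2, x^3, and (x - t)_+^3 for each knot t."""
    cols = [np.ones_like(x), x, x ** 2, x ** 3]
    cols += [np.clip(x - t, 0, None) ** 3 for t in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 10, 200))
# a response with a change of slope at x = 5, plus noise (synthetic data)
y = np.where(x < 5, 0.2 * x, 1.0 + 0.5 * (x - 5)) + 0.05 * rng.normal(size=x.size)
B = cubic_spline_basis(x, knots=[2.5, 5.0, 7.5])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fit = B @ coef
```

The fitted curve bends at the knots instead of forcing a single global line, which is exactly the protection against inappropriate linearity assumptions the abstract describes.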
P-Splines Using Derivative Information
Calderon, Christopher P.
2010-01-01
Time series associated with single-molecule experiments and/or simulations contain a wealth of multiscale information about complex biomolecular systems. We demonstrate how a collection of Penalized-splines (P-splines) can be useful in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between curves associated with different SDEs partially reflects noise induced by motion evolving on a slower time scale. P-splines assist in "semiparametrically" estimating nonlinear SDEs in situations where a time-dependent external force is applied to a single-molecule system. The P-splines introduced simultaneously use function and derivative scatterplot information to refine curve estimates. We refer to the approach as the PuDI (P-splines using Derivative Information) method. It is shown how generalized least squares ideas fit seamlessly into the PuDI method. Applications demonstrating how utilizing uncertainty information/approximations along with generalized least squares techniques improve PuDI fits are presented. Although the primary application here is in estimating nonlinear SDEs, the PuDI method is applicable to situations where both unbiased function and derivative estimates are available.
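A basic P-spline fit (without the derivative information that distinguishes PuDI) reduces to penalized least squares on a B-spline basis. This is a generic sketch, not the PuDI method itself; the basis size, penalty order, smoothing parameter, and test signal are all assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_basis=20, degree=3, lam=0.1):
    """Penalized B-spline (P-spline) fit: least squares on a rich B-spline
    basis plus a second-order difference penalty on the coefficients."""
    # open (clamped) uniform knot vector over the data range
    inner = np.linspace(x.min(), x.max(), n_basis - degree + 1)
    t = np.r_[[x.min()] * degree, inner, [x.max()] * degree]
    B = np.column_stack([
        BSpline(t, np.eye(n_basis)[j], degree)(x) for j in range(n_basis)
    ])
    D = np.diff(np.eye(n_basis), n=2, axis=0)   # second-order difference matrix
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ coef

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 120)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)
smooth = pspline_fit(x, y)
```

In the PuDI setting, the least-squares system would be augmented with rows matching the basis derivatives to observed derivative estimates, with generalized least-squares weights.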
Directory of Open Access Journals (Sweden)
S. Kaewumpai
2015-07-01
Full Text Available A meshless method choosing the Heaviside step function as the test function for solving simply supported thin plates under various loads is presented in this paper. The shape functions, using regular and irregular nodal distributions as well as different orders of the polynomial basis, are constructed by moving kriging interpolation. Alternatively, two-field-variable local weak forms are used to decompose the governing biharmonic equation into a pair of Poisson equations, so that boundary conditions can be imposed straightforwardly. Selected numerical examples are considered to examine the applicability, ease of use, and accuracy of the proposed method. Compared with an exact solution, this robust method gives significantly accurate numerical results, as quantified by the maximum relative error and the root-mean-square relative error.
Directory of Open Access Journals (Sweden)
Felix Fritzen
2018-02-01
Full Text Available A novel algorithmic discussion of the methodological and numerical differences of competing parametric model reduction techniques for nonlinear problems is presented. First, the Galerkin reduced basis (RB) formulation is presented, which fails to provide significant gains in computational efficiency for nonlinear problems. Renowned methods for reducing the computing time of nonlinear reduced order models are the Hyper-Reduction and the (Discrete) Empirical Interpolation Method (EIM, DEIM). An algorithmic description and a methodological comparison of both methods are provided. The accuracy of the predictions of the hyper-reduced model and the (D)EIM in comparison to the Galerkin RB is investigated. All three approaches are applied to a simple uncertainty quantification of a planar nonlinear thermal conduction problem. The results are compared to computationally intense finite element simulations.
Simple monotonic interpolation scheme
International Nuclear Information System (INIS)
Greene, N.M.
1980-01-01
A procedure for presenting tabular data, such as are contained in the ENDF/B files, that is simpler, more general, and potentially much more compact than the present schemes used with ENDF/B is presented. The method has been successfully used for Bondarenko interpolation in a module of the AMPX system. 1 figure, 1 table
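The value of a monotone interpolation scheme for tabular data is easy to demonstrate. The sketch below uses PCHIP as a stand-in for the report's scheme (the report's Bondarenko/ENDF/B method itself is not reproduced, and the sample table is invented): a monotone interpolant never overshoots, whereas a standard cubic spline through the same points can.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

# Monotone tabular data, e.g. a cross section vs. energy, with a sharp rise.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.1, 0.9, 1.0, 1.0])
xx = np.linspace(1, 5, 401)
mono = PchipInterpolator(x, y)(xx)              # monotone, no over/undershoot
cubic = CubicSpline(x, y)(xx)                   # smooth, but oscillates here
```

For data banks like ENDF/B, preserving monotonicity (and positivity) between tabulated points matters more than the extra smoothness of an unconstrained spline.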
Prasetyo, S. Y. J.; Hartomo, K. D.
2018-01-01
The Spatial Plan of the Province of Central Java 2009-2029 identifies that most regencies or cities in Central Java Province are very vulnerable to landslide disasters. This is supported by data from the Indonesian Disaster Risk Index (in Indonesian, Indeks Risiko Bencana Indonesia) 2013, which suggest that some areas in Central Java Province exhibit a high risk of natural disasters. This research aims to develop an application architecture and analysis methodology in GIS to predict and map rainfall distribution. We propose the GIS application architecture “Multiplatform Architectural Spatiotemporal” and the data analysis methods “Triple Exponential Smoothing” (TES) and “Spatial Interpolation” as our significant scientific contribution. This research consists of two parts, namely attribute data prediction using the TES method and spatial data prediction using the Inverse Distance Weighting (IDW) method. We conducted our research in 19 subdistricts of the Boyolali Regency, Central Java Province, Indonesia. Our main research data are the biweekly rainfall data for 2000-2016 from the Climatology, Meteorology, and Geophysics Agency (in Indonesian, Badan Meteorologi, Klimatologi, dan Geofisika) of Central Java Province and the Laboratory of Plant Disease Observations Region V Surakarta, Central Java. The application architecture and analytical methodology of “Multiplatform Architectural Spatiotemporal” and the data analysis methods of “Triple Exponential Smoothing” and “Spatial Interpolation” can be developed into a GIS application framework for rainfall distribution in various applied fields. The comparison between the TES and IDW methods shows that, relative to time series prediction, spatial interpolation yields values closer to the actual data, because the computed values come from the rainfall data of the nearest locations (the neighbours of the sample values). However, the IDW's main weakness is that some
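The attribute-prediction half of the workflow, triple exponential smoothing, can be sketched in its standard additive (Holt-Winters) form. This is a generic sketch, not the authors' implementation; the seasonal period, smoothing constants, and synthetic rainfall-like series are assumptions.

```python
import numpy as np

def triple_exp_smoothing(y, period, alpha=0.3, beta=0.05, gamma=0.2, horizon=12):
    """Additive Holt-Winters (triple exponential smoothing) forecast.
    Level, trend, and seasonal indices are initialized from the first two seasons."""
    level = np.mean(y[:period])
    trend = (np.mean(y[period:2 * period]) - level) / period
    season = list(y[:period] - level)
    for i in range(period, len(y)):
        s_old = season[i - period]
        level_old = level
        level = alpha * (y[i] - s_old) + (1 - alpha) * (level + trend)
        trend = beta * (level - level_old) + (1 - beta) * trend
        season.append(gamma * (y[i] - level) + (1 - gamma) * s_old)
    # forecast h steps ahead: extrapolate trend, reuse last seasonal cycle
    return np.array([level + (h + 1) * trend + season[len(y) + h - period]
                     for h in range(horizon)])

t = np.arange(120)                               # ten seasons of synthetic data
y = 10 + 0.1 * t + 3 * np.sin(2 * np.pi * t / 12)
forecast = triple_exp_smoothing(y, period=12, horizon=12)
```

The forecasts produced per location would then be spread over the map with a spatial interpolator such as IDW, as the abstract describes.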
Laksâ, Arne
2015-11-01
B-splines are the de facto industrial standard for surface modelling in computer-aided design. The technique is comparable to bending flexible rods of wood or metal: a flexible rod minimizes the bending energy, and a third-degree polynomial spline curve minimizes the second derivatives. The B-spline form is a convenient way of representing polynomial splines; it connects polynomial splines to corner-cutting techniques, which induces many nice and useful properties. However, the B-spline representation can be expanded to something we can call general B-splines, i.e. both polynomial and non-polynomial splines. We will show how this expansion can be done, the properties it induces, and examples of non-polynomial B-splines.
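The corner-cutting connection mentioned above is exactly de Boor's algorithm: a B-spline point is computed by repeatedly cutting corners of the control polygon. A minimal sketch (the clamped cubic example data are invented):

```python
import numpy as np

def de_boor(t, c, k, x):
    """Evaluate a degree-k B-spline with knot vector t and control points c
    at x, using de Boor's corner-cutting recursion."""
    # locate the knot span containing x (clamped to the valid range)
    i = np.searchsorted(t, x, side='right') - 1
    i = min(max(i, k), len(t) - k - 2)
    d = [c[j] for j in range(i - k, i + 1)]
    for r in range(1, k + 1):
        for j in range(k, r - 1, -1):
            jj = i - k + j
            alpha = (x - t[jj]) / (t[jj + k - r + 1] - t[jj])
            d[j] = (1 - alpha) * d[j - 1] + alpha * d[j]   # cut a corner
    return d[k]

# clamped cubic B-spline: the curve starts and ends at the end control points
k = 3
c = np.array([0.0, 2.0, -1.0, 3.0, 1.0])
t = np.r_[[0.0] * (k + 1), [0.5], [1.0] * (k + 1)]  # len(t) = len(c) + k + 1
ends = (de_boor(t, c, k, 0.0), de_boor(t, c, k, 1.0))  # (0.0, 1.0)
```

Generalizing the blending step (the `alpha` weights) beyond polynomials is what leads to the non-polynomial "general B-splines" the abstract refers to.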
Jin, Tao; Chen, Yiyang; Flesch, Rodolfo C. C.
2017-11-01
Harmonics pose a great threat to the safe and economical operation of power grids. Therefore, it is critical to detect harmonic parameters accurately in order to design harmonic compensation equipment. The fast Fourier transform (FFT) is widely used for electrical power harmonics analysis. However, the picket-fence (barrier) effect produced by the algorithm itself and the spectrum leakage caused by asynchronous sampling often degrade the accuracy of the harmonic analysis. This paper examines a new approach to harmonic analysis based on deriving correction formulas for frequency, phase angle, and amplitude, utilizing the Nuttall-Kaiser window double-spectrum-line interpolation method, which overcomes the shortcomings of traditional FFT harmonic calculations. The proposed approach is verified numerically and experimentally to be accurate and reliable.
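The effect being corrected is easy to reproduce: with asynchronous sampling the true frequency falls between FFT bins, and interpolating around the peak bin recovers it. The sketch below uses a Hann window with parabolic interpolation of the log-magnitude as a simple stand-in for the Nuttall-Kaiser double-spectrum-line formulas (window choice, signal, and sampling setup are assumptions):

```python
import numpy as np

fs, N = 1000.0, 1024
f0 = 123.4                                      # deliberately not on an FFT bin
n = np.arange(N)
x = np.sin(2 * np.pi * f0 * n / fs)
X = np.abs(np.fft.rfft(x * np.hanning(N)))
k = int(np.argmax(X))                           # peak bin (picket-fence limited)
a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
delta = 0.5 * (a - c) / (a - 2 * b + c)         # fractional-bin correction
f_est = (k + delta) * fs / N                    # close to 123.4 Hz
```

Without the `delta` correction, the estimate would be quantized to the bin spacing fs/N ≈ 0.98 Hz; the interpolation removes most of that error.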
Higher-order numerical solutions using cubic splines
Rubin, S. G.; Khosla, P. K.
1976-01-01
A cubic spline collocation procedure was developed for the numerical solution of partial differential equations. This spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower-derivative terms. The final result is a numerical procedure having overall third-order accuracy on a nonuniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, are presented for several model problems.
International Nuclear Information System (INIS)
M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.
2014-01-01
The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) was found to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having continuous first and second derivatives. The analytical differentiation of the spline regression permits the instantaneous drying rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models; the fits were evaluated using the coefficient of determination (R²) and the root mean square error (RMSE). The results showed that the Two-Term model describes the drying behavior best. Besides that, the drying rate smoothed using the CS proves to be an effective estimator for moisture-time curves, as well as for missing moisture content data of the seaweed Kappaphycus striatum variety Durian in the solar dryer under the conditions tested.
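The spline-regression idea described above, fit a smoothing spline to the moisture-time data and differentiate it analytically to get the instantaneous drying rate, can be sketched as follows. The data below are synthetic (an assumed exponential drying curve plus noise), not the Semporna measurements, and the smoothing-parameter choice is also an assumption.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.linspace(0, 96, 49)                      # drying time, hours (assumed)
m_true = 8.2 + (93.4 - 8.2) * np.exp(-t / 24)   # moisture content, % (synthetic)
rng = np.random.default_rng(5)
m_obs = m_true + rng.normal(0, 1.0, t.size)     # noisy measurements
cs = UnivariateSpline(t, m_obs, k=3, s=t.size)  # smoothing cubic spline fit
rate = -cs.derivative()(t)                      # instantaneous drying rate, %/h
```

Differentiating the fitted spline instead of the raw data is what avoids the noise amplification that plagues finite-difference drying-rate curves.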
Energy Technology Data Exchange (ETDEWEB)
M Ali, M. K., E-mail: majidkhankhan@ymail.com, E-mail: eutoco@gmail.com; Ruslan, M. H., E-mail: majidkhankhan@ymail.com, E-mail: eutoco@gmail.com [Solar Energy Research Institute (SERI), Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia); Muthuvalu, M. S., E-mail: sudaram-@yahoo.com, E-mail: jumat@ums.edu.my; Wong, J., E-mail: sudaram-@yahoo.com, E-mail: jumat@ums.edu.my [Unit Penyelidikan Rumpai Laut (UPRL), Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia); Sulaiman, J., E-mail: ysuhaimi@ums.edu.my, E-mail: hafidzruslan@eng.ukm.my; Yasir, S. Md., E-mail: ysuhaimi@ums.edu.my, E-mail: hafidzruslan@eng.ukm.my [Program Matematik dengan Ekonomi, Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia)
2014-06-19
The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under the meteorological conditions of Malaysia. Drying of the seaweed samples in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate require more smoothing than the moisture-content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) was found to be effective for moisture-time curves. The idea of this method is to approximate the data by a CS regression having continuous first and second derivatives. Analytical differentiation of the spline regression permits the determination of the instantaneous rate; the method of minimization of the functional of average risk was used successfully to solve the problem, so that the instantaneous rate is obtained directly from the experimental data. The drying kinetics were fitted with six published exponential thin-layer drying models, evaluated using the coefficient of determination (R²) and the root mean square error (RMSE). The Two-Term model was found to describe the drying behaviour best. In addition, the drying rate smoothed using the CS proved to be a good estimator for moisture-time curves as well as for the missing moisture-content data of the seaweed Kappaphycus striatum variety Durian in the solar dryer under the conditions tested.
M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.
2014-06-01
The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under the meteorological conditions of Malaysia. Drying of the seaweed samples in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate require more smoothing than the moisture-content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) was found to be effective for moisture-time curves. The idea of this method is to approximate the data by a CS regression having continuous first and second derivatives. Analytical differentiation of the spline regression permits the determination of the instantaneous rate; the method of minimization of the functional of average risk was used successfully to solve the problem, so that the instantaneous rate is obtained directly from the experimental data. The drying kinetics were fitted with six published exponential thin-layer drying models, evaluated using the coefficient of determination (R²) and the root mean square error (RMSE). The Two-Term model was found to describe the drying behaviour best. In addition, the drying rate smoothed using the CS proved to be a good estimator for moisture-time curves as well as for the missing moisture-content data of the seaweed Kappaphycus striatum variety Durian in the solar dryer under the conditions tested.
Novel method of interpolation and extrapolation of functions by a linear initial value problem
CSIR Research Space (South Africa)
Shatalov, M
2008-09-01
Full Text Available A novel method of function approximation using an initial value, linear, ordinary differential equation (ODE) is presented. The main advantage of this method is to obtain the approximation expressions in a closed form. This technique can be taught...
International Nuclear Information System (INIS)
Cheng, C.Z.
1988-12-01
A nonvariational ideal MHD stability code (NOVA) has been developed. In a general flux coordinate (ψ, θ, ζ) system with an arbitrary Jacobian, the NOVA code employs Fourier expansions in the generalized poloidal angle θ and generalized toroidal angle ζ directions, and cubic B-spline finite elements in the radial ψ direction. Extensive comparisons with variational ideal MHD codes show that the NOVA code converges faster and gives more accurate results. An extended version of NOVA was developed to integrate non-Hermitian eigenmode equations due to energetic particles. The set of non-Hermitian integro-differential eigenmode equations is solved numerically by the NOVA-K code. We have studied the stabilization of ideal MHD internal kink modes by hot particle pressure and the excitation of "fishbone" internal kink modes through resonance with the energetic-particle magnetic drift frequency. Comparisons with analytical solutions show that the values of the critical β_h from analytical theory can be an order of magnitude different from those computed by the NOVA-K code. 24 refs., 11 figs., 1 tab
Linear Invariant Tensor Interpolation Applied to Cardiac Diffusion Tensor MRI
Gahm, Jin Kyu; Wisniewski, Nicholas; Kindlmann, Gordon; Kung, Geoffrey L.; Klug, William S.; Garfinkel, Alan; Ennis, Daniel B.
2015-01-01
Purpose Various methods exist for interpolating diffusion tensor fields, but none of them linearly interpolate tensor shape attributes. Linear interpolation is expected not to introduce spurious changes in tensor shape. Methods Herein we define a new linear invariant (LI) tensor interpolation method that linearly interpolates components of tensor shape (tensor invariants) and recapitulates the interpolated tensor from the linearly interpolated tensor invariants and the eigenvectors of a linearly interpolated tensor. The LI tensor interpolation method is compared to the Euclidean (EU), affine-invariant Riemannian (AI), log-Euclidean (LE) and geodesic-loxodrome (GL) interpolation methods using both a synthetic tensor field and three experimentally measured cardiac DT-MRI datasets. Results EU, AI, and LE introduce significant microstructural bias, which can be avoided through the use of GL or LI. Conclusion GL introduces the least microstructural bias, but LI tensor interpolation performs very similarly and at substantially reduced computational cost. PMID:23286085
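The "microstructural bias" this abstract attributes to Euclidean averaging can be seen in a toy comparison of Euclidean versus log-Euclidean interpolation of two diffusion tensors. This is an illustrative sketch of the bias only, not the paper's LI method; the tensors are invented:

```python
import numpy as np
from scipy.linalg import expm, logm

# Two anisotropic SPD tensors with the same shape (equal determinant),
# differing only by a 90-degree rotation of the principal axes.
A = np.diag([1.0, 0.1])
B = np.diag([0.1, 1.0])

mid_eu = 0.5 * (A + B)                        # Euclidean interpolation
mid_le = expm(0.5 * (logm(A) + logm(B)))      # log-Euclidean interpolation

det_eu = np.linalg.det(mid_eu)   # inflated: determinant "swelling"
det_le = np.linalg.det(mid_le)   # preserved geometric mean of determinants
```

The Euclidean midpoint is isotropic with an inflated determinant, while the log-Euclidean midpoint keeps the determinant at the geometric mean of the endpoints, which is the kind of shape distortion the LI method is designed to avoid by interpolating the invariants themselves.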
C2-rational cubic spline involving tension parameters
Indian Academy of Sciences (India)
the impact of variation of the parameters r_i and t_i on the shape of the interpolant. Some remarks are given in § 6. 2. The rational spline interpolant. Let P = {x_i}, i = 1, …, n, where a = x_1 < x_2 < ⋯ < x_n = b, be a partition of the interval [a, b], and let f_i, i = 1, …, n, be the function values at the data points. We set h_i = x_{i+1} − x_i, Δ_i = …
BIMOND3, Monotone Bivariate Interpolation
International Nuclear Information System (INIS)
Fritsch, F.N.; Carlson, R.E.
2001-01-01
1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which reproduces the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data
On Characterization of Quadratic Splines
DEFF Research Database (Denmark)
Chen, B. T.; Madsen, Kaj; Zhang, Shuzhong
2005-01-01
A quadratic spline is a differentiable piecewise quadratic function. Many problems in the numerical analysis and optimization literature can be reformulated as unconstrained minimizations of quadratic splines. However, only special cases of quadratic splines are studied in the existing literature, and algorithms are developed on a case-by-case basis; an analytical representation of a general, or even a convex, quadratic spline has been lacking. The current paper fills this gap by providing an analytical representation of a general quadratic spline. Furthermore, for convex quadratic splines, the paper establishes the connection between the convexity of a quadratic spline function and the monotonicity of the corresponding LCP problem. It is shown that, although both conditions lead to easy solvability of the problem, they are different in general.
Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael
2017-09-01
Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals (the subtraction of the spline from the original time series) are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
Directory of Open Access Journals (Sweden)
S. Wüst
2017-09-01
Full Text Available Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals (the subtraction of the spline from the original time series) are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
Scripted Bodies and Spline Driven Animation
DEFF Research Database (Denmark)
Erleben, Kenny; Henriksen, Knud
2002-01-01
In this paper we take a close look at the details and technicalities of applying spline driven animation to scripted bodies in the context of dynamic simulation. The main contributions presented in this paper are methods for computing velocities and accelerations in the time domain of the spline.
Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)
2007-01-01
A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where observable data are expressible as low-order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials; and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate model
Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)
2003-01-01
A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.
Geometric optical transfer function and its computation method
International Nuclear Information System (INIS)
Wang Qi
1992-01-01
The geometric optical transfer function formula is derived after clarifying some points that are easily overlooked, and a computation method is given that uses the zeroth-order Bessel function, numerical integration and spline interpolation. The method has the advantage of ensuring accuracy while saving computation
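The three numerical ingredients this abstract names (the zeroth-order Bessel function, numerical integration, and spline interpolation) combine naturally in a J0-kernel integral over spline-interpolated samples. The radial profile below is invented and this is not the paper's OTF formula, only a sketch of the machinery:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import quad
from scipy.special import j0

r = np.linspace(0.0, 1.0, 11)
samples = np.exp(-r**2)            # hypothetical sampled radial profile
profile = CubicSpline(r, samples)  # spline interpolation of the samples

def j0_transform(rho):
    # numerical integral of profile(s) * J0(2*pi*rho*s) * s over [0, 1]
    val, _ = quad(lambda s: float(profile(s)) * j0(2*np.pi*rho*s) * s, 0.0, 1.0)
    return val

dc = j0_transform(0.0)   # J0(0) = 1, so this reduces to the profile's moment
```

For this invented profile the rho = 0 value has the closed form (1 - e⁻¹)/2, which makes the spline-plus-quadrature pipeline easy to check.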
Li, Lixin; Tian, Jie; Zhang, Xingyou; Holt, James B; Piltner, Reinhard
2012-01-01
This paper investigates spatiotemporal interpolation methods for the application of air pollution assessment. The air pollutant of interest in this paper is fine particulate matter, PM2.5. The choice of the time scale is investigated when applying the shape function-based method. It is found that the measurement scale of the time dimension has an impact on the quality of interpolation results. Based upon the result of 10-fold cross validation, the most effective time scale out of four experimental ones was selected for the PM2.5 interpolation. The paper also estimates the population exposure to the ambient air pollution of PM2.5 at the county level in the contiguous U.S. in 2009. The interpolated county-level PM2.5 has been linked to 2009 population data, and the population with a risky PM2.5 exposure has been estimated. A risky PM2.5 exposure means a PM2.5 concentration exceeding the National Ambient Air Quality Standards. The geographic distribution of the counties with a risky PM2.5 exposure is visualized. This work is essential to understanding the associations between ambient air pollution exposure and population health outcomes.
A smoothing algorithm using cubic spline functions
Smith, R. E., Jr.; Price, J. M.; Howser, L. M.
1974-01-01
Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
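The least-squares cubic-spline fit this abstract describes can be sketched with user-chosen interior knots. This is an illustrative sketch with invented data; the interactive manipulation of junction values mentioned in the abstract is replaced here by fixed knot locations:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

x = np.linspace(0.0, 10.0, 200)
rng = np.random.default_rng(1)
y = np.sin(x) + rng.normal(0.0, 0.1, x.size)   # noisy data to be smoothed

knots = np.linspace(1.0, 9.0, 7)               # interior "junction" locations
fit = LSQUnivariateSpline(x, y, knots, k=3)    # least-squares cubic spline

y_smooth = fit(x)
dy = fit.derivative(1)(x)    # first derivative, continuous by construction
d2y = fit.derivative(2)(x)   # second derivative, also continuous
```

The continuity of the first and second derivatives is exactly the advantage the abstract claims for the spline fit over simpler local smoothers.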
About a family of C2 splines with one free generating function
Directory of Open Access Journals (Sweden)
Igor Verlan
2005-01-01
Full Text Available The problem of interpolation of a discrete set of data on the interval [a, b] representing a function f is investigated. A family of C² splines with one free generating function is introduced in order to solve this problem. Cubic C² splines belong to this family. The conditions which the generating function must satisfy in order to obtain explicit interpolants are presented, and examples of generating functions are given. Mathematics Subject Classification 2000: 65D05, 65D07, 41A05, 41A15.
Calculation of electromagnetic parameter based on interpolation algorithm
International Nuclear Information System (INIS)
Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan
2015-01-01
Wave-absorbing material is an important functional material for electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing material, this paper studied two different interpolation methods, Lagrange interpolation and Hermite interpolation, based on the electromagnetic parameters of paraffin-based mixtures of spherical and flaky carbonyl iron. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated with the electromagnetic parameters obtained by interpolation is on the whole consistent with that obtained through experiment. - Highlights: • We use an interpolation algorithm to calculate EM parameters from limited samples. • Interpolation can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with RL from experiment
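The Lagrange-versus-Hermite comparison in this abstract can be sketched on an invented smooth "parameter versus frequency" curve (not the paper's measured permittivity data); a global Lagrange polynomial on equispaced points is prone to Runge oscillation, while a piecewise cubic Hermite interpolant that also uses derivative data stays local:

```python
import numpy as np
from scipy.interpolate import lagrange, CubicHermiteSpline

freq = np.linspace(2.0, 18.0, 9)   # sparse frequency grid, GHz (hypothetical)

def eps(f):                        # invented resonance-like parameter curve
    return 3.0 + 1.0 / (1.0 + 0.25 * (f - 10.0)**2)

def deps(f):                       # its exact derivative, for the Hermite data
    return -0.5 * (f - 10.0) / (1.0 + 0.25 * (f - 10.0)**2)**2

poly = lagrange(freq, eps(freq))                           # global polynomial
hermite = CubicHermiteSpline(freq, eps(freq), deps(freq))  # piecewise cubic

fine = np.linspace(2.0, 18.0, 161)
err_lagrange = np.max(np.abs(poly(fine) - eps(fine)))
err_hermite = np.max(np.abs(hermite(fine) - eps(fine)))
```

On this kind of peaked curve the piecewise Hermite error stays small between samples, consistent with the abstract's finding that Hermite interpolation was the more accurate of the two.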
International Nuclear Information System (INIS)
Zhang Yuexia; Liu Qiang; Shi Tingyun
2012-01-01
An accurate one-centre method is here applied to the calculation of the equilibrium distances and the energies of the hydrogen molecular ion in magnetic fields ranging from 10⁹ G to 4.414 × 10¹³ G. Both the radial and angular wavefunctions were expanded in terms of optimized B-splines. The slow-convergence problem of the general one-centre method and the singularities at the nuclear positions of H₂⁺ were handled well, improving the accuracy of the one-centre method. We compared our results with those generated by high-precision methods from published studies. Equilibrium distances of the 1σg,u, 1πg,u, 1δg,u and 2σg states of H₂⁺ in strong magnetic fields were found to be accurate to three to four significant digits at least up to 2.35 × 10¹² G, even for the antibonding states 1σu, 1πg and 1δu, whose equilibrium distances R_eq are very large. (paper)
International Nuclear Information System (INIS)
Li-Min, Ma; Zong-Min, Wu
2010-01-01
In this paper, we use a kind of univariate multiquadric quasi-interpolation to solve a parabolic equation with overspecified data, which has arisen in many physical phenomena. We obtain the numerical scheme by using the derivative of the quasi-interpolation to approximate the spatial derivative of the dependent variable and a simple forward difference to approximate the temporal derivative of the dependent variable. The advantage of the presented scheme is that the algorithm is very simple so it is very easy to implement. The results of the numerical experiment are presented and are compared with the exact solution to confirm the good accuracy of the presented scheme. (general)
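The two ingredients of this scheme can be sketched in a simplified form: the derivative of a multiquadric approximant for the spatial derivative, and a forward difference in time. For brevity the full multiquadric interpolant is used below in place of the paper's quasi-interpolation operator, and the stepped equation is a simple advection model rather than the overspecified parabolic problem; all values are illustrative:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 41)
h = x[1] - x[0]
c = 4.0 * h                               # multiquadric shape parameter
r = x[:, None] - x[None, :]
A = np.sqrt(r**2 + c**2)                  # MQ basis matrix phi(|x_i - x_j|)
Ax = r / np.sqrt(r**2 + c**2)             # analytic d/dx of the MQ basis

u = np.sin(2 * np.pi * x)                 # initial condition
lam = np.linalg.solve(A, u)               # MQ expansion coefficients
du = Ax @ lam                             # spatial derivative approximation

# one forward-difference (explicit Euler) step of u_t = -a * u_x
a, dt = 1.0, 1e-3
u_next = u - dt * a * du
```

As in the paper's scheme, the only linear algebra is in forming the spatial derivative; the time update itself is a single explicit forward difference, which is what makes the approach easy to implement.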
Directory of Open Access Journals (Sweden)
Chuanfa Chen
2015-03-01
Full Text Available Remote-sensing-derived elevation data sets often suffer from noise and outliers due to various reasons, such as the physical limitations of sensors, multiple reflectance, occlusions and low contrast of texture. Outliers generally have a seriously negative effect on DEM construction. Some interpolation methods like ordinary kriging (OK are capable of smoothing noise inherent in sample points, but are sensitive to outliers. In this paper, a robust algorithm of multiquadric method (MQ based on an Improved Huber loss function (MQ-IH has been developed to decrease the impact of outliers on DEM construction. Theoretically, the improved Huber loss function is null for outliers, quadratic for small errors, and linear for others. Simulated data sets drawn from a mathematical surface with different error distributions were employed to analyze the robustness of MQ-IH. Results indicate that MQ-IH obtains a good balance between efficiency and robustness. Namely, the performance of MQ-IH is comparative to those of the classical MQ and MQ based on the Classical Huber loss function (MQ-CH when sample points follow a normal distribution, and the former outperforms the latter two when sample points are subject to outliers. For example, for the Cauchy error distribution with the location parameter of 0 and scale parameter of 1, the root mean square errors (RMSEs of MQ-CH and the classical MQ are 0.3916 and 1.4591, respectively, whereas that of MQ-IH is 0.3698. The performance of MQ-IH is further evaluated by qualitative and quantitative analysis through a real-world example of DEM construction with the stereo-images-derived elevation points. Results demonstrate that compared with the classical interpolation methods, including natural neighbor (NN, OK and ANUDEM (a program that calculates regular grid digital elevation models (DEMs with sensible shape and drainage structure from arbitrarily large topographic data sets, and two versions of MQ, including the
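The robust-multiquadric idea above can be sketched as multiquadric basis fitting combined with a Huber-type loss via iteratively reweighted least squares (IRLS). This sketch uses the classical Huber weights, not the paper's improved variant (MQ-IH), and the surface, outlier model, and shape parameter are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 1.0, (120, 2))             # scattered sample points
z_true = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])
z = z_true.copy()
bad = rng.choice(120, 12, replace=False)
z[bad] += rng.normal(0.0, 2.0, 12)                # gross outliers

gx, gy = np.meshgrid(np.linspace(0.1, 0.9, 6), np.linspace(0.1, 0.9, 6))
centers = np.column_stack([gx.ravel(), gy.ravel()])   # 36 MQ centers
c = 0.2                                               # shape parameter

def mq(p, q):
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
    return np.sqrt(d**2 + c**2)

B = mq(pts, centers)                              # 120 x 36 design matrix

def fit(weights):
    W = weights[:, None]
    return np.linalg.solve(B.T @ (W * B), B.T @ (weights * z))

w = np.ones(120)
for _ in range(20):                               # IRLS with Huber weights
    res = z - B @ fit(w)
    scale = np.median(np.abs(res - np.median(res))) / 0.6745 + 1e-12
    k = 1.345 * scale                             # Huber tuning constant
    w = np.where(np.abs(res) <= k, 1.0, k / np.abs(res))

rmse_robust = np.sqrt(np.mean((B @ fit(w) - z_true)**2))
rmse_plain = np.sqrt(np.mean((B @ fit(np.ones(120)) - z_true)**2))
```

Downweighting large residuals reproduces the qualitative result reported in the abstract: the robust fit is close to the plain least-squares fit on clean data but degrades far less when outliers are present.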
Hilbertian kernels and spline functions
Atteia, M
1992-01-01
In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline functions theory. The origin of the book was an effort to show that spline theory parallels Hilbertian Kernel theory, not only for splines derived from minimization of a quadratic functional but more generally for splines considered as piecewise functions type. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.
Zhang, J.; Liu, Q.; Li, X.; Niu, H.; Cai, E.
2015-12-01
In recent years, wireless sensor networks (WSN) have emerged to collect Earth observation data at relatively low cost and labor load, but their observations are still point data. To learn the spatial distribution of a land-surface parameter, interpolating the point data is necessary. Taking soil moisture (SM) as an example, its spatial distribution is critical information for agricultural management and for hydrological and ecological research. This study developed a method to interpolate WSN-measured SM to acquire its spatial distribution in a 5 km × 5 km study area located in the middle reaches of the Heihe River, western China. As SM is related to many factors such as topography, soil type, vegetation, etc., even the WSN observation grid is not dense enough to reflect the SM distribution pattern. Our idea is to revise the traditional kriging algorithm, introducing spectral variables, i.e., vegetation index (VI) and albedo, from satellite imagery as supplementary information to aid the interpolation. Thus, the new extended-kriging algorithm operates on the combined spatial and spectral space. To run the algorithm, we first need to estimate the SM variance function, which is also extended to the combined space. As the number of WSN samples in the study area is not enough to gather robust statistics, we have to assume that the SM variance function is invariant over time. The variance function is therefore estimated from an SM map derived from airborne CASI/TASI images acquired on July 10, 2012, and then applied to interpolate WSN data in that season. Data analysis indicates that the new algorithm can provide more detail on the variation of land SM. Leave-one-out cross-validation is then adopted to estimate the interpolation accuracy. Although a reasonable accuracy can be achieved, the result is not yet satisfactory. Besides improving the algorithm, the uncertainties in WSN measurements may also need to be controlled in our further work.
Sixtus, Frederick
2009-01-01
Contents: Interpol - A brief historical outline - Interpol today - Structure - Interpol's core functions. Europol (or: the European Police Office) - A brief historical outline - Europol today - Structure - The oversight of Europol - Europol's core tasks - How do the international police authorities actually work? - Harbingers of a world police?
Directory of Open Access Journals (Sweden)
A. M. Novikova
2016-01-01
Full Text Available In this article, the relevance of actively using modern methods of spatial analysis of oceanographic data from the perspective of geoinformation and geostatistical approaches is substantiated. The possibilities of some statistical modules of the open-source GIS QGIS for the practical problem of rapid data-quality assessment are analyzed. The quality of the QGIS interpolation modules, which implement kriging and radial basis function (spline) methods, is estimated using an array of sparsely distributed data.
Sahabiev, I. A.; Ryazanov, S. S.; Kolcova, T. G.; Grigoryan, B. R.
2018-03-01
The three most common techniques for interpolating soil properties at the field scale, ordinary kriging (OK), regression kriging with a multiple linear regression drift model (RK + MLR), and regression kriging with a principal component regression drift model (RK + PCR), were examined. The results of the study were compiled into an algorithm for choosing the most appropriate soil mapping technique. Relief attributes were used as the auxiliary variables. When the spatial dependence of a target variable was strong, the OK method showed more accurate interpolation results, and the inclusion of the auxiliary data resulted in an insignificant improvement in prediction accuracy. According to the algorithm, the RK + PCR method effectively eliminates multicollinearity of the explanatory variables. However, if the number of predictors is less than ten, the probability of multicollinearity is reduced, and application of the PCR becomes irrational. In that case, multiple linear regression should be used instead.
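The OK baseline among the three techniques can be sketched as a small linear system; this is a minimal ordinary-kriging predictor under an assumed exponential covariance model with invented sample data, and it does not include the regression-kriging drift variants:

```python
import numpy as np

xs = np.array([0.0, 1.0, 3.0, 4.0, 7.0])   # sample locations
zs = np.sin(xs)                             # observed soil property (toy values)

def cov(h):                                 # assumed exponential covariance
    return np.exp(-np.abs(h) / 2.0)

n = xs.size
K = np.ones((n + 1, n + 1))                 # OK system, bordered by the
K[:n, :n] = cov(xs[:, None] - xs[None, :])  # unbiasedness (Lagrange) row/col
K[n, n] = 0.0

def ok_predict(x0):
    rhs = np.append(cov(x0 - xs), 1.0)
    w = np.linalg.solve(K, rhs)[:n]         # kriging weights, summing to 1
    return w @ zs
```

Without a nugget term the predictor honors the data exactly at sample locations, which is the exact-interpolation property of OK that the drift-based RK variants inherit.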
Efficient computation of smoothing splines via adaptive basis sampling
Ma, Ping
2015-06-24
© 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
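The core computational idea (evaluating a smoothing-type fit with far fewer basis functions than data points) can be sketched with penalized regression on a small basis. The Gaussian basis, quantile-spaced centers, and ridge penalty below are simplifying assumptions; the paper's scheme instead samples basis centers adaptively using the response values:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000                                   # "large" sample size
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(4 * np.pi * x) + rng.normal(0.0, 0.1, n)

q = 40                                     # q << n basis functions
centers = np.quantile(x, np.linspace(0.0, 1.0, q))
width = 0.05

# design matrix of q Gaussian bumps evaluated at all n points
B = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width)**2)
ridge = 1e-3                               # roughness stand-in penalty
coef = np.linalg.solve(B.T @ B + ridge * np.eye(q), B.T @ y)
y_hat = B @ coef
```

The solve involves a q x q system rather than n x n, which is the O(n q²) versus O(n³) saving that motivates basis sampling in the first place.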
Detrending of non-stationary noise data by spline techniques
International Nuclear Information System (INIS)
Behringer, K.
1989-11-01
An off-line method for detrending non-stationary noise data has been investigated. It uses a least squares spline approximation of the noise data with equally spaced breakpoints. Subtraction of the spline approximation from the noise signal at each data point gives a residual noise signal. The method acts as a high-pass filter with very sharp frequency cutoff. The cutoff frequency is determined by the breakpoint distance. The steepness of the cutoff is controlled by the spline order. (author) 12 figs., 1 tab., 5 refs
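The breakpoint-controlled high-pass behaviour described above can be sketched with a least-squares spline on equally spaced interior breakpoints; the trend, fast component, and breakpoint spacing below are invented for illustration:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

t = np.linspace(0.0, 10.0, 1000)
trend = 0.5 * t + np.sin(0.4 * t)        # slow, non-stationary trend
noise = 0.2 * np.sin(15.0 * t)           # fast component to be preserved
y = trend + noise

breakpoints = np.linspace(1.0, 9.0, 9)   # equally spaced interior breakpoints
spline = LSQUnivariateSpline(t, y, breakpoints, k=3)
residual = y - spline(t)                 # high-pass filtered residual signal
```

Because the breakpoint spacing (about 1 time unit) is wider than the fast component's period, the spline follows only the trend, and subtracting it leaves the fast signal: the cutoff frequency is set by the breakpoint distance, as the abstract states.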
International Nuclear Information System (INIS)
Ivashchenko, V.I.
1990-01-01
A modified variant of the LCAO interpolation scheme for the calculation of the electronic spectra of transition-metal compounds with NaCl-type structure is suggested. Using the coherent potential method in combination with the LCAO interpolation scheme, the partial densities of states and the X-ray K_β5 and L_III spectra of the metal in TiC_x (x = 1.0, 0.9, 0.8, 0.7) and VC_x (x = 1.0, 0.88, 0.8, 0.73) were calculated. The influence of carbon vacancies on the electronic spectrum and on the shape of the X-ray emission bands of titanium and vanadium carbides was studied. The data obtained are compared with the calculated results of other authors and with experimental characteristics
Local Convexity-Preserving C 2 Rational Cubic Spline for Convex Data
Abd Majid, Ahmad; Ali, Jamaludin Md.
2014-01-01
We present a smooth and visually pleasant display of 2D convex data, which is a contribution toward improving existing methods and can be used to obtain more accurate results. An attempt has been made to develop a local convexity-preserving interpolant for convex data using a C² rational cubic spline. It involves three families of shape parameters in its representation. Data-dependent sufficient constraints are imposed on a single shape parameter to conserve the inherited shape feature of the data. The remaining two shape parameters are used for the modification of the convex curve to obtain a visually pleasing curve according to industrial demand. The scheme is tested through several numerical examples, showing that it is local, computationally economical, and visually pleasing. PMID:24757421
Interpolation of diffusion weighted imaging datasets
DEFF Research Database (Denmark)
Dyrby, Tim B; Lundell, Henrik; Burke, Mark W
2014-01-01
by the interpolation method used should be considered. The results indicate that conventional interpolation methods can be successfully applied to DWI datasets for mining anatomical details that are normally seen only at higher resolutions, which will aid in tractography and microstructural mapping of tissue...
An adaptive interpolation scheme for molecular potential energy surfaces
Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa
2016-08-01
The calculation of potential energy surfaces for quantum dynamics can be a time consuming task—especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows to greatly reduce the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
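The adaptive node refinement described above can be sketched with a polyharmonic (thin-plate) spline interpolant and a greedy refinement loop. The 2D test surface stands in for an expensive potential energy surface, and for brevity the true error on a candidate set is used as the refinement indicator, where the paper uses a local error estimate so that new electronic-structure evaluations are requested only where needed:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def surface(p):                    # stand-in for an expensive PES evaluation
    return np.sin(3.0 * p[:, 0]) * np.cos(2.0 * p[:, 1])

rng = np.random.default_rng(4)
pts = rng.uniform(-1.0, 1.0, (25, 2))          # initial sample nodes
vals = surface(pts)
cand = rng.uniform(-1.0, 1.0, (400, 2))        # candidate refinement nodes

def max_err(points, values):
    itp = RBFInterpolator(points, values, kernel='thin_plate_spline')
    return float(np.abs(itp(cand) - surface(cand)).max())

err_start = max_err(pts, vals)
for _ in range(15):                            # greedy adaptive refinement
    itp = RBFInterpolator(pts, vals, kernel='thin_plate_spline')
    j = int(np.argmax(np.abs(itp(cand) - surface(cand))))
    pts = np.vstack([pts, cand[j:j+1]])        # add the worst-error node
    vals = np.append(vals, surface(cand[j:j+1]))
err_end = max_err(pts, vals)
```

Each added node is placed exactly where the current interpolant is worst, which is how adaptive refinement reduces the number of expensive surface evaluations for a given target accuracy.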
Optimization of straight-sided spline design
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2011-01-01
Spline connection of shaft and hub is commonly applied when large torque capacity is needed together with the possibility of disassembly. The designs of these splines are generally controlled by different standards. In view of the common use of splines, it seems that few papers deal with splines ...
Directory of Open Access Journals (Sweden)
Kalle Remm
2011-08-01
Full Text Available Maps of the long-term mean precipitation involving local landscape variables were generated for the Baltic countries, and the effectiveness of seven modelling methods was compared. The precipitation data were recorded at 245 meteorological stations in 1966–2005, and 51 location-related explanatory variables were used. Similarity-based reasoning in the Constud software system outperformed the other methods according to the validation fit, except for spring. Multivariate adaptive regression splines (MARS) was another effective method on average. The inclusion of landscape variables, compared to inverse distance-weighted interpolation, highlights the effect of uplands, larger water bodies, and forested areas. The long-term mean amount of precipitation, calculated as the station average, probably underestimates the real value for Estonia and overestimates it for Lithuania due to the uneven distribution of observation stations.
Monotone piecewise bicubic interpolation
International Nuclear Information System (INIS)
Carlson, R.E.; Fritsch, F.N.
1985-01-01
In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone C¹ piecewise bicubic interpolation to data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and first mixed partial (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated to a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm. 4 references, 6 figures, 2 tables
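The univariate starting point the abstract refers to can be sketched as a monotone piecewise-cubic Hermite interpolant in the spirit of Fritsch and Carlson: node derivatives are limited so the curve is monotone wherever the data are. This is a minimal illustration, not the paper's bicubic algorithm; the endpoint derivatives use a simplified one-sided rule.

```python
import numpy as np

def monotone_cubic(x, y):
    """Monotone piecewise-cubic Hermite interpolant: derivatives at the
    nodes are a weighted harmonic mean of secant slopes, zeroed at local
    extrema, so monotone data yield a monotone curve."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    h = np.diff(x)
    delta = np.diff(y) / h                        # secant slopes
    d = np.zeros_like(y)
    d[0], d[-1] = delta[0], delta[-1]             # simple one-sided ends
    for k in range(1, len(x) - 1):
        if delta[k - 1] * delta[k] <= 0:
            d[k] = 0.0                            # local extremum: flatten
        else:                                     # weighted harmonic mean
            w1 = 2 * h[k] + h[k - 1]
            w2 = h[k] + 2 * h[k - 1]
            d[k] = (w1 + w2) / (w1 / delta[k - 1] + w2 / delta[k])

    def f(t):
        t = np.atleast_1d(np.asarray(t, float))
        i = np.clip(np.searchsorted(x, t) - 1, 0, len(x) - 2)
        s = (t - x[i]) / h[i]
        return ((2*s**3 - 3*s**2 + 1) * y[i] + (s**3 - 2*s**2 + s) * h[i] * d[i]
                + (-2*s**3 + 3*s**2) * y[i + 1] + (s**3 - s**2) * h[i] * d[i + 1])
    return f
```

Note how a flat data segment (equal neighboring values) produces an exactly flat interpolant, which an unconstrained cubic spline would overshoot.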
Application of multivariate splines to discrete mathematics
Xu, Zhiqiang
2005-01-01
Using methods developed in multivariate splines, we present an explicit formula for discrete truncated powers, which are defined as the number of non-negative integer solutions of linear Diophantine equations. We further use the formula to study some classical problems in discrete mathematics as follows. First, we extend the partition function of integers in number theory. Second, we exploit the relation between the relative volume of convex polytopes and multivariate truncated powers and giv...
Directory of Open Access Journals (Sweden)
Nikesh S. Dattani
2012-03-01
Full Text Available One of the most successful methods for calculating reduced density operator dynamics in open quantum systems, one that can give numerically exact results, uses Feynman integrals. However, when simulating the dynamics for a given amount of time, the number of time steps that can realistically be used with this method is always limited, so one often obtains an approximation of the reduced density operator at a sparse grid of points in time. Instead of relying only on ad hoc interpolation methods (such as splines) to estimate the system density operator in between these points, I propose a method that uses physical information to assist with this interpolation. This method is tested on a physically significant system, on which its use allows important qualitative features of the density operator dynamics to be captured with as little as two time steps in the Feynman integral. This method allows for an enormous reduction in the amount of memory and CPU time required for approximating density operator dynamics within a desired accuracy. Since this method does not change the way the Feynman integral itself is calculated, the value of the density operator approximation at the points in time used to discretize the Feynman integral will be the same whether or not this method is used, but its approximation in between these points in time is considerably improved by this method. A list of ways in which this proposed method can be further improved is presented in the last section of the article.
Linear interpolation of histograms
Read, A L
1999-01-01
A prescription is defined for the interpolation of probability distributions that are assumed to have a linear dependence on a parameter of the distributions. The distributions may be in the form of continuous functions or histograms. The prescription is based on the weighted mean of the inverses of the cumulative distributions between which the interpolation is made. The result is particularly elegant for a certain class of distributions, including the normal and exponential distributions, and is useful for the interpolation of Monte Carlo simulation results which are time-consuming to obtain.
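The prescription above interpolates the inverse cumulative distributions (quantile functions) rather than the densities themselves. A toy illustration of the idea, with hypothetical uniform distributions standing in for the histograms of the paper:

```python
def interpolate_quantiles(q0, q1, alpha):
    """Interpolate two distributions given as quantile arrays on a common
    probability grid: the interpolated distribution's quantile function is
    the weighted mean of the two input inverse-CDF functions."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(q0, q1)]

# Quantiles of Uniform(0, 1) and Uniform(2, 4) on the grid p = 0, 0.1, ..., 1
p = [i / 10 for i in range(11)]
q_low = [x for x in p]                  # F^-1(p) = p
q_high = [2 + 2 * x for x in p]         # F^-1(p) = 2 + 2p
q_mid = interpolate_quantiles(q_low, q_high, 0.5)
```

Here the halfway result is exactly Uniform(1, 2.5): the endpoints and the spread are interpolated linearly, which is the "elegant" behavior the abstract notes for certain distribution families.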
Integration and interpolation of sampled waveforms
International Nuclear Information System (INIS)
Stearns, S.D.
1978-01-01
Methods for integrating, interpolating, and improving the signal-to-noise ratio of digitized waveforms are discussed with regard to seismic data from underground tests. The frequency-domain integration method and the digital interpolation method of Schafer and Rabiner are described and demonstrated using test data. The use of bandpass filtering for noise reduction is also demonstrated. With these methods, a backlog of seismic test data has been successfully processed
USING SPLINE FUNCTIONS FOR THE SUBSTANTIATION OF TAX POLICIES BY LOCAL AUTHORITIES
Directory of Open Access Journals (Sweden)
Otgon Cristian
2011-07-01
Full Text Available The paper aims to approach innovative financial instruments for the management of public resources. Among these innovative tools are polynomial spline functions used for budgetary sizing in the substantiation of fiscal and budgetary policies. Using polynomial spline functions involves several steps: establishing the nodes, calculating the specific coefficients corresponding to the spline functions, and developing and determining the errors of approximation. This paper also extrapolates series of property tax data using polynomial spline functions of order I. For the spline implementation, two series of data were taken, one referring to property tax as a resultative variable and the second referring to building tax, resulting in a correlation indicator R = 0.95. Moreover, the spline functions are easy to compute and, due to small errors of approximation, have great predictive power, much better than the ordinary least squares method. The research followed several steps, namely observation, construction of the data series, and processing of the data with spline functions. The data form a daily series gathered from the budget account, referring to building tax and property tax. The added value of this paper lies in the possibility of avoiding deficits by using spline functions as innovative instruments in public finance; the original contribution is the average of splines resulting from the series of data. The research results lead to the conclusion that polynomial spline functions are recommended for the elaboration of fiscal and budgetary policies, due to the relatively small errors obtained in the extrapolation of economic processes and phenomena. Future research directions include studying polynomial spline functions of second and third order.
Directory of Open Access Journals (Sweden)
Marcelo Curtarelli
2015-02-01
Full Text Available The generation of reliable information for improving the understanding of hydroelectric reservoir dynamics is fundamental for guiding decision-makers to implement best management practices. In this way, we assessed the performance of different interpolation algorithms to map the bathymetry of the Tucuruí hydroelectric reservoir, located in the Brazilian Amazon, as an aid to manage and operate Amazonian reservoirs. We evaluated three different deterministic and one geostatistical algorithms. The performance of the algorithms was assessed through cross-validation and Monte Carlo Simulation. Finally, operational information was derived from the bathymetric grid with the best performance. The results showed that all interpolation methods were able to map important bathymetric features. The best performance was obtained with the geostatistical method (RMSE = 0.92 m. The information derived from the bathymetric map (e.g., the level-area and level-volume diagram and the three-dimensional grid will allow for optimization of operational monitoring of the Tucuruí hydroelectric reservoir as well as the development of three-dimensional modeling studies.
Extension Of Lagrange Interpolation
Directory of Open Access Journals (Sweden)
Mousa Makey Krady
2015-01-01
Full Text Available Abstract This paper presents a generalization of Lagrange interpolation polynomials in higher dimensions by using Cramer's formula. The aim is to construct polynomials in the space whose error tends to zero.
Energy Technology Data Exchange (ETDEWEB)
Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)
2016-01-15
We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality of different interpolation schemes, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on preliminary log–log scaling data transforms, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
Creasy, Arch; Barker, Gregory; Carta, Giorgio
2017-03-01
A methodology is presented to predict protein elution behavior from an ion exchange column using both individual or combined pH and salt gradients based on high-throughput batch isotherm data. The buffer compositions are first optimized to generate linear pH gradients from pH 5.5 to 7 with defined concentrations of sodium chloride. Next, high-throughput batch isotherm data are collected for a monoclonal antibody on the cation exchange resin POROS XS over a range of protein concentrations, salt concentrations, and solution pH. Finally, a previously developed empirical interpolation (EI) method is extended to describe protein binding as a function of the protein and salt concentration and solution pH without using an explicit isotherm model. The interpolated isotherm data are then used with a lumped kinetic model to predict the protein elution behavior. Experimental results obtained for laboratory scale columns show excellent agreement with the predicted elution curves for both individual or combined pH and salt gradients at protein loads up to 45 mg/mL of column. Numerical studies show that the model predictions are robust as long as the isotherm data cover the range of mobile phase compositions where the protein actually elutes from the column. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Application of Hardy's multiquadric interpolation to hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Kansa, E.J.
1985-10-01
Hardy's multiquadric interpolation (MQI) scheme is a global, continuously differentiable interpolation method for solving scattered data interpolation problems. It is capable of producing monotonic, extremely accurate interpolating functions, integrals, and derivatives. Derivative estimates for a variety of one- and two-dimensional surfaces were obtained. MQI was then applied to the spherical blast wave problem of von Neumann. The numerical solution agreed extremely well with the exact solution. 17 refs., 3 figs., 2 tabs.
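Hardy's scheme is simple to state: expand the interpolant in basis functions φ(r) = √(r² + c²) centered at the data points and solve a dense linear system for the weights. A minimal 1D sketch follows; the shape parameter c is a free choice here, not a value from the report.

```python
import numpy as np

def multiquadric_interp(x_nodes, y_nodes, c=1.0):
    """Hardy's multiquadric interpolation: solve the symmetric collocation
    system A w = y with A_ij = sqrt(|x_i - x_j|^2 + c^2)."""
    r = np.abs(x_nodes[:, None] - x_nodes[None, :])
    A = np.sqrt(r**2 + c**2)
    w = np.linalg.solve(A, y_nodes)               # interpolation weights

    def f(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        phi = np.sqrt((x[:, None] - x_nodes[None, :])**2 + c**2)
        return phi @ w
    return f
```

Because each basis function is globally supported and smooth, the interpolant (and its analytic derivatives) can be evaluated anywhere, which is what makes the method attractive for the derivative estimates mentioned above.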
Directory of Open Access Journals (Sweden)
Shulun Liu
2018-01-01
Full Text Available Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. Unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary Kriging (OK), were implemented. The four methods were first assessed through a cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and Bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods resulted in good interpolated rainfall, while NN led to the worst result. In terms of the impact on hydrological prediction, IDW led to the most consistent streamflow predictions with the observations, according to the validation at five streamflow-gauged locations
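Of the four methods compared, inverse distance weighting is the simplest to write down: each estimate is a weighted mean of gauge values with weights 1/dᵖ. A minimal sketch follows; the power p = 2 and the coincidence tolerance are illustrative choices, not values from the study.

```python
import numpy as np

def idw(xy_gauges, values, xy_query, power=2.0):
    """Inverse distance weighting: each query point receives a weighted
    mean of gauge values with weights 1 / d^power; a query point sitting
    exactly on a gauge returns that gauge's value."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_gauges[None, :, :], axis=2)
    out = np.empty(len(xy_query))
    for i, di in enumerate(d):
        hit = di < 1e-12
        if hit.any():
            out[i] = values[hit][0]       # query coincides with a gauge
        else:
            w = 1.0 / di**power
            out[i] = np.dot(w, values) / w.sum()
    return out
```

IDW estimates are always bounded by the minimum and maximum gauge values, which is one reason it behaves robustly in cross-validation compared with spline-based methods that can overshoot.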
B-spline solution of a singularly perturbed boundary value problem arising in biology
International Nuclear Information System (INIS)
Lin Bin; Li Kaitai; Cheng Zhengxing
2009-01-01
We use B-spline functions to develop a numerical method for solving a singularly perturbed boundary value problem associated with biology. We use the B-spline collocation method, which leads to a tridiagonal linear system. The accuracy of the proposed method is demonstrated by test problems. The numerical results are found to be in good agreement with the exact solution.
A Blossoming Development of Splines
Mann, Stephen
2006-01-01
In this lecture, we study Bezier and B-spline curves and surfaces, mathematical representations for free-form curves and surfaces that are common in CAD systems and are used to design aircraft and automobiles, as well as in modeling packages used by the computer animation industry. Bezier/B-splines represent polynomials and piecewise polynomials in a geometric manner using sets of control points that define the shape of the surface. The primary analysis tool used in this lecture is blossoming, which gives an elegant labeling of the control points that allows us to analyze their properties geometrically.
Traffic volume estimation using network interpolation techniques.
2013-12-01
Kriging method is a frequently used interpolation methodology in geography, which enables estimations of unknown values at : certain places with the considerations of distances among locations. When it is used in transportation field, network distanc...
Energy Technology Data Exchange (ETDEWEB)
Penteado, Miguel Suarez Xavier [Pos-Graduacao em Agronomia - Energia na Agricultura, FCA UNESP - Botucatu, SP (Brazil), Dept. de Recursos Naturais], e-mail: miguel_penteado@fca.unesp.br; Escobedo, Joao Francisco [Dept. de Recursos Naturais, FCA/UNESP, Botucatu, SP (Brazil)], e-mail: escobedo@fca.unesp.br; Dal Pai, Alexandre [Faculdade de Tecnologia de Botucatu - FATEC, Botucatu, SP (Brazil)], e-mail: adalpai@fatecbt.edu.br
2011-07-01
This work explores the suitability of the Lagrange interpolating polynomial as a tool to estimate and correct solar databases. From the known irradiance distribution over a day, a portion was removed and reconstructed by applying the Lagrange interpolation polynomial. After the estimates were generated by interpolation, the assessment was made with the MBE and RMS statistical indicators. The application of Lagrange interpolation produced the following results: an underestimation of 0.27% (MBE = -1.83 W/m²) and scattering of 0.51% (RMS = 3.48 W/m²). (author)
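For reference, the Lagrange interpolating polynomial can be evaluated directly from its basis-polynomial definition. This is a generic textbook sketch of the technique, not the code or data of the study above:

```python
def lagrange_eval(xs, ys, t):
    """Evaluate the Lagrange interpolating polynomial through the points
    (xs, ys) at t: sum over i of y_i times the basis polynomial
    l_i(t) = prod_{j != i} (t - x_j) / (x_i - x_j)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (t - xj) / (xi - xj)
        total += yi * li
    return total
```

By construction the polynomial reproduces the data at the nodes exactly, so the MBE/RMS errors quoted above measure only the reconstruction of the removed portion of the curve.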
Symmetric, discrete fractional splines and Gabor systems
DEFF Research Database (Denmark)
Søndergaard, Peter Lempel
2006-01-01
In this paper we consider fractional splines as windows for Gabor frames. We introduce two new types of symmetric, fractional splines in addition to one found by Unser and Blu. For the finite, discrete case we present two families of splines: one is created by sampling and periodizing the continuous splines, and one is a truly finite, discrete construction. We discuss the properties of these splines and their usefulness as windows for Gabor frames and Wilson bases.
Isogeometric analysis using T-splines
Bazilevs, Yuri
2010-01-01
We explore T-splines, a generalization of NURBS enabling local refinement, as a basis for isogeometric analysis. We review T-splines as a surface design methodology and then develop it for engineering analysis applications. We test T-splines on some elementary two-dimensional and three-dimensional fluid and structural analysis problems and attain good results in all cases. We summarize the current status of T-splines, their limitations, and future possibilities. © 2009 Elsevier B.V.
Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint
Energy Technology Data Exchange (ETDEWEB)
Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.
2015-02-01
Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.
Cubic spline functions for curve fitting
Young, J. D.
1972-01-01
FORTRAN cubic spline routine mathematically fits a curve through a given ordered set of points so that the fitted curve closely approximates the curve generated by passing an infinitely thin spline through the set of points. The generalized formulation includes trigonometric, hyperbolic, and damped cubic spline fits of third order.
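A natural cubic spline of the kind such routines implement can be sketched by solving the classical tridiagonal system for the second derivatives at the nodes. This is a generic illustration of the standard algorithm, not the FORTRAN routine described above:

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Natural cubic spline: the node second derivatives M solve a
    tridiagonal system with end conditions M[0] = M[-1] = 0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    h = np.diff(x)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0                    # natural end conditions
    for i in range(1, n - 1):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)

    def s(t):
        t = np.atleast_1d(np.asarray(t, float))
        i = np.clip(np.searchsorted(x, t) - 1, 0, n - 2)
        a, b = x[i + 1] - t, t - x[i]
        return (M[i] * a**3 / (6 * h[i]) + M[i + 1] * b**3 / (6 * h[i])
                + (y[i] / h[i] - M[i] * h[i] / 6) * a
                + (y[i + 1] / h[i] - M[i + 1] * h[i] / 6) * b)
    return s
```

The "natural" end condition (zero curvature at the ends) mimics the physical draftsman's spline of the abstract, which straightens out beyond the last pins.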
Interferometric interpolation of sparse marine data
Hanafy, Sherif M.
2013-10-11
We present the theory and numerical results for interferometrically interpolating 2D and 3D marine surface seismic profile data. For the interpolation of seismic data we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both recorded and model-based Green's functions can help in minimizing artefacts in a virtual shot gather. If the up- and downgoing separation is not possible, noticeable artefacts will be generated in the virtual shot gather. As a partial remedy we iteratively use a non-stationary 1D multi-channel matching filter with the interpolated data. Results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Comparing our results to those of f-k interpolation shows that the synthetic example gives comparable results while the field example shows better interpolation quality for the interferometric method. © 2013 European Association of Geoscientists & Engineers.
A Multidimensional Spline Based Global Nonlinear Aerodynamic Model for the Cessna Citation II
De Visser, C.C.; Mulder, J.A.
2010-01-01
A new method is proposed for the identification of global nonlinear models of aircraft non-dimensional force and moment coefficients. The method is based on a recent type of multivariate spline, the multivariate simplex spline, which can accurately approximate very large, scattered nonlinear
Directory of Open Access Journals (Sweden)
Mengmeng Wang
2017-12-01
Full Text Available Near surface air temperature (NSAT) is a primary descriptor of terrestrial environmental conditions. In recent decades, many efforts have been made to develop various methods for obtaining spatially continuous NSAT from gauge or station observations. This study compared three spatial interpolation models (Kriging, Spline, and Inverse Distance Weighting (IDW)) and two regression analysis models (Multiple Linear Regression (MLR) and Geographically Weighted Regression (GWR)) for predicting monthly minimum, mean, and maximum NSAT in China, a domain with a large area, complex topography, and highly variable station density. This was conducted for the 12 months of 2010. The accuracy of the GWR model is better than that of the MLR model, with an improvement of about 3 °C in the Root Mean Squared Error (RMSE), which indicates that the GWR model is more suitable for predicting monthly NSAT than the MLR model over a large scale. For the three spatial interpolation models, the RMSEs of the predicted monthly NSAT are greater in the warmer months, and the mean RMSEs of the predicted monthly mean NSAT for the 12 months of 2010 are 1.56 °C for the Kriging model, 1.74 °C for the IDW model, and 2.39 °C for the Spline model, respectively. The GWR model is better than the Kriging model in the warmer months, while the Kriging model is superior to the GWR model in the colder months. The total precision of the GWR model is slightly higher than that of the Kriging model. The assessment results indicate that a higher standard deviation and a lower mean of NSAT in the sample data are associated with better performance in predicting monthly NSAT using spatial interpolation models.
Directory of Open Access Journals (Sweden)
Xihua Yang
2015-01-01
Full Text Available This paper presents spatial interpolation techniques to produce finer-scale daily rainfall data from regional climate modeling. Four common interpolation techniques (ANUDEM, Spline, IDW, and Kriging) were compared and assessed against station rainfall data and modeled rainfall. The performance was assessed by the mean absolute error (MAE), mean relative error (MRE), root mean squared error (RMSE), and the spatial and temporal distributions. The results indicate that the Inverse Distance Weighting (IDW) method is slightly better than the other three methods and is also easy to implement in a geographic information system (GIS). The IDW method was then used to produce forty-year (1990–2009 and 2040–2059) time series rainfall data at daily, monthly, and annual time scales at a ground resolution of 100 m for the Greater Sydney Region (GSR). The downscaled daily rainfall data have been further utilized to predict rainfall erosivity and soil erosion risk and their future changes in the GSR to support assessments and planning of climate change impact and adaptation at the local scale.
Connecting the Dots Parametrically: An Alternative to Cubic Splines.
Hildebrand, Wilbur J.
1990-01-01
Discusses a method of cubic splines to determine a curve through a series of points and a second method for obtaining parametric equations for a smooth curve that passes through a sequence of points. Procedures for determining the curves and results of each of the methods are compared. (YP)
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated by using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial, and smoothing spline. The LOOCV pricing error results show that interpolation using a fourth-order polynomial provides the best fit to option prices, having the lowest pricing error.
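The leave-one-out cross-validation criterion used for interpolation selection can be sketched generically: refit the model with each observation held out and score the prediction at the held-out point. The straight-line candidate model below is a hypothetical stand-in for the polynomial and spline fits of the study:

```python
def loocv_error(x, y, fit_predict):
    """Leave-one-out cross-validation: for each point, refit on the
    remaining data and accumulate the squared prediction error at the
    held-out point; return the mean squared LOOCV error."""
    err = 0.0
    for k in range(len(x)):
        xs = x[:k] + x[k + 1:]
        ys = y[:k] + y[k + 1:]
        err += (fit_predict(xs, ys, x[k]) - y[k]) ** 2
    return err / len(x)

def linear_fit_predict(xs, ys, t):
    """Hypothetical candidate model: least-squares straight line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((a - mx) * (c - my) for a, c in zip(xs, ys))
         / sum((a - mx) ** 2 for a in xs))
    return my + b * (t - mx)
```

Candidate interpolation schemes are then ranked by their LOOCV error, and the one with the smallest value is selected, exactly the role the LOOCV pricing error plays in the abstract.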
Interpolation of uniformly absolutely continuous operators
Czech Academy of Sciences Publication Activity Database
Cobos, F.; Gogatishvili, Amiran; Opic, B.; Pick, L.
2013-01-01
Roč. 286, 5-6 (2013), s. 579-599 ISSN 0025-584X R&D Projects: GA ČR GA201/08/0383 Institutional support: RVO:67985840 Keywords: uniformly absolutely continuous operators * interpolation * type of an interpolation method Subject RIV: BA - General Mathematics Impact factor: 0.658, year: 2013 http://onlinelibrary.wiley.com/doi/10.1002/mana.201100205/full
Image Interpolation Scheme based on SVM and Improved PSO
Jia, X. F.; Zhao, B. T.; Liu, X. X.; Song, H. P.
2018-01-01
In order to obtain visually pleasing images, a support vector machine (SVM) based interpolation scheme is proposed, in which improved particle swarm optimization is applied to optimize the support vector machine parameters. Training samples are constructed from the pixels around the pixel to be interpolated. The support vector machine with optimal parameters is then trained using these samples. After training, we obtain the interpolation model, which can be employed to estimate the unknown pixel. Experimental results show that the interpolated images achieve improved PSNR compared with traditional interpolation methods, which agrees with the subjective quality.
Suparta, Wayan; Rahman, Rosnani
2016-02-01
Global Positioning System (GPS) receivers are widely installed throughout Peninsular Malaysia, but their implementation for monitoring weather hazards such as flash floods is still not optimal. To increase the benefit for meteorological applications, the GPS system should be installed in collocation with meteorological sensors so that the precipitable water vapor (PWV) can be measured. The distribution of PWV is a key element of the Earth's climate for quantitative precipitation improvement as well as flash flood forecasts. The accuracy of this parameter depends to a large extent on the number of GPS receiver installations and meteorological sensors in the targeted area. Due to cost constraints, a spatial interpolation method is proposed to address these issues. In this paper, we investigated the spatial distribution of GPS PWV and meteorological variables (surface temperature, relative humidity, and rainfall) by using thin plate spline (tps) and ordinary kriging (Krig) interpolation techniques over the Klang Valley in Peninsular Malaysia (longitude: 99.5°-102.5°E and latitude: 2.0°-6.5°N). Three flash flood cases in September, October, and December 2013 were studied. The analysis was performed using the mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2) to determine the accuracy and reliability of the interpolation techniques. Results evaluated at different phases (pre, onset, and post) showed that the tps interpolation technique is more accurate, reliable, and highly correlated in estimating GPS PWV and relative humidity, whereas Krig is more reliable for predicting temperature and rainfall during pre-flash-flood events. During the onset of flash flood events, both methods showed good interpolation in estimating all meteorological parameters with high accuracy and reliability. The findings suggest that the proposed spatial interpolation techniques are capable of handling limited data sources with high
Directory of Open Access Journals (Sweden)
Marcelo R. Viola
2010-09-01
evaluate the performance that was conducted on the basis of absolute mean error. In addition, a digital elevation model, with a resolution of 270 m, was applied. The interpolators have shown good performance, with mean errors varying from 12.84 to 19.96%, the co-kriging method presenting a smaller absolute mean error in 50% of the situations evaluated.
Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs
Howell, Lauren R.; Allen, B. Danette
2016-01-01
A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions in which humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of three-dimensional Euclidean space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
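Once the control points are known, a Bezier curve is evaluated by de Casteljau's algorithm, repeated linear interpolation of the control polygon. A minimal sketch follows; the control points are illustrative, not taken from the paper.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeatedly
    linearly interpolating adjacent points of the control polygon until
    a single point remains (de Casteljau's algorithm)."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```

The scheme works in any dimension (the tuples could equally be 3D waypoint coordinates) and is numerically stable, which is one reason it is a common building block in trajectory generation.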
Momentum analysis by using a quintic spline model for the track
Wind, H
1974-01-01
A method is described to determine the momentum of a particle when the (inhomogeneous) analysing magnetic field and the position of at least three points on the track are known. The model of the field is essentially a cubic spline and that of the track a quintic spline. (8 refs).
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…
A FAST MORPHING-BASED INTERPOLATION FOR MEDICAL IMAGES: APPLICATION TO CONFORMAL RADIOTHERAPY
Directory of Open Access Journals (Sweden)
Hussein Atoui
2011-05-01
A method is presented for fast interpolation between medical images. The method is intended for both slice and projective interpolation. It allows offline interpolation between neighboring slices in tomographic data. Spatial correspondence between adjacent images is established using a block matching algorithm. Interpolation of image intensities is then carried out by morphing between the images. The morphing-based method is compared to standard linear interpolation, block-matching-based interpolation, and registration-based interpolation in 3D tomographic data sets. Results show that the proposed method performs similarly to registration-based interpolation and significantly outperforms both linear and block-matching-based interpolation. The method is applied in the context of conformal radiotherapy for online projective interpolation between digitally reconstructed radiographs (DRRs).
Piecewise linear regression splines with hyperbolic covariates
International Nuclear Information System (INIS)
Cologne, John B.; Sposto, Richard
1992-09-01
Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller and Watts and Bacon to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
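The hyperbolic-covariate idea above can be sketched numerically. The parameterization below is a generic smooth hinge (a hyperbola that approaches `max(0, x - knot)` as `gamma` shrinks), not the authors' exact model; the data, noise level, and starting values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic_hinge(x, knot, gamma):
    # Smooth hyperbolic version of max(0, x - knot);
    # gamma controls the curvature at the join point.
    return 0.5 * ((x - knot) + np.sqrt((x - knot) ** 2 + gamma ** 2))

def two_phase(x, b0, b1, b2, knot, gamma):
    # Two linear segments with slopes b1 and b1 + b2, smoothly joined.
    return b0 + b1 * x + b2 * hyperbolic_hinge(x, knot, gamma)

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y_true = two_phase(x, 1.0, 0.5, 2.0, 5.0, 0.3)
y = y_true + rng.normal(0.0, 0.1, x.size)

# Nonlinear least squares recovers the join point and curvature.
popt, _ = curve_fit(two_phase, x, y, p0=[0.0, 0.0, 1.0, 4.0, 1.0])
```

Extending to more than two segments amounts to adding one hinge term per additional join point, as the abstract describes.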
Directory of Open Access Journals (Sweden)
M. H. Nazarifar
2014-01-01
Full Text Available Water is the main constraint for production of agricultural crops. The temporal and spatial variations in water requirement for agriculture products are limiting factors in the study of optimum use of water resources in regional planning and management. However, due to unfavorable distribution and density of meteorological stations, it is not possible to monitor the regional variations precisely. Therefore, there is a need to estimate the evapotranspiration of crops at places where meteorological data are not available and then extend the findings from points of measurements to regional scale. Geostatistical methods are among those methods that can be used for estimation of evapotranspiration at regional scale. The present study attempts to investigate different geostatistical methods for temporal and spatial estimation of water requirements for wheat crop in different periods. The study employs the data provided by 16 synoptic and climatology meteorological stations in Hamadan province in Iran. Evapotranspiration for each month and for the growth period were determined using Penman-Mantis and Torrent-White methods for different water periods based on Standardized Precipitation Index (SPI. Among the available geostatistical methods, three methods: Kriging Method, Cokriging Method, and inverse weighted distance were selected, and analyzed, using GS+ software. Analysis and selection of the suitable geostatistical method were performed based on two measures, namely Mean Absolute Error (MAE and Mean Bias Error (MBE. The findings suggest that, in general, during the drought period, Kriging method is the proper one for estimating water requirements for the six months: January, February, April, May, August, and December. However, weighted moving average is a better estimation method for the months March, June, September, and October. In addition, Kriging is the best method for July. In normal conditions, Kriging is suitable for April, August, December
A cubic spline approximation for problems in fluid mechanics
Rubin, S. G.; Graves, R. A., Jr.
1975-01-01
A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.
Viscous flow solutions with a cubic spline approximation
Rubin, S. G.; Graves, R. A., Jr.
1975-01-01
A cubic spline approximation is used for the solution of several problems in fluid mechanics. This procedure provides a high degree of accuracy even with a nonuniform mesh, and leads to a more accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several typical integration schemes are presented. For two-dimensional flows a spline-alternating-direction-implicit (SADI) method is evaluated. The spline procedure is assessed and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.
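As a rough illustration of the accuracy of cubic-spline derivative approximation on a nonuniform mesh, the sketch below uses SciPy's `CubicSpline` as a stand-in for the paper's scheme; the test function and the Chebyshev-spaced mesh are assumptions chosen only to make the comparison concrete.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Nonuniform (Chebyshev-spaced) mesh on [0, 1]: fine near the ends,
# coarser in the middle.
x = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, 41)))
f = np.tanh(5.0 * (x - 0.5))

spl = CubicSpline(x, f)
df_spline = spl(x, 1)                           # spline estimate of f'
df_exact = 5.0 / np.cosh(5.0 * (x - 0.5)) ** 2  # analytic derivative

err = float(np.max(np.abs(df_spline - df_exact)))
```

Even with the irregular spacing, the spline derivative tracks the analytic one closely, which is the property the abstract highlights for derivative boundary conditions.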
Directory of Open Access Journals (Sweden)
Oussama eAbdoun
2011-01-01
A major characteristic of neural networks is the complexity of their organization at various spatial scales, from microscopic local circuits to macroscopic brain-scale areas. Understanding how neural information is processed thus entails the ability to study them at multiple scales simultaneously. This is made possible using microelectrode array (MEA) technology. Indeed, high-density MEAs provide large-scale coverage (several mm²) of whole neural structures combined with microscopic resolution (about 50 µm) of unit activity. Yet, current options for spatiotemporal representation of MEA-collected data remain limited. Here we present NeuroMap, a new interactive Matlab-based software for spatiotemporal mapping of MEA data. NeuroMap uses thin plate spline interpolation, which provides several assets with respect to conventional mapping methods used currently. First, any MEA design can be considered, including 2D or 3D, regular or irregular, arrangements of electrodes. Second, spline interpolation allows the estimation of activity across the tissue with local extrema not necessarily at recording sites. Finally, this interpolation approach provides a straightforward analytical estimation of the spatial Laplacian for better current source localization. In this software, coregistration of 2D MEA data on the anatomy of the neural tissue is made possible by fine matching of anatomical data with electrode positions using rigid-deformation-based correction of anatomical pictures. Overall, NeuroMap provides substantial material for detailed spatiotemporal analysis of MEA data. The package is distributed under the GNU General Public License (GPL) and is available at http://sites.google.com/site/neuromapsoftware.
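The thin plate spline mapping described above can be sketched with SciPy's radial basis function interpolator (this is not NeuroMap's code; the electrode layout and the synthetic "activity" field are assumptions for illustration):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
# Irregular 2D electrode positions: any MEA layout works, which is one
# of the advantages noted in the abstract.
sites = rng.uniform(0.0, 1.0, size=(60, 2))
volts = np.sin(3 * sites[:, 0]) * np.cos(2 * sites[:, 1])  # synthetic activity

# Thin plate spline interpolant (exact at the recording sites).
tps = RBFInterpolator(sites, volts, kernel='thin_plate_spline')

# Dense grid for mapping; local extrema of the estimated field need not
# fall on recording sites.
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
field = tps(grid).reshape(50, 50)
```

Because the thin plate spline has a closed analytical form, its Laplacian can also be evaluated analytically, which is what NeuroMap exploits for current source localization.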
Thamareerat, N; Luadsong, A; Aschariyaphotha, N
2016-01-01
In this paper, we present a numerical scheme for solving the nonlinear time-fractional Navier-Stokes equations in two dimensions. We first employ the meshless local Petrov-Galerkin (MLPG) method based on a local weak formulation to form the system of discretized equations, and then approximate the time-fractional derivative, interpreted in the sense of Caputo, by a simple quadrature formula. The moving Kriging interpolation, which possesses the Kronecker delta property, is applied to construct the shape functions. This research aims to extend and further develop the applicability of the truly meshless MLPG method to the generalized incompressible Navier-Stokes equations. Two numerical examples are provided to illustrate the accuracy and efficiency of the proposed algorithm. Very good agreement between the numerically and analytically computed solutions is observed in the verification. The present MLPG method has proved its efficiency and reliability for solving the two-dimensional time-fractional Navier-Stokes equations arising in fluid dynamics, as well as several other problems in science and engineering.
Measurement and tricubic interpolation of the magnetic field for the OLYMPUS experiment
International Nuclear Information System (INIS)
Bernauer, J.C.; Diefenbach, J.; Elbakian, G.; Gavrilov, G.; Goerrissen, N.; Hasell, D.K.; Henderson, B.S.; Holler, Y.; Karyan, G.; Ludwig, J.; Marukyan, H.; Naryshkin, Y.; O'Connor, C.; Russell, R.L.; Schmidt, A.; Schneekloth, U.; Suvorov, K.; Veretennikov, D.
2016-01-01
The OLYMPUS experiment used a 0.3 T toroidal magnetic spectrometer to measure the momenta of outgoing charged particles. In order to accurately determine particle trajectories, knowledge of the magnetic field was needed throughout the spectrometer volume. For that purpose, the magnetic field was measured at over 36,000 positions using a three-dimensional Hall probe actuated by a system of translation tables. We used these field data to fit a numerical magnetic field model, which could be employed to calculate the magnetic field at any point in the spectrometer volume. Calculations with this model were computationally intensive; for analysis applications where speed was crucial, we pre-computed the magnetic field and its derivatives on an evenly spaced grid so that the field could be interpolated between grid points. We developed a spline-based interpolation scheme suitable for SIMD implementations, with a memory layout chosen to minimize space and optimize the cache behavior to quickly calculate field values. This scheme requires only one-eighth of the memory needed to store necessary coefficients compared with a previous scheme (Lekien and Marsden, 2005 [1]). This method was accurate for the vast majority of the spectrometer volume, though special fits and representations were needed to improve the accuracy close to the magnet coils and along the toroidal axis.
Measurement and tricubic interpolation of the magnetic field for the OLYMPUS experiment
Energy Technology Data Exchange (ETDEWEB)
Bernauer, J.C. [Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA (United States); Diefenbach, J. [Hampton University, Hampton, VA (United States); Elbakian, G. [Alikhanyan National Science Laboratory (Yerevan Physics Institute), Yerevan (Armenia); Gavrilov, G. [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation); Goerrissen, N. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Hasell, D.K.; Henderson, B.S. [Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA (United States); Holler, Y. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Karyan, G. [Alikhanyan National Science Laboratory (Yerevan Physics Institute), Yerevan (Armenia); Ludwig, J. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Marukyan, H. [Alikhanyan National Science Laboratory (Yerevan Physics Institute), Yerevan (Armenia); Naryshkin, Y. [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation); O' Connor, C.; Russell, R.L.; Schmidt, A. [Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA (United States); Schneekloth, U. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Suvorov, K.; Veretennikov, D. [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation)
2016-07-01
The OLYMPUS experiment used a 0.3 T toroidal magnetic spectrometer to measure the momenta of outgoing charged particles. In order to accurately determine particle trajectories, knowledge of the magnetic field was needed throughout the spectrometer volume. For that purpose, the magnetic field was measured at over 36,000 positions using a three-dimensional Hall probe actuated by a system of translation tables. We used these field data to fit a numerical magnetic field model, which could be employed to calculate the magnetic field at any point in the spectrometer volume. Calculations with this model were computationally intensive; for analysis applications where speed was crucial, we pre-computed the magnetic field and its derivatives on an evenly spaced grid so that the field could be interpolated between grid points. We developed a spline-based interpolation scheme suitable for SIMD implementations, with a memory layout chosen to minimize space and optimize the cache behavior to quickly calculate field values. This scheme requires only one-eighth of the memory needed to store necessary coefficients compared with a previous scheme (Lekien and Marsden, 2005 [1]). This method was accurate for the vast majority of the spectrometer volume, though special fits and representations were needed to improve the accuracy close to the magnet coils and along the toroidal axis.
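The grid-based field interpolation described above can be approximated with SciPy's regular-grid interpolator. This is a stand-in, not the OLYMPUS SIMD scheme: the synthetic field and grid are assumptions, and `method='cubic'` requires SciPy ≥ 1.9.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic smooth "field" pre-computed on an evenly spaced grid,
# standing in for the measured toroidal field map.
ax = np.linspace(-1.0, 1.0, 21)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
B = np.sin(X) * np.cos(Y) * np.exp(-Z ** 2)

# Cubic interpolation between grid points (SciPy >= 1.9).
interp = RegularGridInterpolator((ax, ax, ax), B, method='cubic')

pt = np.array([[0.123, -0.456, 0.789]])
b_interp = float(interp(pt)[0])
b_exact = float(np.sin(pt[0, 0]) * np.cos(pt[0, 1]) * np.exp(-pt[0, 2] ** 2))
```

The paper's contribution is a memory layout and coefficient scheme tuned for SIMD evaluation; the numerical idea, evaluating a smooth field anywhere from values precomputed on a regular grid, is the same.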
Control theoretic splines optimal control, statistical, and path planning
Egerstedt, Magnus
2010-01-01
Splines, both interpolatory and smoothing, have a long and rich history that has largely been application driven. This book unifies these constructions in a comprehensive and accessible way, drawing from the latest methods and applications to show how they arise naturally in the theory of linear control systems. Magnus Egerstedt and Clyde Martin are leading innovators in the use of control theoretic splines to bring together many diverse applications within a common framework. In this book, they begin with a series of problems ranging from path planning to statistics to approximation.
International Nuclear Information System (INIS)
Yeşilkanat, Cafer Mert; Kobya, Yaşar; Taşkin, Halim; Çevik, Uğur
2015-01-01
In this study, the performance of geostatistical estimation methods is compared for investigating and mapping natural background radiation using the minimum number of data points. Artvin province, which has quite hilly terrain and a wide variety of soils and is located in the north-east of Turkey, was selected as the study area. The outdoor gamma dose rate (OGDR), an important determinant of the environmental radioactivity level, was measured at 204 stations. The spatial structure of OGDR was determined by anisotropic, isotropic, and residual variograms. Ordinary kriging (OK) and universal kriging (UK) interpolation estimates were calculated with the help of model parameters obtained from these variograms. In OK, calculations are based only on the positions of the sampling points, whereas in the UK technique, general soil groups and altitude values directly affecting OGDR are included in the calculations. When the two methods are evaluated based on their performance, the UK model (r = 0.88, p < 0.001) gives considerably better results than the OK model (r = 0.64, p < 0.001). In addition, the maps created at the end of the study illustrate that local changes are better reflected by the UK method than by the OK method, and its error variance is lower. - Highlights: • The spatial dispersion of gamma dose rates in Artvin, which possesses one of the roughest terrains in Turkey, was studied. • The performance of different geostatistical methods (OK and UK) for the dispersion of gamma dose rates was compared. • Estimates were calculated for non-sampled points using the geostatistical model, and the results were mapped. • The general radiological structure was determined in much less time and at lower cost compared with experimental methods. • When the theoretical methods are evaluated, UK gives more descriptive results than OK.
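A textbook sketch of ordinary kriging, the baseline method compared in this study, is given below. The spherical variogram, its parameters, and the synthetic sample data are illustrative assumptions, not the study's fitted model; universal kriging would add drift terms (here, soil group and altitude) to the system.

```python
import numpy as np

def spherical(h, nugget, sill, rng_):
    # Spherical variogram model: gamma(0) = 0, rising to the sill at range rng_.
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h >= rng_, sill, np.where(h == 0.0, 0.0, g))

def ordinary_kriging(xy, z, xy0, vario):
    """Solve the ordinary kriging system for one prediction point xy0.
    Returns the BLUE estimate and the kriging variance."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = vario(d)
    A[n, :] = 1.0                  # unbiasedness constraint:
    A[:, n] = 1.0                  # weights must sum to one
    A[n, n] = 0.0                  # Lagrange multiplier slot
    b = np.append(vario(np.linalg.norm(xy - xy0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)
    est = w[:n] @ z
    var = w @ b                    # kriging variance (includes the multiplier)
    return est, var

rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 10.0, size=(30, 2))         # hypothetical station positions
z = np.sin(xy[:, 0] / 3.0) + 0.1 * xy[:, 1]       # hypothetical dose-rate field
vario = lambda h: spherical(h, nugget=0.01, sill=1.0, rng_=8.0)
est, var = ordinary_kriging(xy, z, np.array([5.0, 5.0]), vario)
```

In practice the variogram parameters (nugget, sill, range) are fitted to the empirical variogram of the 204 measured stations before any prediction is made.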
International Nuclear Information System (INIS)
Blok, M. de; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica
1990-01-01
This report describes a time interpolator with which time differences can be measured using digital and analog techniques. It covers a maximum measuring time of 6.4 μs with a resolution of 100 ps. Use is made of Emitter Coupled Logic (ECL) and analog high-frequency techniques. The difficulty accompanying the use of ECL logic is keeping the interconnections as short as possible and properly terminating the outputs in order to avoid reflections. The digital part of the time interpolator consists of a continuously running clock and logic which converts an input signal into a start and a stop signal. The analog part consists of a Time to Amplitude Converter (TAC) and an analog-to-digital converter. (author). 3 refs.; 30 figs
Distance-two interpolation for parallel algebraic multigrid
International Nuclear Information System (INIS)
Sterck, H de; Falgout, R D; Nolting, J W; Yang, U M
2007-01-01
In this paper we study the use of long-distance interpolation methods with the low-complexity coarsening algorithm PMIS. AMG performance and scalability are compared for classical as well as long-distance interpolation methods on parallel computers. It is shown that the increased interpolation accuracy largely restores the scalability of AMG convergence factors for PMIS-coarsened grids, and that in combination with complexity-reducing methods, such as interpolation truncation, one obtains a class of parallel AMG methods that enjoy excellent scalability properties on large parallel computers
DEFF Research Database (Denmark)
Engell-Nørregård, Morten Pol; Erleben, Kenny
We present a method for simulating the active contraction of deformable models, usable for interactive animation of soft deformable objects. We present a novel physical principle as the governing equation for the coupling between the low dimensional 1D activation force model and the higher...
Directory of Open Access Journals (Sweden)
Kuczyński Paweł
2014-06-01
The paper deals with the solution of radiative heat transfer problems in enclosures filled with a nonparticipating medium, using ray tracing on hierarchical ortho-Cartesian meshes. The idea behind the approach is that radiative heat transfer problems can be solved on much coarser grids than their counterparts from computational fluid dynamics (CFD). The resulting code is designed as an add-on to OpenFOAM, an open-source CFD program. An ortho-Cartesian mesh involving boundary elements is created based upon the CFD mesh. Parametric non-uniform rational basis spline (NURBS) surfaces are used to define the boundaries of the enclosure, allowing for domains of complex shape. An algorithm for determining random, uniformly distributed locations of rays leaving the NURBS surfaces is described. The paper presents results of test cases assuming gray diffusive walls. In the current version of the model the radiation is not absorbed within gases. However, the ultimate aim of the work is to upgrade the functionality of the model to problems in absorbing, emitting and scattering media, iteratively projecting the results of the radiative analysis on the CFD mesh and the CFD solution on the radiative mesh.
The semi-Lagrangian method on curvilinear grids
Directory of Open Access Journals (Sweden)
Hamiaz Adnane
2016-09-01
We study the semi-Lagrangian method on curvilinear grids. The classical backward semi-Lagrangian method [1] preserves constant states but is not mass conservative. Natural reconstruction of the field nevertheless permits at least first-order-in-time conservation of mass, even if the spatial error is large. Interpolation is performed with classical cubic splines and also with cubic Hermite interpolation with arbitrary reconstruction order of the derivatives. High odd-order reconstruction of the derivatives is shown to be a good substitute for cubic splines, which do not behave very well as the time step tends to zero. A conservative semi-Lagrangian scheme along the lines of [2] is then described; here conservation of mass is automatically satisfied and constant states are shown to be preserved up to first order in time.
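The classical backward semi-Lagrangian step with cubic-spline interpolation can be sketched in one dimension (a simplification of the paper's curvilinear setting; the periodic advection problem, grid, and step count below are illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def semi_lagrangian_step(f, x, a, dt):
    """One backward semi-Lagrangian step for df/dt + a df/dx = 0 on a
    periodic domain: trace each grid node back along the characteristic
    and interpolate the old field there with a periodic cubic spline."""
    L = x[-1] - x[0]
    spl = CubicSpline(x, f, bc_type='periodic')
    feet = x[0] + np.mod(x - a * dt - x[0], L)   # departure points, wrapped
    return spl(feet)

x = np.linspace(0.0, 2 * np.pi, 129)   # last node duplicates the first
f = np.exp(-10 * (x - np.pi) ** 2)     # Gaussian pulse
f[-1] = f[0]                           # enforce exact periodicity
g = f.copy()
for _ in range(80):                    # advect exactly one full period
    g = semi_lagrangian_step(g, x, a=1.0, dt=2 * np.pi / 80)
```

Constant states are preserved exactly (a spline through constant data is that constant), which matches the property cited in the abstract; mass, by contrast, is not exactly conserved by this backward scheme.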
Yeşilkanat, Cafer Mert; Kobya, Yaşar; Taşkın, Halim; Çevik, Uğur
2017-09-01
The aim of this study was to determine the spatial risk dispersion of the ambient gamma dose rate (AGDR) by using both artificial neural network (ANN) and fuzzy logic (FL) methods, compare the performances of the methods, make dose estimations for intermediate stations with no previous measurements, and create dose-rate risk maps of the study area. In order to determine the dose distribution using artificial neural networks, two main network families and five different network structures were used: feed-forward ANNs, namely the multi-layer perceptron (MLP), radial basis function neural network (RBFNN) and quantile regression neural network (QRNN), and recurrent ANNs, namely Jordan networks (JN) and Elman networks (EN). In the evaluation of the estimation performance obtained for the test data, all models appear to give similar results. According to the cross-validation results obtained for explaining the AGDR distribution, Pearson's r coefficients were calculated as 0.94, 0.91, 0.89, 0.91, 0.91 and 0.92 and RMSE values were calculated as 34.78, 43.28, 63.92, 44.86, 46.77 and 37.92 for MLP, RBFNN, QRNN, JN, EN and FL, respectively. In addition, spatial risk maps showing the distribution of AGDR over the study area were created by all models and the results were compared with the geological, topological and soil structure. Copyright © 2017 Elsevier Ltd. All rights reserved.
Göl, Ceyhun; Bulut, Sinan; Bolat, Ferhat
2017-10-01
The purpose of this research is to compare the spatial variability of soil organic carbon (SOC) in four adjacent land uses, including a cultivated area, a grassland area, a plantation area and a natural forest area, in the semi-arid Black Sea hinterland of Turkey. Soil properties, including total nitrogen, SOC, soil organic matter, and bulk density, were measured on a grid with a 50 m sampling distance in the topsoil (0-15 cm depth). Accordingly, a total of 120 samples were taken from the four adjacent land uses. Data were analyzed using geostatistical methods. The methods used were block kriging (BK), co-kriging (CK) with organic matter, total nitrogen and bulk density as auxiliary variables, and inverse distance weighting (IDW) with powers of 1, 2 and 4. The methods were compared using performance criteria that included root mean square error (RMSE), mean absolute error (MAE) and the coefficient of correlation (r). The one-way ANOVA test showed that the differences between the natural (0.6653 ± 0.2901) and plantation forest (0.7109 ± 0.2729) areas and the grassland (1.3964 ± 0.6828) and cultivated (1.5851 ± 0.5541) areas were statistically significant at the 0.05 level (F = 28.462). The best model for describing the spatial variation of SOC was CK, with the lowest error criteria (RMSE = 0.3342, MAE = 0.2292) and the highest coefficient of correlation (r = 0.84). The spatial structure of SOC could be well described by the spherical model. The nugget effect indicated that SOC was moderately spatially dependent in the study area. The error distributions of the model showed that the improved model was unbiased in predicting the spatial distribution of SOC. This study's results revealed that an explanatory variable linked to SOC increased the success of spatial interpolation methods. In subsequent studies, this should be taken into account to reach more accurate outputs.
Approximate Implicitization of Parametric Curves Using Cubic Algebraic Splines
Directory of Open Access Journals (Sweden)
Xiaolei Zhang
2009-01-01
This paper presents an algorithm for the approximate implicitization of planar parametric curves using cubic algebraic splines. It applies piecewise cubic algebraic curves to give a globally G2-continuous approximation to planar parametric curves. An approximation error bound for the approximate implicitization of rational curves is given. Several examples are provided to show that the proposed method is flexible and efficient.
Spline function fit for multi-sets of correlative data
International Nuclear Information System (INIS)
Liu Tingjin; Zhou Hongmo
1992-01-01
A spline fit method for multiple sets of correlated data is developed. The properties of fitting correlated data are investigated. The 23Na(n,2n) cross-section data are fitted for the cases with and without correlations
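A generic sketch of the underlying idea, fitting a spline to data with a full covariance matrix by generalized least squares, is given below. This is not the authors' algorithm: the data, the correlated-error model, and the knot placement are assumptions, and `BSpline.design_matrix` requires SciPy ≥ 1.8.

```python
import numpy as np
from scipy.interpolate import BSpline

def gls_spline_fit(x, y, cov, knots, k=3):
    """Generalized least-squares cubic B-spline fit that honors the full
    data covariance matrix (i.e., correlated uncertainties):
    minimize (y - B c)^T cov^{-1} (y - B c) over the coefficients c."""
    t = np.r_[[x[0]] * (k + 1), knots, [x[-1]] * (k + 1)]  # clamped knot vector
    B = BSpline.design_matrix(x, t, k).toarray()
    W = np.linalg.inv(cov)
    c = np.linalg.solve(B.T @ W @ B, B.T @ W @ y)
    return BSpline(t, c, k)

x = np.linspace(0.01, 3.99, 25)
y = np.exp(-x) + 0.3                       # synthetic "cross section"
# Correlated errors: a common 2% normalization component plus
# independent point-to-point noise.
cov = 0.0004 * np.outer(y, y) + np.diag(np.full(25, 1e-4))
spl = gls_spline_fit(x, y, cov, knots=[1.0, 2.0, 3.0])
```

Ignoring the off-diagonal covariance terms (ordinary weighted least squares) generally changes both the fitted curve and, more importantly, its estimated uncertainty, which is why correlated fitting matters for evaluated nuclear data.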
Nair, Abilash R.
Recent mechanical characterization experiments with pultruded E-glass/polypropylene (PP) and compression-molded E-glass/Nylon-6 composite samples, with 3-4 weight% nanoclay and with the baseline polymer (polymer without nanoclay), confirmed significant improvements in compressive strength (~122%) and shear strength (~60%) in the nanoclay-modified nanocomposites in comparison with the baseline properties. Uniaxial tensile tests showed a small increase in tensile strength (~3.4%) at 3 wt% nanoclay loading. While the synergistic reinforcing influence of nanoparticle reinforcement is obvious, a simple rule-of-mixtures approach fails to quantify the dramatic increase in mechanical properties. Consequently, there is an immediate need to investigate and understand the mechanisms at the nanoscale that are responsible for such unprecedented strength enhancements. In this work, an innovative and effective method to model nano-structured components in a thermoplastic polymer matrix is proposed. Effort is directed towards finding fundamental answers to the reasons for the significant changes in mechanical properties of nanoparticle-reinforced thermoplastic composites. This research pursues a multiscale modeling approach in which (a) a concurrent simulation scheme is developed to visualize the atomistic behavior of polymer molecules as a function of continuum-scale loading conditions and (b) a novel nanoscale damage mechanics model is proposed to capture the constitutive behavior of polymer nanocomposites (PNC). The proposed research will contribute towards the understanding of advanced nanostructured composite materials, which should subsequently benefit the composites manufacturing industry.
Performance of various mathematical methods for calculation of radioimmunoassay results
International Nuclear Information System (INIS)
Sandel, P.; Vogt, W.
1977-01-01
Interpolation and regression methods are available for the computer-aided determination of radioimmunoassay end results. We compared the performance of eight algorithms (weighted and unweighted linear logit-log regression, quadratic logit-log regression, Rodbard's logistic model in weighted and unweighted form, smoothing spline interpolation with a large and a small smoothing factor, and polygonal interpolation) on the basis of three radioimmunoassays with different reference curve characteristics (digoxin, estriol, human chorionic somatomammotropin = HCS). Great store was set by the accuracy of the approximation at the intermediate points on the curve, i.e. those points that lie midway between two standard concentrations. These concentrations were obtained by weighing and inserted as unknown samples. In the case of digoxin and estriol the polygonal interpolation provided the best results, while the weighted logit-log regression proved superior in the case of HCS. (orig.)
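Rodbard's logistic model, one of the eight algorithms compared, is the familiar four-parameter logistic (4PL) calibration curve. The sketch below fits a synthetic standard curve and reads an unknown sample off it; the concentrations, counts, and starting values are assumptions for illustration, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def four_pl(x, a, b, c, d):
    # Four-parameter logistic: response vs. concentration
    # (a = zero-dose response, d = infinite-dose response,
    #  c = midpoint concentration, b = slope factor).
    return d + (a - d) / (1.0 + (x / c) ** b)

# Synthetic standard curve: counts measured at known concentrations.
conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
counts = four_pl(conc, 8000.0, 1.2, 4.0, 500.0)

popt, _ = curve_fit(four_pl, conc, counts,
                    p0=[9000.0, 1.0, 3.0, 400.0], maxfev=10000)

def invert(y):
    # Read an unknown sample's concentration off the fitted curve.
    return brentq(lambda x: four_pl(x, *popt) - y, 0.05, 100.0)

# A hypothetical unknown sample whose true concentration is 3.0.
unknown = invert(four_pl(3.0, 8000.0, 1.2, 4.0, 500.0))
```

The intermediate-point accuracy criterion used in the paper corresponds exactly to this inversion step: how well the fitted curve recovers concentrations that lie between the calibration standards.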
Directory of Open Access Journals (Sweden)
Marta Béjar-Pizarro
2016-11-01
Land subsidence resulting from groundwater extraction is a global phenomenon adversely affecting many regions worldwide. Understanding the governing processes and mitigating the associated hazards require knowing the spatial distribution of the implicated factors (piezometric levels, lithology, ground deformation), usually only known at discrete locations. Here, we propose a methodology based on the Kriging with External Drift (KED) approach to interpolate sparse point measurements of variables influencing land subsidence using high-density InSAR measurements. In our study, located in the Alto Guadalentín basin, SE Spain, these variables are GPS vertical velocities and the thickness of compressible soils. First, we estimate InSAR and GPS rates of subsidence covering the periods 2003-2010 and 2004-2013, respectively. Then, we apply the KED method to the discrete variables. The resulting continuous GPS velocity map shows maximum subsidence rates of 13 cm/year in the center of the basin, in agreement with previous studies. The compressible-deposit thickness map is significantly improved. We also test the coherence of Sentinel-1 data in the study region and evaluate the applicability of this methodology with the new satellite, which will improve the monitoring of aquifer-related subsidence and the mapping of the variables governing this phenomenon.
Construction of local integro quintic splines
Directory of Open Access Journals (Sweden)
T. Zhanlav
2016-06-01
In this paper, we show that integro quintic splines can be constructed locally without solving any systems of equations. The new construction does not require any additional end conditions. By virtue of these advantages the proposed algorithm is easy to implement and effective. At the same time, the local integro quintic splines possess approximation properties as good as those of the integro quintic splines. We prove that our local integro quintic spline has superconvergence properties at the knots for the first and third derivatives: the orders of convergence at the knots are six (not five) for the first derivative and four (not three) for the third derivative.
Multiscale empirical interpolation for solving nonlinear PDEs
Calo, Victor M.
2014-12-01
In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system at a computational cost proportional to the size of the coarse-scale problem rather than the fully resolved fine-scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide the computation of the nonlinear function into coarse regions; (2) evaluate the contributions of the nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's method and fully implicit time-marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
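The core empirical interpolation step, reconstructing a nonlinear function everywhere from its values at a few greedily selected points, can be sketched with the closely related discrete empirical interpolation method (DEIM). This is an illustration of the general technique, not the paper's multiscale variant; the grid, snapshot family, and basis size are assumptions.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection from an orthonormal basis U (n x m)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Residual of the j-th basis vector w.r.t. the current interpolant.
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Snapshots of a parameterized nonlinear function on a fine grid
# (computed offline, as in the abstract).
x = np.linspace(0.0, 1.0, 400)
mus = np.linspace(0.2, 0.8, 40)
snaps = np.column_stack([np.exp(-50 * (x - mu) ** 2) for mu in mus])
U, _, _ = np.linalg.svd(snaps, full_matrices=False)
U = U[:, :15]                      # reduced basis
P = deim_indices(U)                # interpolation (sampling) points

def eim_approx(f):
    # Reconstruct f on the whole grid from its values at the
    # selected points only: f ~ U (U[P])^{-1} f[P].
    return U @ np.linalg.solve(U[P, :], f[P])

f = np.exp(-50 * (x - 0.37) ** 2)  # new parameter value, not a snapshot
err = float(np.max(np.abs(eim_approx(f) - f)))
```

The payoff is the one the abstract claims: the nonlinear function only needs to be evaluated at the selected points (here 15 of 400), so the online cost scales with the reduced problem size.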
Interpolation of quasi-Banach spaces
International Nuclear Information System (INIS)
Tabacco Vignati, A.M.
1986-01-01
This dissertation presents a method of complex interpolation for families of quasi-Banach spaces. This method generalizes the theory for families of Banach spaces introduced by others. Intermediate spaces in several particular cases are characterized using different approaches. The situation when all the spaces have finite dimensions is studied first. The second chapter contains the definitions and main properties of the new interpolation spaces, and an example concerning the Schatten ideals associated with a separable Hilbert space. The case of L^p spaces follows from the maximal operator theory contained in Chapter III. Also introduced is a different method of interpolation for quasi-Banach lattices of functions, and conditions are given to guarantee that the two techniques yield the same result. Finally, the last chapter contains a different, and more direct, approach to the case of Hardy spaces.
Interpolation of rational matrix functions
Ball, Joseph A; Rodman, Leiba
1990-01-01
This book aims to present the theory of interpolation for rational matrix functions as a recently matured independent mathematical subject with its own problems, methods and applications. The authors decided to start working on this book during the regional CBMS conference in Lincoln, Nebraska organized by F. Gilfeather and D. Larson. The principal lecturer, J. William Helton, presented ten lectures on operator and systems theory and the interplay between them. The conference was very stimulating and helped us to decide that the time was ripe for a book on interpolation for matrix valued functions (both rational and non-rational). When the work started and the first partial draft of the book was ready it became clear that the topic is vast and that the rational case by itself with its applications is already enough material for an interesting book. In the process of writing the book, methods for the rational case were developed and refined. As a result we are now able to present the rational case as an indepe...
Interpolating string field theories
International Nuclear Information System (INIS)
Zwiebach, B.
1992-01-01
This paper reports that a minimal area problem imposing different length conditions on open and closed curves is shown to define a one-parameter family of covariant open-closed quantum string field theories. These interpolate from a recently proposed factorizable open-closed theory up to an extended version of Witten's open string field theory capable of incorporating on shell closed strings. The string diagrams of the latter define a new decomposition of the moduli spaces of Riemann surfaces with punctures and boundaries based on quadratic differentials with both first order and second order poles
The Diffraction Response Interpolation Method
DEFF Research Database (Denmark)
Jespersen, Søren Kragh; Wilhjelm, Jens Erik; Pedersen, Peder C.
1998-01-01
Computer modeling of the output voltage in a pulse-echo system is computationally very demanding, particularly when considering reflector surfaces of arbitrary geometry. A new, efficient computational tool, the diffraction response interpolation method (DRIM), for modeling of reflectors in a fluid ...
Characterizing vaccine-associated risks using cubic smoothing splines.
Brookhart, M Alan; Walker, Alexander M; Lu, Yun; Polakowski, Laura; Li, Jie; Paeglow, Corrie; Puenpatom, Tosmai; Izurieta, Hector; Daniel, Gregory W
2012-11-15
Estimating risks associated with the use of childhood vaccines is challenging. The authors propose a new approach for studying short-term vaccine-related risks. The method uses a cubic smoothing spline to flexibly estimate the daily risk of an event after vaccination. The predicted incidence rates from the spline regression are then compared with the expected rates under a log-linear trend that excludes the days surrounding vaccination. The 2 models are then used to estimate the excess cumulative incidence attributable to the vaccination during the 42-day period after vaccination. Confidence intervals are obtained using a model-based bootstrap procedure. The method is applied to a study of known effects (positive controls) and expected noneffects (negative controls) of the measles, mumps, and rubella and measles, mumps, rubella, and varicella vaccines among children who are 1 year of age. The splines revealed well-resolved spikes in fever, rash, and adenopathy diagnoses, with the maximum incidence occurring between 9 and 11 days after vaccination. For the negative control outcomes, the spline model yielded a predicted incidence more consistent with the modeled day-specific risks, although there was evidence of increased risk of diagnoses of congenital malformations after vaccination, possibly because of a "provider visit effect." The proposed approach may be useful for vaccine safety surveillance.
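The spline half of the approach can be sketched with SciPy's smoothing spline on made-up daily counts; the log-linear baseline model and the bootstrap confidence intervals of the paper are omitted, and the flat baseline and smoothing level here are illustrative choices only.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
days = np.arange(0, 43).astype(float)           # 0..42 days after vaccination
baseline = 5.0                                  # made-up flat expected daily count
bump = 12.0 * np.exp(-0.5 * ((days - 10.0) / 1.5) ** 2)  # "fever" spike near day 10
counts = rng.poisson(baseline + bump).astype(float)

# Cubic smoothing spline of the daily risk; s sets the roughness penalty
spl = UnivariateSpline(days, counts, k=3, s=float(len(days)) * counts.var() * 0.05)
fitted = spl(days)

peak_day = int(days[np.argmax(fitted)])         # day of maximum fitted incidence
excess = float(np.sum(fitted - baseline))       # crude excess incidence estimate
```

With the simulated spike centred on day 10, the fitted spline peaks in the same neighbourhood as the 9-11 day window the study reports for fever and rash.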
A Stochastic Wavelet Finite Element Method for 1D and 2D Structures Analysis
Xingwu Zhang; Xuefeng Chen; Zhibo Yang; Bing Li; Zhengjia He
2014-01-01
A stochastic finite element method based on B-spline wavelet on the interval (BSWI-SFEM) is presented for static analysis of 1D and 2D structures in this paper. Instead of conventional polynomial interpolation, the scaling functions of BSWI are employed to construct the displacement field. By means of virtual work principle and BSWI, the wavelet finite elements of beam, plate, and plane rigid frame are obtained. Combining the Monte Carlo method and the constructed BSWI elements together, the...
Experimental Performance of Spatial Interpolators for Ground Water Salinity
International Nuclear Information System (INIS)
Alsaaran, Nasser A.
2005-01-01
Mapping groundwater qualities requires either sampling on a fine regular grid or spatial interpolation. The latter is usually used because the cost of the former is prohibitive. Experimental performance of five spatial interpolators for groundwater salinity was investigated using cross validation. The methods included ordinary kriging (OK), lognormal kriging, inverse distance, inverse squared distance and inverse cubed distance. The results show that OK outperformed other interpolators in terms of bias. Interpolation accuracy based on mean absolute difference criterion is relatively high for all interpolators with small difference among them. While three-dimensional surfaces produced by all inverse distance based procedures are dominated by isolated peaks and pits, surfaces produced by kriging are free from localized pits and peaks, and show areas of low groundwater salinity as elongated basins and areas of high salinity as ridges, which make regional trends easy to identify. Considering all criteria, OK was judged to be the most suitable spatial interpolator for groundwater salinity in this study. (author)
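A minimal sketch of the comparison set-up: inverse-distance interpolation at powers 1, 2 and 3, scored by leave-one-out cross-validation with the mean absolute difference criterion. The synthetic "salinity" field is made up, and the kriging variants are omitted.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation; power 1, 2, 3 give the
    inverse, inverse-squared and inverse-cubed distance variants."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    w /= w.sum(axis=1, keepdims=True)
    return w @ z_known

def loo_mad(xy, z, power):
    """Leave-one-out cross validation, mean absolute difference criterion."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        pred = idw(xy[mask], z[mask], xy[i:i + 1], power)[0]
        errs.append(abs(pred - z[i]))
    return float(np.mean(errs))

rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 10.0, size=(60, 2))                        # made-up well locations
z = np.sin(xy[:, 0]) + 0.1 * xy[:, 1] + rng.normal(0, 0.05, 60)  # salinity-like field

scores = {p: loo_mad(xy, z, p) for p in (1.0, 2.0, 3.0)}
best_power = min(scores, key=scores.get)
```

The same cross-validation loop works for any interpolator, which is how the kriging and inverse-distance methods can be put on a common footing.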
Steady State Stokes Flow Interpolation for Fluid Control
DEFF Research Database (Denmark)
Bhatacharya, Haimasree; Nielsen, Michael Bang; Bridson, Robert
2012-01-01
Fluid control methods often require surface velocities interpolated throughout the interior of a shape to use the velocity as a feedback force or as a boundary condition. Prior methods for interpolation in computer graphics — velocity extrapolation in the normal direction and potential flow...
INTERPOL's Surveillance Network in Curbing Transnational Terrorism
Gardeazabal, Javier; Sandler, Todd
2015-01-01
Abstract This paper investigates the role that International Criminal Police Organization (INTERPOL) surveillance—the Mobile INTERPOL Network Database (MIND) and the Fixed INTERPOL Network Database (FIND)—played in the War on Terror since its inception in 2005. MIND/FIND surveillance allows countries to screen people and documents systematically at border crossings against INTERPOL databases on terrorists, fugitives, and stolen and lost travel documents. Such documents have been used in the past by terrorists to transit borders. By applying methods developed in the treatment‐effects literature, this paper establishes that countries adopting MIND/FIND experienced fewer transnational terrorist attacks than they would have had they not adopted MIND/FIND. Our estimates indicate that, on average, from 2008 to 2011, adopting and using MIND/FIND results in 0.5 fewer transnational terrorist incidents each year per 100 million people. Thus, a country like France with a population just above 64 million people in 2008 would have 0.32 fewer transnational terrorist incidents per year owing to its use of INTERPOL surveillance. This amounts to a sizeable average proportional reduction of about 30 percent.
Modelling Childhood Growth Using Fractional Polynomials and Linear Splines
Tilling, Kate; Macdonald-Wallis, Corrie; Lawlor, Debbie A.; Hughes, Rachael A.; Howe, Laura D.
2014-01-01
Background There is increasing emphasis in medical research on modelling growth across the life course and identifying factors associated with growth. Here, we demonstrate multilevel models for childhood growth either as a smooth function (using fractional polynomials) or a set of connected linear phases (using linear splines). Methods We related parental social class to height from birth to 10 years of age in 5,588 girls from the Avon Longitudinal Study of Parents and Children (ALSPAC). Multilevel fractional polynomial modelling identified the best-fitting model as being of degree 2 with powers of the square root of age, and the square root of age multiplied by the log of age. The multilevel linear spline model identified knot points at 3, 12 and 36 months of age. Results Both the fractional polynomial and linear spline models show an initially fast rate of growth, which slowed over time. Both models also showed that there was a disparity in length between manual and non-manual social class infants at birth, which decreased in magnitude until approximately 1 year of age and then increased. Conclusions Multilevel fractional polynomials give a more realistic smooth function, and linear spline models are easily interpretable. Each can be used to summarise individual growth trajectories and their relationships with individual-level exposures. PMID:25413651
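The linear-spline part of such a model reduces to an ordinary design matrix with hinge terms at the knots. A minimal single-level sketch with the knots reported in the abstract (3, 12 and 36 months) on synthetic data; the multilevel (random-effects) structure of the paper is omitted.

```python
import numpy as np

def linear_spline_design(age, knots=(3.0, 12.0, 36.0)):
    """Design matrix for a connected-linear-phases model: intercept, initial
    slope, and a slope change at each knot via hinge terms (age - k)+."""
    cols = [np.ones_like(age), age]
    cols += [np.clip(age - k, 0.0, None) for k in knots]
    return np.column_stack(cols)

# Synthetic "length" trajectory that is exactly piecewise linear in months
age = np.linspace(0.0, 120.0, 241)             # birth to 10 years, in months
length = (50.0 + 3.0 * age
          - 2.0 * np.clip(age - 3.0, 0.0, None)
          - 0.6 * np.clip(age - 12.0, 0.0, None)
          - 0.25 * np.clip(age - 36.0, 0.0, None))
beta, *_ = np.linalg.lstsq(linear_spline_design(age), length, rcond=None)
# beta recovers intercept, first-phase slope, and the three slope changes
```

Because the coefficients are slopes and slope changes per month, this parameterization is directly interpretable, which is the advantage the authors note for linear splines.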
Numerical Solutions for Convection-Diffusion Equation through Non-Polynomial Spline
Directory of Open Access Journals (Sweden)
Ravi Kanth A.S.V.
2016-01-01
Full Text Available In this paper, numerical solutions for the convection-diffusion equation via non-polynomial splines are studied. We propose an implicit method based on non-polynomial spline functions for solving the convection-diffusion equation. The method is proven to be unconditionally stable by using the Von Neumann technique. Numerical results are illustrated to demonstrate the efficiency and stability of the proposed method.
SPLINE-FUNCTIONS IN THE TASK OF THE FLOW AIRFOIL PROFILE
Directory of Open Access Journals (Sweden)
Mikhail Lopatjuk
2013-12-01
Full Text Available The method and the algorithm for solving the streamlining problem are presented. The Neumann boundary problem is reduced to the solution of integral equations with given boundary conditions using cubic spline-functions.
The Use of Wavelets in Image Interpolation: Possibilities and Limitations
Directory of Open Access Journals (Sweden)
M. Grgic
2007-12-01
Full Text Available Discrete wavelet transform (DWT) can be used in various applications, such as image compression and coding. In this paper we examine how DWT can be used in image interpolation. Afterwards the proposed method is compared with two other traditional interpolation methods. For the case of a magnified image achieved by interpolation, the original image is unknown and there is no perfect way to judge the magnification quality. A common approach is to start with an original image, generate a lower resolution version of the original image by downscaling, and then use different interpolation methods to magnify the low resolution image. After that the original and magnified images are compared to evaluate the difference between them using different picture quality measures. Our results show that the comparison of image interpolation methods depends on the downscaling technique, image contents and quality metric. For a fair comparison all these parameters need to be considered.
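The evaluation protocol described here (downscale, re-magnify, compare with a quality metric) can be sketched in a few lines. The test image, scale factor and PSNR metric are illustrative choices, and SciPy's `zoom` stands in for the interpolation methods under comparison (it is not the DWT method of the paper).

```python
import numpy as np
from scipy.ndimage import zoom

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio, a common picture quality measure."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic smooth grayscale test image with values in [0, 1]
yy, xx = np.mgrid[0:256, 0:256] / 255.0
img = 0.5 + 0.5 * np.sin(6 * xx) * np.cos(4 * yy)

low = img[::2, ::2]                      # downscaling (decimation)
up_nearest = zoom(low, 2, order=0)       # nearest-neighbour magnification
up_bilinear = zoom(low, 2, order=1)      # bilinear magnification

p_nearest = psnr(img, up_nearest)
p_bilinear = psnr(img, up_bilinear)
```

As the abstract cautions, the ranking produced by such a comparison depends on the downscaling operator and the chosen metric, not only on the interpolators themselves.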
Higher-order numerical solutions using cubic splines. [for partial differential equations
Rubin, S. G.; Khosla, P. K.
1975-01-01
A cubic spline collocation procedure has recently been developed for the numerical solution of partial differential equations. In the present paper, this spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a non-uniform mesh and overall fourth-order accuracy for a uniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, will be presented for several model problems.
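The fourth-order accuracy on a uniform mesh is easy to verify empirically with a clamped cubic spline interpolant. SciPy's `CubicSpline` here stands in for the collocation procedure of the paper, and the test function is an arbitrary smooth choice: halving the mesh width should divide the maximum error by about 16.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def max_err(n):
    """Max interpolation error of a clamped cubic spline on n intervals."""
    x = np.linspace(0.0, 1.0, n + 1)
    # Clamped ends: supply the exact derivatives pi*cos(pi*x) at x = 0 and 1
    s = CubicSpline(x, np.sin(np.pi * x), bc_type=((1, np.pi), (1, -np.pi)))
    xf = np.linspace(0.0, 1.0, 2001)
    return float(np.max(np.abs(s(xf) - np.sin(np.pi * xf))))

e_coarse, e_fine = max_err(16), max_err(32)
order = float(np.log2(e_coarse / e_fine))   # observed convergence order, ~4
```

The same grid-refinement test is the standard way to confirm the overall order of any such scheme on a model problem.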
Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines
Tan, Yunhao; Hua, Jing; Qin, Hong
2009-01-01
In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines which can represent with accuracy geometric, material, and other properties of the object simultaneously. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior because it can unify the geometric and material properties in the simulation. The visualization can be directly computed from the object’s geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation without interpolation or resampling. We have applied the framework for biomechanic simulation of brain deformations, such as brain shifting during the surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and the real biomechanic experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636
Meshing Force of Misaligned Spline Coupling and the Influence on Rotor System
Directory of Open Access Journals (Sweden)
Guang Zhao
2008-01-01
Full Text Available The meshing force of a misaligned spline coupling is derived, the dynamic equation of the rotor-spline coupling system is established based on finite element analysis, and the influence of the meshing force on the rotor-spline coupling system is simulated by a numerical integration method. According to the theoretical analysis, the meshing force of a spline coupling is related to the coupling parameters, misalignment, transmitted torque, static misalignment, dynamic vibration displacement, and so on. The meshing force increases nonlinearly with increasing spline thickness and static misalignment or decreasing alignment meshing distance (AMD). The stiffness of the coupling relates to the dynamic vibration displacement and static misalignment, and is not a constant. Dynamic behaviors of the rotor-spline coupling system reveal the following: the 1X rotating speed is the main response frequency of the system when there is no misalignment, while the 2X rotating speed appears when misalignment is present. Moreover, when misalignment increases, vibration of the system gets intricate; the shaft orbit departs from the origin, and the magnitudes of all frequencies increase. The research results can provide important criteria for both the optimization design of spline couplings and the troubleshooting of rotor systems.
Visualizing and Understanding the Components of Lagrange and Newton Interpolation
Yang, Yajun; Gordon, Sheldon P.
2016-01-01
This article takes a close look at Lagrange and Newton interpolation by graphically examining the component functions of each of these formulas. Although interpolation methods are often considered simply to be computational procedures, we demonstrate how the components of the polynomial terms in these formulas provide insight into where these…
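The component functions discussed here are easy to compute directly. A small sketch of the Lagrange form, where each basis polynomial L_j equals 1 at its own node and 0 at all the others, so the interpolant is just a data-weighted sum of the components:

```python
import numpy as np

def lagrange_basis(xs, j, x):
    """The j-th Lagrange component L_j(x): 1 at xs[j], 0 at the other nodes."""
    L = np.ones_like(x, dtype=float)
    for m, xm in enumerate(xs):
        if m != j:
            L *= (x - xm) / (xs[j] - xm)
    return L

def lagrange_interp(xs, ys, x):
    """Sum of the components, each weighted by its data value."""
    return sum(ys[j] * lagrange_basis(xs, j, x) for j in range(len(xs)))

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = xs ** 3 - 2 * xs                    # cubic data: 4-point interpolant is exact
x = np.linspace(0.0, 3.0, 7)
p = lagrange_interp(xs, ys, x)
```

Plotting each `lagrange_basis(xs, j, x)` separately reproduces the kind of component-by-component picture the article uses to build insight into the formula.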
Optimal interpolation schemes for particle tracking in turbulence
van Hinsberg, M.A.T.; ten Thije Boonkkamp, J.H.M.; Toschi, F.; Clercx, H.J.H.
2013-01-01
An important aspect in numerical simulations of particle-laden turbulent flows is the interpolation of the flow field needed for the computation of the Lagrangian trajectories. The accuracy of the interpolation method has direct consequences for the acceleration spectrum of the fluid particles and
A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY
The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...
A disposition of interpolation techniques
Knotters, M.; Heuvelink, G.B.M.
2010-01-01
A large collection of interpolation techniques is available for application in environmental research. To help environmental scientists in choosing an appropriate technique a disposition is made, based on 1) applicability in space, time and space-time, 2) quantification of accuracy of interpolated
Numerical solution of system of boundary value problems using B-spline with free parameter
Gupta, Yogesh
2017-01-01
This paper deals with a method of B-spline solution for a system of boundary value problems. Differential equations are useful in various fields of science and engineering. Some interesting real life problems involve more than one unknown function. These result in systems of simultaneous differential equations. Such systems have been applied to many problems in mathematics, physics, engineering, etc. In the present paper, B-spline and B-spline-with-free-parameter methods for the solution of a linear system of second-order boundary value problems are presented. The methods utilize the values of the cubic B-spline and its derivatives at nodal points together with the equations of the given system and boundary conditions, resulting in a linear matrix equation.
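The nodal values of the cubic B-spline that such collocation methods assemble into the linear matrix equation can be sketched directly. This is the generic cardinal cubic B-spline on a uniform grid, not the free-parameter variant of the paper:

```python
import numpy as np

def cubic_bspline(t):
    """Cardinal cubic B-spline B(t) centred at 0 with support [-2, 2]."""
    t = np.abs(t)
    return np.where(t < 1, (4 - 6 * t ** 2 + 3 * t ** 3) / 6,
           np.where(t < 2, (2 - t) ** 3 / 6, 0.0))

# Values at the three nodes touching each grid point: these are the
# (1/6, 2/3, 1/6) weights that appear in every row of the matrix equation
vals = cubic_bspline(np.array([-1.0, 0.0, 1.0]))
```

Because each basis function overlaps only its immediate neighbours, the assembled system is banded, which keeps the method cheap to solve.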
Directory of Open Access Journals (Sweden)
Saira Esar Esar
2017-06-01
Full Text Available Cubic splines are commonly used for capturing changes in economic analysis. This is because traditional regression, including polynomial regression, fails to capture the underlying changes in the corresponding response variables. Moreover, these variables do not change monotonically, i.e. there are discontinuities in the trend of these variables over a period of time. The objective of this research is to explain the movement of under-five child mortality in Pakistan over the past few decades through a combination of statistical techniques. While cubic splines explain the movement of under-five child mortality to a large extent, we cannot deny the possibility that splines with fractional powers might better explain the underlying movement. Hence, we estimated the value of the fractional power by a nonlinear regression method and used it to develop the fractional splines. Although the fractional spline model may have the potential to improve upon the cubic spline model, it does not demonstrate a real improvement in this case; it might, perhaps, with a different data set.
Optimal Approximation of Biquartic Polynomials by Bicubic Splines
Directory of Open Access Journals (Sweden)
Kačala Viliam
2018-01-01
The goal of this paper is to resolve this problem. Unlike the spline curves, in the case of spline surfaces it is insufficient to suppose that the grid should be uniform and the spline derivatives computed from a biquartic polynomial. We show that the biquartic polynomial coefficients have to satisfy some additional constraints to achieve optimal approximation by bicubic splines.
On convexity and Schoenberg's variation diminishing splines
International Nuclear Information System (INIS)
Feng, Yuyu; Kozak, J.
1992-11-01
In the paper we characterize a convex function by the monotonicity of a particular variation diminishing spline sequence. The result extends the property known for the Bernstein polynomial sequence. (author). 4 refs
Application Of Laplace Interpolation In The Analysis Of Geopotential ...
African Journals Online (AJOL)
difference) method can be applied to regions of high data gradients without distortions and smoothing. However, by itself, this method is not convenient for the interpolation of geophysical data, which often consists of regions of widely variable ...
Cubic Splines for Trachea and Bronchial Tubes Grid Generation
Directory of Open Access Journals (Sweden)
Eliandro Rodrigues Cirilo
2006-02-01
Full Text Available Grid generation plays an important role in the development of efficient numerical techniques for solving complex flows. Therefore, the present work develops a method for bidimensional block-structured grid generation for geometries such as the trachea and bronchial tubes. A set of 55 blocks completes the geometry, whose contours are defined by cubic splines. Besides, this technique builds on earlier ones because of its simplicity and efficiency in terms of very complex geometry grid generation.
Interpolation of intermolecular potentials using Gaussian processes
Uteva, Elena; Graham, Richard S.; Wilkinson, Richard D.; Wheatley, Richard J.
2017-10-01
A procedure is proposed to produce intermolecular potential energy surfaces from limited data. The procedure involves generation of geometrical configurations using a Latin hypercube design, with a maximin criterion, based on inverse internuclear distances. Gaussian processes are used to interpolate the data, using over-specified inverse molecular distances as covariates, greatly improving the interpolation. Symmetric covariance functions are specified so that the interpolation surface obeys all relevant symmetries, reducing prediction errors. The interpolation scheme can be applied to many important molecular interactions with trivial modifications. Results are presented for three systems involving CO2, a system with a deep energy minimum (HF-HF), and a system with 48 symmetries (CH4-N2). In each case, the procedure accurately predicts an independent test set. Training this method with high-precision ab initio evaluations of the CO2-CO interaction enables a parameter-free, first-principles prediction of the CO2-CO cross virial coefficient that agrees very well with experiments.
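The interpolation step can be sketched as plain noise-free Gaussian process regression with a squared-exponential kernel on inverse-distance covariates. Everything below is a toy stand-in: a Lennard-Jones curve replaces the ab initio surfaces, and the kernel, length scale and jitter are illustrative choices, with none of the paper's symmetry-adapted covariance functions.

```python
import numpy as np

def rbf_kernel(A, B, length):
    """Squared-exponential covariance between two sets of feature vectors."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_interpolate(X_train, y_train, X_query, length, jitter=1e-8):
    """Noise-free GP regression: exact interpolation of the training data."""
    K = rbf_kernel(X_train, X_train, length) + jitter * np.eye(len(X_train))
    return rbf_kernel(X_query, X_train, length) @ np.linalg.solve(K, y_train)

# Toy "potential energy curve" with 1/r as the covariate, echoing the
# inverse internuclear distances used in the procedure
r_train = np.linspace(0.9, 2.5, 30)
X_train = (1.0 / r_train)[:, None]
E_train = 4.0 * ((1.0 / r_train) ** 12 - (1.0 / r_train) ** 6)

r_test = np.linspace(0.95, 2.4, 80)
E_pred = gp_interpolate(X_train, E_train, (1.0 / r_test)[:, None], length=0.15)
E_true = 4.0 * ((1.0 / r_test) ** 12 - (1.0 / r_test) ** 6)
err = float(np.max(np.abs(E_pred - E_true)))
```

Working in 1/r rather than r makes the target much smoother near the repulsive wall, which is one reason inverse-distance covariates improve the interpolation.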
Numerical simulation of Burgers' equation using cubic B-splines
Lakshmi, C.; Awasthi, Ashish
2017-03-01
In this paper, a numerical θ scheme is proposed for solving the nonlinear Burgers' equation. By employing the Hopf-Cole transformation, the nonlinear Burgers' equation is linearized to the linear heat equation. The resulting heat equation is further solved by cubic B-splines. The time discretization of the linear heat equation is carried out using the Crank-Nicolson scheme (θ = 1/2) as well as the backward Euler scheme (θ = 1). Accuracy in the temporal direction is improved by using Richardson extrapolation. This method hence possesses fourth order accuracy both in space and time. The matrix system which arises by using cubic splines is always tridiagonal. Therefore, working with splines has the advantage of reduced computational cost and easy implementation. Stability of the schemes has been discussed in detail and shown to be unconditionally stable. Three examples have been examined and the L2 and L∞ error norms have been calculated to establish the performance of the method. The numerical results obtained on applying this method have shown to give more accurate results than existing works of Kutluay et al. [1], Ozis et al. [2], Dag et al. [3], Salkuyeh et al. [4] and Korkmaz et al. [5].
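The Crank-Nicolson half of such a scheme can be sketched on the heat equation that the Hopf-Cole transformation produces. This sketch uses a plain finite-difference tridiagonal solve rather than the paper's cubic B-splines, and the grid, viscosity and initial data are made up; it does illustrate the cheap banded solve that a tridiagonal system allows.

```python
import numpy as np
from scipy.linalg import solve_banded

# Crank-Nicolson (theta = 1/2) for the heat equation u_t = nu * u_xx,
# the linear problem Hopf-Cole reduces Burgers' equation to
nu, L, N, dt, steps = 0.1, 1.0, 100, 0.001, 200
x = np.linspace(0.0, L, N + 1)
u = np.sin(np.pi * x)                 # u(x,0); exact solution decays as exp(-nu*pi^2*t)
h = L / N
r = nu * dt / (2 * h * h)

n_in = N - 1                          # interior unknowns (u = 0 at both ends)
ab = np.zeros((3, n_in))              # implicit tridiagonal matrix, banded storage
ab[0, 1:] = -r                        # superdiagonal
ab[1, :] = 1 + 2 * r                  # diagonal
ab[2, :-1] = -r                       # subdiagonal

for _ in range(steps):
    rhs = r * u[:-2] + (1 - 2 * r) * u[1:-1] + r * u[2:]  # explicit half
    u[1:-1] = solve_banded((1, 1), ab, rhs)               # implicit half

exact = np.exp(-nu * np.pi ** 2 * dt * steps) * np.sin(np.pi * x)
err = float(np.max(np.abs(u - exact)))
```

Each step costs O(N) thanks to the banded solver, which is the computational advantage the abstract attributes to the tridiagonal spline system.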
Parametric Integration by Magic Point Empirical Interpolation
Gaß, Maximilian; Glau, Kathrin
2015-01-01
We derive analyticity criteria for explicit error bounds and an exponential rate of convergence of the magic point empirical interpolation method introduced by Barrault et al. (2004). Furthermore, we investigate its application to parametric integration. We find that the method is well-suited to Fourier transforms and has a wide range of applications in such diverse fields as probability and statistics, signal and image processing, physics, chemistry and mathematical finance. To illustrate th...
Data reduction using cubic rational B-splines
Chou, Jin J.; Piegl, Les A.
1992-01-01
A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves including intersection or silhouette lines. The algorithm is based on the convex hull and the variation diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set and if it is impossible it subdivides the data set and reconsiders the subset. After accepting the subset the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a Bezier cubic segment. The algorithm uses this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy even in cases with large tolerances.
Testing for cubic smoothing splines under dependent data.
Nummi, Tapio; Pan, Jianxin; Siren, Tarja; Liu, Kun
2011-09-01
In most research on smoothing splines the focus has been on estimation, while inference, especially hypothesis testing, has received less attention. By defining design matrices for fixed and random effects and the structure of the covariance matrices of random errors in an appropriate way, the cubic smoothing spline admits a mixed model formulation, which places this nonparametric smoother firmly in a parametric setting. Thus nonlinear curves can be included with random effects and random coefficients. The smoothing parameter is the ratio of the random-coefficient and error variances and tests for linear regression reduce to tests for zero random-coefficient variances. We propose an exact F-test for the situation and investigate its performance in a real pine stem data set and by simulation experiments. Under certain conditions the suggested methods can also be applied when the data are dependent. © 2010, The International Biometric Society.
High-order numerical solutions using cubic splines
Rubin, S. G.; Khosla, P. K.
1975-01-01
The cubic spline collocation procedure for the numerical solution of partial differential equations was reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy for a nonuniform mesh and overall fourth-order accuracy for a uniform mesh. Application of the technique was made to Burgers' equation, to the flow around a linear corner, to the potential flow over a circular cylinder, and to boundary layer problems. The results confirmed the higher-order accuracy of the spline method and suggest that accurate solutions for more practical flow problems can be obtained with relatively coarse nonuniform meshes.
Spatial interpolation of monthly mean air temperature data for Latvia
Aniskevich, Svetlana
2016-04-01
Temperature data with high spatial resolution are essential for appropriate and qualitative local characteristics analysis. Nowadays the surface observation station network in Latvia consists of 22 stations recording daily air temperature, thus in order to analyze very specific and local features in the spatial distribution of temperature values in the whole Latvia, a high quality spatial interpolation method is required. Until now inverse distance weighted interpolation was used for the interpolation of air temperature data at the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre, and no additional topographical information was taken into account. This method made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method was applied and tested, considering auxiliary explanatory parameters. In order to spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution, which contains parameters such as 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, biggest lakes and rivers, population density. As the most appropriate of these parameters, based on a complex situation analysis, mean elevation and continentality was chosen. In order to validate interpolation results, several statistical indicators of the differences between predicted values and the values actually observed were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.
A complete S-shape feed rate scheduling approach for NURBS interpolator
Directory of Open Access Journals (Sweden)
Xu Du
2015-10-01
Full Text Available This paper presents a complete S-shape feed rate scheduling approach (CSFA) with confined jerk, acceleration and command feed rate for parametric tool paths. For a Non-Uniform Rational B-Spline (NURBS) tool path, the critical points of the tool path, where the radius of curvature reaches extreme values, are found first. Then, the NURBS curve is split into several NURBS sub-curves or blocks by the critical points. A bidirectional scanning strategy with the limitations of chord error, normal/tangential acceleration/jerk and command feed rate is employed to make the feed rate at the junctions between different NURBS blocks continuous. To improve the efficiency of the feed rate scheduling, the NURBS blocks are classified into three types: short, medium and long blocks. The feed rate profile corresponding to each NURBS block is generated according to the start/end feed rates, the arc length of the block and the limitations of tangential acceleration/jerk. In addition, two compensation strategies are proposed to make the feed rate more continuous and the arc increment more precise. Once the feed rate profile is determined, a second-order Taylor's expansion interpolation method is applied to generate the position commands. Finally, experiments with two free-form NURBS curves are conducted to verify the applicability and accuracy of the proposed method.
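The jerk-limited (S-shape) feed rate transition at the heart of such schedulers can be sketched as follows. The jerk, acceleration and feed rate limits are made-up numbers, the chord-error and arc-length logic is omitted, and the sketch assumes an accelerating transition (v1 ≥ v0):

```python
import numpy as np

def s_curve_velocity(v0, v1, J, A, n=201):
    """Jerk-limited (S-shape) velocity transition from feed v0 to v1 >= v0.
    Jerk ramps at +J, acceleration saturates at A if needed, then jerk -J."""
    dv = v1 - v0
    if J * (A / J) ** 2 >= dv:          # accel limit never reached: two jerk phases
        tj, ta = np.sqrt(dv / J), 0.0
    else:                                # trapezoidal acceleration profile
        tj = A / J
        ta = dv / A - tj
    T = 2 * tj + ta
    t = np.linspace(0.0, T, n)
    v = np.empty_like(t)
    for i, ti in enumerate(t):
        if ti < tj:                               # phase 1: jerk +J
            v[i] = v0 + 0.5 * J * ti ** 2
        elif ti < tj + ta:                        # phase 2: constant acceleration A
            v[i] = v0 + 0.5 * J * tj ** 2 + A * (ti - tj)
        else:                                     # phase 3: jerk -J
            td = T - ti
            v[i] = v1 - 0.5 * J * td ** 2
    return t, v

t, v = s_curve_velocity(v0=10.0, v1=50.0, J=2000.0, A=100.0)
```

A full scheduler chains such transitions per NURBS block, choosing start/end feed rates by the bidirectional scan so the profiles meet continuously at the block junctions.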
Placing Spline Knots in Neural Networks Using Splines as Activation Functions
Czech Academy of Sciences Publication Activity Database
Hlaváčková, Kateřina; Verleysen, M.
1997-01-01
Vol. 17, No. 3/4 (1997), pp. 159-166 ISSN 0925-2312 R&D Projects: GA ČR GA201/93/0427; GA ČR GA201/96/0971 Keywords: cubic-spline function * approximation error * knots of spline function * feedforward neural network Impact factor: 0.422, year: 1997
Türker, Tugba; Bayrak, Yusuf
2017-12-01
In this study, a Bayesian approach based on B-spline functions is used to estimate the spatial variations of the seismic b-values of the empirical Gutenberg-Richter (G-R) law in the North Anatolian Fault Zone (NAFZ), northern Turkey. The B-spline function method is developed for the estimation and interpolation of b-values. Spatial variations in b-values are known to reflect the stress field and can be used in earthquake hazard analysis. We propose that b-values be combined with the seismicity and tectonic background. The function β = b·ln(10) (a derivation of the G-R law), based on a Bayesian approach, is used to estimate the b-values and their standard deviations. A homogeneous instrumental catalog covering the period 1900-2017 is used. The NAFZ is divided into ten different seismic source regions based on epicenter distribution, tectonics, seismicity and faults. Three historical earthquakes (1343, Ms = 7.5; 1766, Ms = 7.3; 1894, Ms = 7.0) are included in region 2 (Marmara Sea (Tekirdağ-Merkez-Kumburgaz-Çınarcık Basins)), where a large earthquake is expected in the near future because none has been observed during the instrumental period. The spatial variations in the ten seismogenic regions of the NAFZ are estimated; the b-values range between 0.52±0.07 and 0.86±0.13. High b-values are estimated for the southern branch of the NAFZ (Edremit Fault Zones, Yenice-Gönen, Mustafa Kemal Paşa and Ulubat Faults) region, which is related to low stress. Low b-values are estimated between Tokat and Erzincan, which is related to high stress. Maps of the 2D and 3D spatial variations of the b-values are plotted: 2D contour maps, classed post maps (grouping the data into discrete classes), image maps (raster maps based on grid files), 3D wireframes (three-dimensional representations of grid files) and 3D surfaces. The spatial variations of the b-values can be used in earthquake hazard analysis for the NAFZ.
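The b-value that the Bayesian spline approach maps spatially is classically estimated from a catalog with Aki's maximum-likelihood formula; a minimal sketch on a synthetic catalog (the standard formula, not the paper's Bayesian estimator):

```python
import math
import numpy as np

def b_value_mle(magnitudes, m_c, dm=0.1):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= m_c,
    with Utsu's correction dm/2 for magnitude binning; sigma is a
    simple b/sqrt(N) standard-error estimate."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]
    b = math.log10(math.e) / (m.mean() - (m_c - dm / 2.0))
    sigma = b / math.sqrt(len(m))
    return b, sigma

# Synthetic catalog drawn from the G-R law with true b = 1.0
rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(scale=math.log10(math.e) / 1.0, size=5000)
b_hat, sig = b_value_mle(mags, m_c=2.0, dm=0.0)
```

The exponential scale log10(e)/b follows from the G-R frequency-magnitude density above the completeness magnitude, so the estimator should recover b close to 1.0 here.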
Hintzen, N.T.; Piet, G.J.; Brunel, T.P.A.
2010-01-01
For control and enforcement purposes, all fishing vessels operating in European waters are equipped with satellite-based Vessel Monitoring by Satellite systems (VMS) recording their position at regular time intervals. VMS data are increasingly used by scientists to study spatial and temporal
Baek, Hyun Jae; Shin, JaeWook; Jin, Gunwoo; Cho, Jaegeol
2017-10-24
Photoplethysmographic signals are useful for heart rate variability analysis in practical ambulatory applications. While reducing the sampling rate of signals is an important consideration for modern wearable devices that enable 24/7 continuous monitoring, few studies have investigated how to compensate for the low timing resolution of low-sampling-rate signals in accurate heart rate variability analysis. In this study, we utilized the parabola approximation method and measured it against the conventional cubic spline interpolation method for the time, frequency, and nonlinear domain variables of heart rate variability. For each parameter, the intra-class correlation, standard error of measurement, Bland-Altman 95% limits of agreement and root mean squared relative error are presented. The elapsed time required to compute each interpolation algorithm was also investigated. The results indicate that parabola approximation is a simple, fast, and accurate algorithm-based method for compensating for the low timing resolution of pulse beat intervals, with performance comparable to the conventional cubic spline interpolation method. Even though the absolute values of the heart rate variability variables calculated from a signal sampled at 20 Hz did not exactly match those calculated from a reference signal sampled at 250 Hz, the parabola approximation method remains a good interpolation method for assessing trends in HRV measurements for low-power wearable applications.
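The parabola approximation for refining the time of a pulse peak sampled at a low rate can be sketched with the standard three-point vertex formula (shown here as one plausible reading of the method, not the authors' exact implementation):

```python
import numpy as np

def parabola_peak(y, i):
    """Refine the location of a sampled peak at index i by fitting a parabola
    through (i-1, y[i-1]), (i, y[i]), (i+1, y[i+1]) and returning its vertex."""
    y0, y1, y2 = y[i - 1], y[i], y[i + 1]
    denom = y0 - 2.0 * y1 + y2
    d = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return i + d                       # sub-sample peak position

# A pure parabola peaking at t = 2.3 is recovered exactly from integer samples
t = np.arange(6, dtype=float)
y = -(t - 2.3) ** 2
i = int(np.argmax(y))
t_peak = parabola_peak(y, i)           # close to 2.3
```

This refinement is what lets beat-to-beat intervals retain sub-sample timing accuracy even when the waveform is sampled at only 20 Hz.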
A parameterization of observer-based controllers: Bumpless transfer by covariance interpolation
DEFF Research Database (Denmark)
Stoustrup, Jakob; Komareji, Mohammad
2009-01-01
This paper presents an algorithm to interpolate between two observer-based controllers for a linear multivariable system such that the closed loop system remains stable throughout the interpolation. The method interpolates between the inverse Lyapunov functions for the two original state feedbacks...
Acoustic Emission Signatures of Fatigue Damage in Idealized Bevel Gear Spline for Localized Sensing
Directory of Open Access Journals (Sweden)
Lu Zhang
2017-06-01
Full Text Available In many rotating machinery applications, such as helicopters, the splines of an externally-splined steel shaft that emerges from the gearbox engage with the reverse geometry of an internally-splined driven shaft for the delivery of power. The splined section of the shaft is a critical and non-redundant element which is prone to cracking due to complex loading conditions, so early detection of flaws is required to prevent catastrophic failures. The acoustic emission (AE) method is a direct way of detecting such active flaws, but its application to a splined shaft in a gearbox is difficult due to the interference of background noise and uncertainty about the effects of the wave propagation path on the received AE signature. Here, to model how AE may detect fault propagation in a hollow cylindrical splined shaft, the splined section is essentially unrolled into a metal plate of the same thickness as the cylinder wall. Spline ridges are cut into this plate, a through-notch is cut perpendicular to the spline to model fatigue crack initiation, and tensile cyclic loading is applied parallel to the spline to propagate the crack. In this paper, a new piezoelectric sensor array is introduced that can be placed within the gearbox to minimize the wave propagation path. The fatigue crack growth of a notched and flattened gearbox spline component is monitored using the new piezoelectric sensor array and conventional sensors in a laboratory environment, with the purpose of developing source models and testing the new sensor performance. The AE data are collected continuously, together with data from strain gauges strategically positioned on the structure. A significant amount of continuous emission due to the plastic deformation accompanying the crack growth is observed. The frequency spectra of continuous emissions and burst emissions are compared to understand the differences between plastic deformation and sudden crack jumps. The
Delimiting areas of endemism through kernel interpolation.
Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J
2015-01-01
We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
Bhadra, Anindya; Carroll, Raymond J
2016-07-01
In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem, and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step, which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that, for truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency when compared with existing alternatives for truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating that the proposed method is a particularly valuable tool for challenging applications with high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
Preference learning with evolutionary Multivariate Adaptive Regression Spline model
DEFF Research Database (Denmark)
Abou-Zleikha, Mohamed; Shaker, Noor; Christensen, Mads Græsbøll
2015-01-01
This paper introduces a novel approach for pairwise preference learning through combining an evolutionary method with Multivariate Adaptive Regression Spline (MARS). Collecting users' feedback through pairwise preferences is recommended over other ranking approaches as this method is more appealing...... for function approximation as well as being relatively easy to interpret. MARS models are evolved based on their efficiency in learning pairwise data. The method is tested on two datasets that collectively provide pairwise preference data of five cognitive states expressed by users. The method is analysed...
Energy Technology Data Exchange (ETDEWEB)
Ohta, Y. [Osaka Univ. (Japan). Faculty of Engineering]; Gotou, O. [Sumitomo Metal Industries, Co. Ltd., Osaka (Japan)]
1997-08-31
The interpolation point augmentation method extends the delay augmentation method by dealing not only with the properties of each individual space but with the properties of the weak topology of the broader space, and it provides the solution of the multiblock continuous-time L1 problem. If the augmentation block is formed to have unstable zeros, the method need not be limited by the dispersion-time delay, as in the delay augmentation method, because convergence is obtained from the properties of the weak-topology subspace. Furthermore, owing to the properties of the weak topology, an optimal solution could be obtained when the method was applied to the H∞ control problem. Regarding the application of the interpolation point augmentation method, some problems concerning convergence remain to be solved. One is that the meaning of the construction of the augmentation block is not clear and its relation to existing theory is not sufficiently resolved. Another is that quasi-optimality when the augmentation is stopped partway is not assured, because weak convergence is weaker than norm convergence. All these points are discussed in this report. 9 refs., 1 tab.
Nonlinear interpolation fractal classifier for multiple cardiac arrhythmias recognition
Energy Technology Data Exchange (ETDEWEB)
Lin, C.-H. [Department of Electrical Engineering, Kao-Yuan University, No. 1821, Jhongshan Rd., Lujhu Township, Kaohsiung County 821, Taiwan (China); Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)], E-mail: eechl53@cc.kyu.edu.tw; Du, Y.-C.; Chen Tainsong [Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)
2009-11-30
This paper proposes a method for cardiac arrhythmias recognition using a nonlinear interpolation fractal classifier. A typical electrocardiogram (ECG) consists of the P-wave, QRS-complexes, and T-wave. An iterated function system (IFS) uses nonlinear interpolation in the maps, and similarity maps are used to construct various data sequences, including the fractal patterns of supraventricular ectopic beats, bundle branch ectopic beats, and ventricular ectopic beats. Grey relational analysis (GRA) is proposed to recognize normal heartbeats and cardiac arrhythmias. The nonlinear interpolation terms produce families of functions with fractal dimension (FD), the so-called nonlinear interpolation functions (NIF), and make the fractal patterns more distinguishable between normal and ill subjects. The proposed QRS classifier is tested using the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. Compared with other methods, the proposed hybrid method demonstrates greater efficiency and higher accuracy in recognizing ECG signals.
Tang, Youhua; Pagowski, Mariusz; Chai, Tianfeng; Pan, Li; Lee, Pius; Baker, Barry; Kumar, Rajesh; Delle Monache, Luca; Tong, Daniel; Kim, Hyun-Cheol
2017-12-01
This study applies the Gridpoint Statistical Interpolation (GSI) 3D-Var assimilation tool, originally developed by the National Centers for Environmental Prediction (NCEP), to improve surface PM2.5 predictions over the contiguous United States (CONUS) by assimilating aerosol optical depth (AOD) and surface PM2.5 in version 5.1 of the Community Multi-scale Air Quality (CMAQ) modeling system. An optimal interpolation (OI) method implemented earlier (Tang et al., 2015) for the CMAQ modeling system is also tested for the same period (July 2011) over the same CONUS domain. Both the GSI and OI methods assimilate surface PM2.5 observations at 00:00, 06:00, 12:00 and 18:00 UTC, and MODIS AOD at 18:00 UTC. The assimilation of observations using both GSI and OI generally helps reduce the prediction biases and improves the correlation between model predictions and observations. In the GSI experiments, assimilation of surface PM2.5 (particulate matter with diameter ≤ 2.5 µm) helps reduce the root mean squared error (RMSE). It should be noted that the 3D-Var and OI methods used here differ in several important ways besides the data assimilation schemes. For instance, the OI uses relatively large model uncertainties, which helps yield smaller mean biases, but sometimes causes the RMSE to increase. We also examine and discuss the sensitivity of the assimilation experiments' results to the AOD forward operators.
Achieving high data reduction with integral cubic B-splines
Chou, Jin J.
1993-01-01
During geometry processing, tangent directions at the data points are often readily available from the computation process that generates the points. It is desirable to utilize this information to improve the accuracy of curve fitting and to improve data reduction. This paper presents a curve fitting method which utilizes both position and tangent direction data. The method produces G¹ non-rational B-spline curves. In the examples, the method demonstrates very good data reduction rates while maintaining high accuracy in both position and tangent direction.
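A generic flavor of spline data reduction, fitting many samples with a smoothing cubic B-spline that needs far fewer coefficients, can be sketched with SciPy. Note this is plain least-squares smoothing, not the paper's tangent-constrained G¹ method:

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Densely sampled data from a smooth curve
x = np.linspace(0.0, 2.0 * np.pi, 400)
y = np.sin(x)

# Smoothing cubic B-spline fit: s bounds the sum of squared residuals,
# and the routine chooses as few knots as that budget allows.
tck = splrep(x, y, k=3, s=1e-6)
knots, coeffs, degree = tck

y_fit = splev(x, tck)
print(len(x), "samples ->", len(knots), "knots")
```

The data reduction comes from storing only the knot vector and coefficients instead of all 400 samples, at the cost of the bounded residual `s`.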
Research of Cubic Bezier Curve NC Interpolation Signal Generator
Directory of Open Access Journals (Sweden)
Shijun Ji
2014-08-01
Full Text Available Interpolation technology is the core of a computer numerical control (CNC) system, and the precision and stability of the interpolation algorithm directly affect the machining precision and speed of the CNC system. Most existing numerical control interpolation technology can only achieve circular arc, linear or parabola interpolation. For the NC machining of parts with complicated surfaces, however, a mathematical model must be established to generate the curve and surface outlines of the parts, and the generated outline is then discretized into a large number of line or arc segments for processing. This creates complex programs and a large amount of code, and inevitably introduces approximation error; all these factors affect the machining accuracy, surface roughness and machining efficiency. The stepless interpolation of a cubic Bezier curve controlled by an analog signal is studied in this paper. The tool motion trajectory of the Bezier curve can be planned directly in the CNC system by adjusting the control points, and the resulting data are fed to the control motor, which completes the precise feeding along the Bezier curve. This method extends the trajectory control capability of CNC from simple lines and circular arcs to complex engineering curves, and provides a new way to machine curved-surface parts economically with high quality and efficiency.
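The cubic Bezier curve at the heart of such an interpolator is evaluated from its four control points; a minimal De Casteljau sketch (the standard algorithm, with illustrative control points):

```python
import numpy as np

def de_casteljau(points, u):
    """Evaluate a Bezier curve at parameter u in [0, 1] by repeated
    linear interpolation of the control polygon (De Casteljau's algorithm)."""
    pts = np.asarray(points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - u) * pts[:-1] + u * pts[1:]
    return pts[0]

# Illustrative cubic control points for a tool path segment
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
p = de_casteljau(ctrl, 0.5)   # point at the middle of the parameter range
```

Adjusting the control points reshapes the whole trajectory, which is the planning mechanism the abstract describes.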
Temporal interpolation in Meteosat images
DEFF Research Database (Denmark)
Larsen, Rasmus; Hansen, Johan Dore; Ersbøll, Bjarne Kjær
in such animated films are perceived as being jerky due to the low temporal sampling rate in general and missing images in particular. In order to perform a satisfactory temporal interpolation we estimate and use the optical flow corresponding to every image in the sequence. The estimation of the optical flow...... a threshold between clouds and land/water. The temperature maps are estimated using observations from the image sequence itself at cloud free pixels and ground temperature measurements from a series of meteorological observation stations in Europe. The temporal interpolation of the images is based on a path...... of each pixel determined by the estimated optical flow. The performance of the algorithm is illustrated by the interpolation of a sequence of Meteosat infrared images....
Efficient Algorithms and Design for Interpolation Filters in Digital Receiver
Directory of Open Access Journals (Sweden)
Xiaowei Niu
2014-05-01
Full Text Available Based on polynomial functions, this paper introduces a generalized design method for interpolation filters. Polynomial-based interpolation filters can be implemented efficiently using a modified Farrow structure with an arbitrary frequency response; the filters allow many passbands and stopbands, and for each band the desired amplitude and weight can be set arbitrarily. The optimized coefficients of the interpolation filters in the time domain are obtained by minimizing a weighted mean squared error function, converted into a quadratic programming problem. The optimized coefficients in the frequency domain are obtained by minimizing the maximum (minimax) of the weighted mean squared error function. The degree of the polynomials and the length of the interpolation filter can be selected arbitrarily. Numerical examples verify that the proposed design method not only reduces the hardware cost effectively but also guarantees excellent performance.
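A polynomial-based interpolation filter of the kind a Farrow structure implements can be sketched with the classic 4-tap cubic Lagrange interpolator (a textbook special case, not the optimized minimax design of the paper):

```python
import numpy as np

def cubic_lagrange_interp(x, n, mu):
    """Interpolate x at fractional position n + mu (0 <= mu < 1) using the
    four samples x[n-1..n+2]; each tap weight is a cubic polynomial in mu,
    which is exactly what a Farrow structure evaluates with fixed FIR branches."""
    xm1, x0, x1, x2 = x[n - 1], x[n], x[n + 1], x[n + 2]
    w_m1 = -mu * (mu - 1.0) * (mu - 2.0) / 6.0
    w_0 = (mu + 1.0) * (mu - 1.0) * (mu - 2.0) / 2.0
    w_1 = -(mu + 1.0) * mu * (mu - 2.0) / 2.0
    w_2 = (mu + 1.0) * mu * (mu - 1.0) / 6.0
    return w_m1 * xm1 + w_0 * x0 + w_1 * x1 + w_2 * x2

# A cubic polynomial is reproduced exactly by the 4-tap cubic interpolator
f = lambda t: t**3 - 2.0 * t**2 + 3.0
samples = np.array([f(t) for t in range(6)])
y = cubic_lagrange_interp(samples, 2, 0.5)   # estimate f(2.5)
```

Because mu enters only through low-degree polynomials, the fractional delay can be varied at runtime without redesigning the filter, which is the hardware advantage the abstract targets.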
Cubic B-spline calibration for 3D super-resolution measurements using astigmatic imaging.
Proppert, Sven; Wolter, Steve; Holm, Thorge; Klein, Teresa; van de Linde, Sebastian; Sauer, Markus
2014-05-05
In recent years three-dimensional (3D) super-resolution fluorescence imaging by single-molecule localization (localization microscopy) has gained considerable interest because of its simple implementation and high optical resolution. Astigmatic and biplane imaging are experimentally simple methods to engineer a 3D-specific point spread function (PSF), but existing evaluation methods have proven problematic in practical application. Here we introduce the use of cubic B-splines to model the relationship of axial position and PSF width in the above mentioned approaches and compare the performance with existing methods. We show that cubic B-splines are the first method that can combine precision, accuracy and simplicity.
Calculation of reactivity without Lagrange interpolation
International Nuclear Information System (INIS)
Suescun D, D.; Figueroa J, J. H.; Rodriguez R, K. C.; Villada P, J. P.
2015-09-01
A new method to numerically solve the inverse equation of point kinetics without using the Lagrange interpolating polynomial is formulated; this method uses a polynomial approximation with N points, based on a recurrence process, to simulate different forms of nuclear power. The results show reliable accuracy. Furthermore, the proposed method is suitable for real-time measurements of reactivity, even with calculation step sizes greater than Δt = 0.3 s; owing to its precision, it can be used to implement a digital reactivity meter operating in real time. (Author)
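Standard inverse point kinetics, of which the paper's recurrence scheme is a Lagrange-free variant, can be sketched as follows (a one-group textbook form with assumed kinetics parameters, not the paper's exact algorithm):

```python
import math

def inverse_kinetics(power, dt, beta=0.0065, lam=0.08, Lambda=1e-4):
    """Reactivity history from a power history n(t) with one delayed-neutron
    group: rho = beta + Lambda*(dn/dt)/n - lam*Lambda*C/n, where the precursor
    concentration C is advanced by an exact exponential recurrence."""
    rho = []
    C = beta * power[0] / (Lambda * lam)       # equilibrium precursors
    for k in range(1, len(power)):
        n_avg = 0.5 * (power[k] + power[k - 1])
        C = (C * math.exp(-lam * dt)
             + (beta / Lambda) * n_avg * (1.0 - math.exp(-lam * dt)) / lam)
        dndt = (power[k] - power[k - 1]) / dt
        n = power[k]
        rho.append(beta + Lambda * dndt / n - lam * Lambda * C / n)
    return rho

# Constant power must give zero reactivity (a basic consistency check)
rho = inverse_kinetics([1.0] * 50, dt=0.3)
```

The dt = 0.3 s step mirrors the step size quoted in the abstract; with precursors initialized at equilibrium, constant power yields reactivity identically zero.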
Considerations Related to Interpolation of Experimental Data Using Piecewise Functions
Directory of Open Access Journals (Sweden)
Stelian Alaci
2016-12-01
Full Text Available The paper presents a method for interpolating experimental data by means of a piecewise function, in which the points where the form of the function changes are found simultaneously with the other parameters used in an optimization criterion. The optimization process is based on defining the interpolation function by a single expression built on the Heaviside function and regarding the optimized function as a generalized, infinitely differentiable function. The methodology is illustrated with a concrete example.
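The idea of writing a piecewise model as a single Heaviside-based expression and optimizing the break point together with the other parameters can be sketched as follows (a least-squares toy version with two linear pieces; the paper's optimization criterion may differ):

```python
import numpy as np

def fit_two_piece(x, y, candidates):
    """Fit y ~ p1(x) + H(x - c) * (p2(x) - p1(x)) with linear pieces:
    for each candidate break point c, solve linear least squares and keep
    the c with the smallest residual."""
    best = (np.inf, None, None)
    for c in candidates:
        H = (x >= c).astype(float)          # Heaviside switch
        A = np.column_stack([np.ones_like(x), x, H, H * x])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = float(np.sum((A @ coef - y) ** 2))
        if sse < best[0]:
            best = (sse, c, coef)
    return best

# Synthetic data with a slope change at x = 0.6
x = np.linspace(0.0, 1.0, 101)
y = np.where(x < 0.6, 1.0 + 2.0 * x, 2.2 - 3.0 * (x - 0.6))
sse, c_best, coef = fit_two_piece(x, y, np.linspace(0.05, 0.95, 91))
```

The grid search over `c` stands in for the joint optimization; in a smooth formulation the Heaviside factor would be replaced by a differentiable approximation so that `c` can be optimized by gradient methods, as the single-expression idea suggests.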
Study on the algorithm for Newton-Raphson iteration interpolation of NURBS curve and simulation
Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng
2017-04-01
The Newton-Raphson iteration interpolation method for NURBS curves suffers from problems such as long interpolation times, complicated calculations, and difficulty in controlling the step error of the NURBS curve. This paper studies the algorithm for Newton-Raphson iteration interpolation of NURBS curves and its simulation. Newton-Raphson iteration is used to calculate the interpolation points (xi, yi, zi). Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems, and that the algorithm is correct and consistent with the requirements of NURBS curve interpolation.
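The parameter update at the core of such interpolators advances u so that each step covers a desired arc increment ΔL; a first-order sketch on a generic parametric curve (an ellipse stands in for the NURBS curve; the paper iterates this with Newton-Raphson for higher accuracy):

```python
import numpy as np

def interpolate_curve(c, dc, u0, u_end, dL):
    """March along curve c(u) in steps of approximately equal arc length dL
    using the first-order parameter update u += dL / |c'(u)|."""
    us = [u0]
    while us[-1] < u_end:
        u = us[-1]
        us.append(u + dL / np.linalg.norm(dc(u)))
    return np.array(us)

# Ellipse as a stand-in parametric tool path
a, b = 2.0, 1.0
c = lambda u: np.array([a * np.cos(u), b * np.sin(u)])
dc = lambda u: np.array([-a * np.sin(u), b * np.cos(u)])

us = interpolate_curve(c, dc, 0.0, np.pi, dL=0.01)
pts = np.array([c(u) for u in us])
steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
```

Even though the parametric speed of the ellipse varies by a factor of two, the chord lengths stay close to ΔL; the residual step error is what the Newton-Raphson iteration (or a second-order Taylor term) is used to suppress.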
LIMIT STRESS SPLINE MODELS FOR GRP COMPOSITES
African Journals Online (AJOL)
ES OBE
Department of Mechanical Engineering, Anambra State University of Science and Technology, Uli ... 12 were established. The optimization of quadratic and cubic models by gradient search optimization gave the critical strain as 0.024, .... 2.2.1 Derivation of Cubic Spline Equation. The basic assumptions to be used are: 1.
Weighted thin-plate spline image denoising
Czech Academy of Sciences Publication Activity Database
Kašpar, Roman; Zitová, Barbara
2003-01-01
Roč. 36, č. 12 (2003), s. 3027-3030 ISSN 0031-3203 R&D Projects: GA ČR GP102/01/P065 Institutional research plan: CEZ:AV0Z1075907 Keywords : image denoising * thin-plate splines Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.611, year: 2003
5-D interpolation with wave-front attributes
Xie, Yujiang; Gajewski, Dirk
2017-11-01
Most 5-D interpolation and regularization techniques reconstruct missing data in the frequency domain using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning such as the angle of emergence and wave-front curvatures. These attributes encode structural information of subsurface features such as the dip and strike of a reflector. The wave-front attributes work in a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved alongside the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. In this work the conventional 3-D partial CRS method is improved to address these two problems, and we call the result wave-front-attribute-based 5-D interpolation (5-D WABI). Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals that
Interpol: An R package for preprocessing of protein sequences.
Heider, Dominik; Hoffmann, Daniel
2011-06-17
Most machine learning techniques currently applied in the literature need a fixed dimensionality of input data. However, this requirement is frequently violated by real input data, such as DNA and protein sequences, which often differ in length due to insertions and deletions. It is also notable that performance in classification and regression is often improved by numerical encoding of amino acids, compared with the commonly used sparse encoding. The software "Interpol" encodes amino acid sequences as numerical descriptor vectors using a database of currently 532 descriptors (mainly from AAindex), and normalizes sequences to uniform length with one of five linear or non-linear interpolation algorithms. Interpol is distributed as an open-source, platform-independent R package. It is typically used for preprocessing of amino acid sequences for classification or regression. The functionality of Interpol widens the spectrum of machine learning methods that can be applied to biological sequences, and will in many cases improve their performance in classification and regression.
T-Spline Based Unifying Registration Procedure for Free-Form Surface Workpieces in Intelligent CMM
Directory of Open Access Journals (Sweden)
Zhenhua Han
2017-10-01
Full Text Available With the development of the modern manufacturing industry, free-form surfaces are widely used in various fields, and the automatic detection of a free-form surface is an important function of future intelligent three-coordinate measuring machines (CMMs). To improve the intelligence of CMMs, a new visual system is designed based on the characteristics of CMMs. A unified model of the free-form surface is proposed based on T-splines, together with a discretization method for the T-spline surface formula model. Under this discretization, the position and orientation of the workpiece are recognized by point cloud registration. A high-accuracy method is proposed for evaluating the deviation between the measured point cloud and the T-spline surface formula. The experimental results demonstrate that the proposed method has the potential to realize the automatic detection of different free-form surfaces and improve the intelligence of CMMs.
Optoelectronic imaging of speckle using image processing method
Wang, Jinjiang; Wang, Pengfei
2018-01-01
A detailed image processing workflow for laser speckle interferometry is proposed as an example for a postgraduate course. Several image processing methods are combined to deal with the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; the thresholding segmentation is also based on the heat equation with PDEs; the central line is extracted based on the image skeleton, with branches removed automatically; the phase level is calculated by the spline interpolation method; and the fringe phase can then be unwrapped. Finally, the image processing method was used to automatically measure a bubble in rubber under negative pressure, which could be used in tire detection.
Interpolating of climate data using R
Reinhardt, Katja
2017-04-01
Interpolation methods are used in many different geoscientific areas, such as soil physics, climatology and meteorology. Unknown values are calculated by applying statistical approaches to known values. So far, the majority of climatologists have been using computer languages such as FORTRAN or C++, but there is also an increasing number of climate scientists using R for data processing and visualization. Most of them, however, are still working with array- and vector-based data, which is often associated with complex R code structures. For the presented study, I have decided to convert the climate data into geodata and to perform the whole data processing using the raster package, gstat and similar packages, providing a much more comfortable way of data handling. A central goal of my approach is to create an easy to use, powerful and fast R script, implementing the entire geodata processing and visualization in a single, fully automated R based procedure, which avoids the necessity of using other software packages such as ArcGIS or QGIS. Thus, large amounts of data with recurrent process sequences can be processed. The aim of the presented study, which is located in western Central Asia, is to interpolate wind data based on the European reanalysis data Era-Interim, available as raster data with a resolution of 0.75° x 0.75°, to a finer grid. Various interpolation methods are used: inverse distance weighting, the geostatistical methods ordinary kriging and regression kriging, a generalized additive model, and the machine learning algorithms support vector machine and neural networks. Apart from the first two mentioned methods, the methods are used with influencing factors, e.g. geopotential and topography.
Error Estimates Derived from the Data for Least-Squares Spline Fitting
Energy Technology Data Exchange (ETDEWEB)
Jerome Blair
2007-06-25
The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
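The central idea, comparing two least-squares spline fits with different mesh sizes so that the signal-dependent F-error can be estimated from their difference, can be sketched with SciPy (illustrative only; the paper's exact estimator is not reproduced here):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 3.0, 600)
signal = np.sin(3.0 * x)
y = signal + rng.normal(scale=0.02, size=x.size)   # noisy measurement

def lsq_spline(n_interior):
    """Least-squares cubic spline with n_interior equally spaced knots."""
    t = np.linspace(x[0], x[-1], n_interior + 2)[1:-1]
    return LSQUnivariateSpline(x, y, t, k=3)

coarse = lsq_spline(4)    # larger mesh: bigger F-error, smaller R-error
fine = lsq_spline(8)      # halved mesh: F-error drops roughly as h**4

# The difference between the two fits tracks the coarse fit's F-error
f_err_est = float(np.max(np.abs(coarse(x) - fine(x))))
```

Because the F-error scales like the fourth power of the mesh size for a signal with four derivatives, halving the mesh makes the fine fit's F-error nearly negligible, so the fit difference is dominated by the coarse fit's F-error.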
INTERPOL's Surveillance Network in Curbing Transnational Terrorism
Gardeazabal, Javier; Sandler, Todd
2015-01-01
This paper investigates the role that INTERPOL surveillance – the Mobile INTERPOL Network Database (MIND) and the Fixed INTERPOL Network Database (FIND) – played in the War on Terror since its inception in 2005. MIND/FIND surveillance allows countries to screen people and documents systematically at border crossings against INTERPOL databases on terrorists, fugitives, and stolen and lost travel documents. Such documents have been used in the past by terrorists to transit borders. By applyi...
Directory of Open Access Journals (Sweden)
Bush William S
2009-12-01
Full Text Available Abstract Background Gene-centric analysis tools for genome-wide association study data are being developed both to annotate single locus statistics and to prioritize or group single nucleotide polymorphisms (SNPs) prior to analysis. These approaches require knowledge about the relationships between SNPs on a genotyping platform and genes in the human genome. SNPs in the genome can represent broader genomic regions via linkage disequilibrium (LD), and population-specific patterns of LD can be exploited to generate a data-driven map of SNPs to genes. Methods In this study, we implemented LD-Spline, a database routine that defines the genomic boundaries a particular SNP represents using linkage disequilibrium statistics from the International HapMap Project. We compared the LD-Spline haplotype block partitioning approach to that of the four-gamete rule and the Gabriel et al. approach using simulated data; in addition, we processed two commonly used genome-wide association study platforms. Results We illustrate that LD-Spline performs comparably to the four-gamete rule and the Gabriel et al. approach; however, as a SNP-centric approach LD-Spline has the added benefit of systematically identifying a genomic boundary for each SNP, where the global block partitioning approaches may falter due to sampling variation in LD statistics. Conclusion LD-Spline is an integrated database routine that quickly and effectively defines the genomic region marked by a SNP using linkage disequilibrium, with a SNP-centric block definition algorithm.
Directory of Open Access Journals (Sweden)
Francesca Galassi
Full Text Available Assessment of coronary stenosis severity is crucial in clinical practice. This study proposes a novel method to generate 3D models of stenotic coronary arteries, directly from 2D coronary images, suitable for immediate assessment of the stenosis severity. From multiple 2D X-ray coronary arteriogram projections, 2D vessels were extracted. A 3D centreline was reconstructed as the intersection of surfaces from corresponding branches. Next, 3D luminal contours were generated in a two-step process: first, a Non-Uniform Rational B-Spline (NURBS) circular contour was designed and, second, its control points were adjusted to interpolate computed 3D boundary points. Finally, a 3D surface was generated as an interpolation across the control points of the contours and used in the analysis of the severity of a lesion. To evaluate the method, we compared 3D reconstructed lesions with Optical Coherence Tomography (OCT), an invasive imaging modality that enables high-resolution endoluminal visualization of lesion anatomy. Validation was performed on routine clinical data. Analysis of paired cross-sectional area discrepancies indicated that the proposed method more closely represented OCT contours than conventional approaches to luminal surface reconstruction, with overall root-mean-square errors ranging from 0.213 mm² to 1.013 mm², and a maximum error of 1.837 mm². Comparison of the volume reduction due to a lesion with the corresponding fractional flow reserve (FFR) measurement suggests that the method may help in estimating the physiological significance of a lesion. The algorithm accurately reconstructed 3D models of lesioned arteries and enabled quantitative assessment of stenoses. The proposed method has the potential to allow immediate analysis of stenoses in clinical practice, thereby providing incremental diagnostic and prognostic information to guide treatments in real time and without the need for invasive techniques.
Chen, Xiangdong; He, Liwen; Jeon, Gwanggil; Jeong, Jechang
2014-05-01
In this paper, we present a novel color image demosaicking algorithm based on a directional weighted interpolation method and a gradient inverse-weighted filter-based refinement method. By applying a directional weighted interpolation method, the missing center pixel is interpolated, and then, using the nearest neighboring pixels of the pre-interpolated pixel within the same color channel, the accuracy of interpolation is refined using a five-point gradient inverse weighted filtering method we propose. The refined interpolated pixel values can then be used to estimate the other missing pixel values successively, according to the inter-channel correlation. Experimental analysis of images revealed that our proposed algorithm provides superior performance in terms of both objective and subjective image quality compared to conventional state-of-the-art demosaicking algorithms. Our implementation has very low complexity and is therefore well suited for real-time applications.
Semisupervised feature selection via spline regression for video semantic recognition.
Han, Yahong; Yang, Yi; Yan, Yan; Ma, Zhigang; Sebe, Nicu; Zhou, Xiaofang
2015-02-01
To improve both the efficiency and accuracy of video semantic recognition, we can perform feature selection on the extracted video features to select a subset of features from the high-dimensional feature set for a compact and accurate video data representation. Provided the number of labeled videos is small, supervised feature selection could fail to identify the relevant features that are discriminative to target classes. In many applications, abundant unlabeled videos are easily accessible. This motivates us to develop semisupervised feature selection algorithms that better identify the relevant video features discriminative to target classes by effectively exploiting the information underlying the huge amount of unlabeled video data. In this paper, we propose a framework for video semantic recognition by semisupervised feature selection via spline regression (S²FS²R). Two scatter matrices are combined to capture both the discriminative information and the local geometric structure of labeled and unlabeled training videos: a within-class scatter matrix encoding discriminative information of labeled training videos and a spline scatter output from a local spline regression encoding the data distribution. An ℓ2,1-norm is imposed as a regularization term on the transformation matrix to ensure it is sparse in rows, making it particularly suitable for feature selection. To solve S²FS²R efficiently, we develop an iterative algorithm and prove its convergence. In the experiments, three typical tasks of video semantic recognition, namely video concept detection, video classification, and human action recognition, are used to demonstrate that the proposed S²FS²R achieves better performance than the state-of-the-art methods.
A modified linear algebraic approach to electron scattering using cubic splines
International Nuclear Information System (INIS)
Kinney, R.A.
1986-01-01
A modified linear algebraic approach to the solution of the Schrödinger equation for low-energy electron scattering is presented. The method uses a piecewise cubic-spline approximation of the wavefunction. Results in the static-potential and the static-exchange approximations for e⁻ + H s-wave scattering are compared with unmodified linear algebraic and variational linear algebraic methods. (author)
Validating the Multidimensional Spline Based Global Aerodynamic Model for the Cessna Citation II
De Visser, C.C.; Mulder, J.A.
2011-01-01
The validation of aerodynamic models created using flight test data is a time consuming and often costly process. In this paper a new method for the validation of global nonlinear aerodynamic models based on multivariate simplex splines is presented. This new method uses the unique properties of the
Nuclear data banks generation by interpolation
International Nuclear Information System (INIS)
Castillo M, J. A.
1999-01-01
Nuclear data bank generation is a process that requires a great amount of resources, both computing and human. Since it is at times necessary to create a great number of them, it is convenient to have a reliable tool that generates data banks with the fewest resources, in the least possible time and with a very good approximation. This work presents the results obtained during the development of the INTPOLBI code, used to generate nuclear data banks by bicubic polynomial interpolation, taking the uranium and gadolinia percentages as independent variables. Two approaches were developed, applying in both cases the finite element method and using one element with 16 nodes to carry out the interpolation. In the first approach the canonical basis was employed to obtain the interpolating polynomial and, later, the corresponding linear equation system, which was solved by Gaussian elimination with partial pivoting. In the second approach, the Newton basis was used to obtain the mentioned system, resulting in a lower triangular matrix whose structure, after elementary operations, yields a block-diagonal matrix with special characteristics that is easier to work with. For the validation tests, a comparison was made between the values obtained with the INTPOLBI and INTERTEG codes (the latter created at the Instituto de Investigaciones Electricas (MX) for the same purpose) and data banks created through the conventional process, that is, with the nuclear codes normally used. It can be concluded that the nuclear data banks generated with the INTPOLBI code constitute a very good approximation which, even though it does not wholly replace the conventional process, is helpful when a great number of data banks must be created
Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year
Kamaruddin, Halim Shukri; Ismail, Noriszura
2014-06-01
Nonparametric regression uses the data to derive the best-fitting model from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a smoothing method for generalized linear models (GLMs), combining ordinary B-splines with a difference roughness penalty on the coefficients, applied to one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance in insurance calculations, where the accuracy of models and forecasts is the industry's main concern. The original idea of P-splines is extended here to two-dimensional mortality data, indexed by age at death and year of death, with the large data set supplied by the Department of Statistics Malaysia. This extension constructs the best-fitting surface and provides sensible predictions of the underlying mortality rate in the Malaysian case.
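A minimal one-dimensional Eilers-Marx P-spline, the building block that the abstract extends to two dimensions, can be sketched as follows. The function name, basis size, penalty weight and synthetic data are illustrative assumptions, and `BSpline.design_matrix` requires SciPy >= 1.8.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_basis=20, degree=3, lam=1.0, order=2):
    """Penalized B-spline (P-spline) smoother: B-spline basis plus a
    difference penalty of the given order on the coefficients."""
    inner = np.linspace(x.min(), x.max(), n_basis - degree + 1)
    knots = np.r_[[inner[0]] * degree, inner, [inner[-1]] * degree]
    B = BSpline.design_matrix(x, knots, degree).toarray()
    D = np.diff(np.eye(B.shape[1]), n=order, axis=0)   # difference operator
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ coef

# Smooth a noisy synthetic curve (stand-in for a mortality schedule)
rng = np.random.default_rng(0)
age = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * age) + 0.05 * rng.standard_normal(age.size)
smooth = pspline_fit(age, y, lam=0.1)
```

The two-dimensional extension replaces `B` with a tensor product of age and year bases and penalizes differences along both index directions.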
B-spline design of digital FIR filter using evolutionary computation techniques
Swain, Manorama; Panda, Rutuparna
2011-10-01
In the forthcoming era, digital filters are becoming a true replacement for analog filter designs. This paper examines a design method for FIR filters using global search optimization techniques known as evolutionary computation, via genetic algorithms and bacterial foraging, in which the filter design is treated as an optimization problem. An effort is made to design maximally flat filters using a generalized B-spline window. The key to our success is the fact that the bandwidth of the filter response can be modified by changing tuning parameters incorporated within the B-spline function. A direct approach has been deployed to design B-spline window based FIR digital filters. Four parameters (order, width, length and tuning parameter) have been optimized using GA and EBFS. It is observed that the desired response can be obtained with lower-order FIR filters with optimal width and tuning parameters.
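A fixed (non-optimized) B-spline-window FIR low-pass can be sketched as below; the tap count and cutoff are illustrative assumptions, and the evolutionary tuning of the window parameters described in the abstract is not reproduced here. The window shape is a cardinal cubic B-spline, i.e. the four-fold convolution of a rectangle.

```python
import numpy as np
from scipy.interpolate import BSpline

n_taps, cutoff = 51, 0.2   # cutoff as a fraction of the sampling rate
# Cardinal cubic B-spline supported on [0, 4] serves as the window shape
b = BSpline.basis_element(np.arange(5))
window = b(np.linspace(0.0, 4.0, n_taps))
n = np.arange(n_taps) - (n_taps - 1) / 2
h = window * 2 * cutoff * np.sinc(2 * cutoff * n)  # windowed ideal low-pass
h /= h.sum()                                       # unity gain at DC
```

Varying the spline order or the support width changes the window's main-lobe width, which is the kind of tuning parameter the paper optimizes with GA and bacterial foraging.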
Marginal longitudinal semiparametric regression via penalized splines
Al Kadiri, M.
2010-08-01
We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been made, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.
Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo
2018-04-01
In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular class of methods developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods in that they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea, with the primary aim of improving the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.
Gribov ambiguities at the Landau-maximal Abelian interpolating gauge
Energy Technology Data Exchange (ETDEWEB)
Pereira, Antonio D.; Sobreiro, Rodrigo F. [UFF-Universidade Federal Fluminense, Instituto de Fisica, Niteroi, RJ (Brazil)
2014-08-15
In a previous work, we presented a new method to account for the Gribov ambiguities in non-Abelian gauge theories. The method consists of introducing an extra constraint which directly eliminates the infinitesimal Gribov copies without the usual geometric approach. Such a strategy allows one to treat gauges with a non-hermitian Faddeev-Popov operator. In this work, we apply this method to a gauge which interpolates between the Landau and maximal Abelian gauges. The result is a local and power-counting renormalizable action, free of infinitesimal Gribov copies. Moreover, the interpolating tree-level gluon propagator is derived. (orig.)
Potential problems with interpolating fields
Energy Technology Data Exchange (ETDEWEB)
Birse, Michael C. [The University of Manchester, Theoretical Physics Division, School of Physics and Astronomy, Manchester (United Kingdom)
2017-11-15
A potential can have features that do not reflect the dynamics of the system it describes but rather arise from the choice of interpolating fields used to define it. This is illustrated using a toy model of scattering with two coupled channels. A Bethe-Salpeter amplitude is constructed which is a mixture of the waves in the two channels. The potential derived from this has a strong repulsive core, which arises from the admixture of the closed channel in the wave function and not from the dynamics of the model. (orig.)
Modeling and testing treated tumor growth using cubic smoothing splines.
Kong, Maiying; Yan, Jun
2011-07-01
Human tumor xenograft models are often used in preclinical studies to evaluate the therapeutic efficacy of a certain compound or a combination of certain compounds. In a typical human tumor xenograft model, human carcinoma cells are implanted into subjects such as severe combined immunodeficient (SCID) mice. Treatment with test compounds is initiated after a tumor nodule has appeared, and continued for a certain time period. Tumor volumes are measured over the duration of the experiment. It is well known that untreated tumor growth may follow certain patterns, which can be described by certain mathematical models. However, the growth patterns of treated tumors with multiple treatment episodes are quite complex, and the usefulness of parametric models is limited. We propose using cubic smoothing splines to describe tumor growth for each treatment group and for each subject, respectively. The proposed smoothing splines are quite flexible in modeling different growth patterns. In addition, using this procedure, we can obtain tumor growth and growth rate over time for each treatment group and for each subject, and examine whether tumor growth follows a certain growth pattern. To examine the overall treatment effect and group differences, scaled chi-squared test statistics based on the fitted group-level growth curves are proposed. A case study is provided to illustrate the application of this method, and simulations are carried out to examine the performance of the scaled chi-squared tests. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
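A cubic smoothing spline fit to one subject's tumor-volume series, with the growth rate read off from the spline's derivative, might look like the sketch below. The exponential growth model, noise level and smoothing factor are illustrative assumptions, not the paper's data or exact procedure.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

days = np.arange(0.0, 21.0)                     # measurement days
true_vol = 100.0 * np.exp(0.12 * days)          # synthetic growth pattern
volume = true_vol + np.random.default_rng(1).normal(0.0, 20.0, days.size)

# s ~ n * sigma^2 is a common choice for the smoothing condition
spline = UnivariateSpline(days, volume, k=3, s=days.size * 20.0**2)
fitted_volume = spline(days)             # smoothed growth curve
growth_rate = spline.derivative()(days)  # growth rate over time
```

Unlike a parametric exponential or Gompertz fit, the spline imposes no fixed functional form, which is what makes it usable for the complex post-treatment growth patterns the abstract describes.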
Quadratic vs cubic spline-wavelets for image representations and compression
P.C. Marais; E.H. Blake; A.A.M. Kuijk (Fons)
1997-01-01
The Wavelet Transform generates a sparse multi-scale signal representation which may be readily compressed. To implement such a scheme in hardware, one must have a computationally cheap method of computing the necessary transform data. The use of semi-orthogonal quadratic spline wavelets
Quadratic vs cubic spline-wavelets for image representation and compression
P.C. Marais; E.H. Blake; A.A.M. Kuijk (Fons)
1994-01-01
The Wavelet Transform generates a sparse multi-scale signal representation which may be readily compressed. To implement such a scheme in hardware, one must have a computationally cheap method of computing the necessary transform data. The use of semi-orthogonal quadratic spline wavelets
Importance of interpolation and coincidence errors in data fusion
Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana
2018-02-01
The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.
Resistor mesh model of a spherical head: part 1: applications to scalp potential interpolation.
Chauveau, N; Morucci, J P; Franceries, X; Celsis, P; Rigaud, B
2005-11-01
A resistor mesh model (RMM) has been implemented to describe the electrical properties of the head and the configuration of the intracerebral current sources by simulation of forward and inverse problems in electroencephalogram/event related potential (EEG/ERP) studies. For this study, the RMM representing the three basic tissues of the human head (brain, skull and scalp) was superimposed on a spherical volume mimicking the head volume: it included 43 102 resistances and 14 123 nodes. The validation was performed with reference to the analytical model by consideration of a set of four dipoles close to the cortex. Using the RMM and the chosen dipoles, four distinct families of interpolation technique (nearest neighbour, polynomial, splines and lead fields) were tested and compared so that the scalp potentials could be recovered from the electrode potentials. The 3D spline interpolation and the inverse forward technique (IFT) gave the best results. The IFT is very easy to use when the lead-field matrix between scalp electrodes and cortex nodes has been calculated. By simple application of the Moore-Penrose pseudo inverse matrix to the electrode cap potentials, a set of current sources on the cortex is obtained. Then, the forward problem using these cortex sources renders all the scalp potentials.
Optimal Approximation of Biquartic Polynomials by Bicubic Splines
Kačala, Viliam; Török, Csaba
2018-02-01
Recently an unexpected approximation property between polynomials of degree three and four was revealed within the framework of two-part approximation models in 2-norm, Chebyshev norm and Holladay seminorm. Namely, it was proved that if a two-component cubic Hermite spline's first derivative at the shared knot is computed from the first derivative of a quartic polynomial, then the spline is a clamped spline of class C2 and also the best approximant to the polynomial. Although it was known that a 2 × 2 component uniform bicubic Hermite spline is a clamped spline of class C2 if the derivatives at the shared knots are given by the first derivatives of a biquartic polynomial, the optimality of such approximation remained an open question. The goal of this paper is to resolve this problem. Unlike the spline curves, in the case of spline surfaces it is insufficient to suppose that the grid should be uniform and the spline derivatives computed from a biquartic polynomial. We show that the biquartic polynomial coefficients have to satisfy some additional constraints to achieve optimal approximation by bicubic splines.
Recursive B-spline approximation using the Kalman filter
Directory of Open Access Journals (Sweden)
Jens Jauch
2017-02-01
Full Text Available This paper proposes a novel recursive B-spline approximation (RBA) algorithm which approximates an unbounded number of data points with a B-spline function and achieves lower computational effort compared with previous algorithms. Conventional recursive algorithms based on the Kalman filter (KF) restrict the approximation to a bounded and predefined interval. Conversely, RBA includes a novel shift operation that enables shifting the estimated B-spline coefficients in the state vector of a KF. This makes it possible to adapt, during run-time, the interval in which the B-spline function can approximate data points.
Size-Dictionary Interpolation for Robot's Adjustment
Directory of Open Access Journals (Sweden)
Morteza eDaneshmand
2015-05-01
Full Text Available This paper describes the classification and size-dictionary interpolation of three-dimensional data obtained by a laser scanner for use in a realistic virtual fitting room. Automatic activation of the chosen mannequin robot, while several mannequin robots of different genders and sizes are simultaneously connected to the same computer, is also considered so that the robot mimics body shapes and sizes instantly. The classification process consists of two layers, dealing with gender and size, respectively. The interpolation procedure determines the set of positions of the biologically-inspired actuators that leads to the closest possible resemblance to the shape of the scanned person's body. It linearly maps the distances between subsequent size templates to the corresponding position sets of the bioengineered actuators and then calculates the control measures that maintain the same distance proportions, with the mathematical description given by minimizing the Euclidean distance between the size-dictionary template vectors and the vector of desired body sizes. The experimental results of implementing the proposed method on Fits.me's mannequin robots are visually illustrated, and the remaining steps towards completion of the whole realistic online fitting package are explained.
Welch, J. A.; Kópházi, J.; Owens, A. R.; Eaton, M. D.
2017-10-01
In this paper a method is presented for the application of energy-dependent spatial meshes applied to the multigroup, second-order, even-parity form of the neutron transport equation using Isogeometric Analysis (IGA). The computation of the inter-group regenerative source terms is based on conservative interpolation by Galerkin projection. The use of Non-Uniform Rational B-splines (NURBS) from the original computer-aided design (CAD) model allows for efficient implementation and calculation of the spatial projection operations while avoiding the complications of matching different geometric approximations faced by traditional finite element methods (FEM). The rate-of-convergence was verified using the method of manufactured solutions (MMS) and found to preserve the theoretical rates when interpolating between spatial meshes of different refinements. The scheme's numerical efficiency was then studied using a series of two-energy group pincell test cases where a significant saving in the number of degrees-of-freedom can be found if the energy group with a complex variation in the solution is refined more than an energy group with a simpler solution function. Finally, the method was applied to a heterogeneous, seven-group reactor pincell where the spatial meshes for each energy group were adaptively selected for refinement. It was observed that by refining selected energy groups a reduction in the total number of degrees-of-freedom for the same total L2 error can be obtained.
Energy Technology Data Exchange (ETDEWEB)
Ruberti, M.; Averbukh, V. [Department of Physics, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Decleva, P. [Dipartimento di Scienze Chimiche, Università di Trieste, Via Giorgieri 1, I-34127 Trieste (Italy)
2014-10-28
We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first order [ADC(1)] and second order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that the cross-section features that pose a challenge for Gaussian basis calculations, such as Cooper minima and high-energy tails, are reproduced by the B-spline ADC in very good agreement with experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.
National Oceanic and Atmospheric Administration, Department of Commerce — The document presents the methods, formulas and citations used by the BNDO to process physical, chemical, and biological data for deep hydrology including...
Evaluation of various interpolants available in DICE
Energy Technology Data Exchange (ETDEWEB)
Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reu, Phillip L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Crozier, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-02-01
This report evaluates several interpolants implemented in the Digital Image Correlation Engine (DICe), an image correlation software package developed by Sandia. By interpolants we refer to the basis functions used to represent discrete pixel intensity data as a continuous signal. Interpolation is used to determine intensity values in an image at non-pixel locations. It is also used, in some cases, to evaluate the x and y gradients of the image intensities. Intensity gradients subsequently guide the optimization process. The goal of this report is to inform analysts as to the characteristics of each interpolant and provide guidance towards the best interpolant for a given dataset. This work also serves as an initial verification of each of the interpolants implemented.
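The role of an interpolant, recovering intensity values at non-pixel locations, can be illustrated with SciPy's spline-based `map_coordinates`; the toy ramp image and the query point below are assumptions for illustration, not DICe code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

img = np.arange(25, dtype=float).reshape(5, 5)   # toy ramp: img[r, c] = 5r + c
coords = np.array([[1.5], [2.5]])                # non-pixel location (row, col)

nearest = map_coordinates(img, coords, order=0)   # nearest neighbour
bilinear = map_coordinates(img, coords, order=1)  # bilinear
cubic = map_coordinates(img, coords, order=3)     # cubic B-spline
print(bilinear)  # ramp is linear, so bilinear gives 5*1.5 + 2.5 = [10.]
```

Higher-order interpolants also give smoother intensity gradients, which matters when those gradients drive the correlation optimization described above.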
Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines
Barton, Michael
2015-10-24
We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with known optimal quadrature and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights, considered as a higher-dimensional point, are a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule gets updated using polynomial homotopy continuation. For example, starting with C¹ cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C² cubic spline spaces, where the rule was only conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors as well as of the source rule chosen.
Directory of Open Access Journals (Sweden)
Mingjian Sun
2015-01-01
Full Text Available Photoacoustic imaging is an innovative technique to image biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on a particle swarm optimization (PSO)-optimized support vector machine (SVM) interpolation method is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of time reversal algorithms based on nearest neighbor, linear, and cubic convolution interpolation, and that it provides higher imaging quality while using significantly fewer measurement positions or scanning times.
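The core idea of replacing a fixed interpolation rule with a learned regressor can be sketched with scikit-learn's SVR: fit sparse "sensor" samples, then evaluate the learned signal densely. The PSO step that tunes the hyperparameters is omitted; the values of C, gamma and epsilon below are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# Sparse measurement positions of a known test signal (stand-in for the
# pressure samples recorded at few scanning positions).
t_sparse = np.linspace(0.0, 1.0, 15)[:, None]
p_sparse = np.sin(2 * np.pi * t_sparse).ravel()

# RBF-kernel SVM regression; in the paper PSO would search C and gamma.
svr = SVR(kernel="rbf", C=100.0, gamma=10.0, epsilon=1e-3)
svr.fit(t_sparse, p_sparse)

# Dense evaluation: the SVM acts as the interpolator between samples.
t_dense = np.linspace(0.0, 1.0, 200)[:, None]
p_dense = svr.predict(t_dense)
```

The dense prediction tracks the underlying signal far better than the 15 raw samples alone would suggest, which is the property the time reversal reconstruction exploits.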
Shah, Mazlina Muzafar; Wahab, Abdul Fatah
2017-08-01
Epilepsy occurs when there is a temporary electrical disturbance in a group of brain cells (neurons). Electroencephalography (EEG) records the electrical signals of the human brain, collected from the scalp. The EEG, considered in digital format and in fuzzy form, constitutes fuzzy digital space data. The purpose of this research is to identify the area (curve and surface) in fuzzy digital space affected by an epileptic seizure in the patient's brain. The main focus of this research is to generalize the fuzzy topological digital space: its definition, basic operations and properties, using digital fuzzy sets and their operations. Using fuzzy digital space, the theory of digital fuzzy splines can be introduced to replace the grid data used previously and obtain better results. As a result, the flat of the EEG can be a fuzzy topological digital space, and this type of data can be used to interpolate the digital fuzzy spline.
Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.
Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong
2017-11-01
Medical image three-dimensional (3D) interpolation is an important means to improve the image effect in 3D reconstruction. In image processing, the time-frequency domain transform is an efficient tool. In this article, several time-frequency domain transform methods are applied and compared in 3D interpolation, and a Sobel edge detection and 3D matching interpolation method based on the wavelet transform is proposed. Our algorithm combines the wavelet transform, traditional matching interpolation methods, and Sobel edge detection. Moreover, the characteristics of the wavelet transform and the Sobel operator are exploited to process the sub-images of the wavelet decomposition separately: the Sobel edge detection 3D matching interpolation method is used on the low-frequency sub-images while ensuring that the high-frequency content remains undistorted. Through wavelet reconstruction, the target interpolation image is obtained. In this article, we perform 3D interpolation of real computed tomography (CT) images. Compared with other interpolation methods, our proposed method is verified to be effective and superior.
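Two of the building blocks named above can be sketched together: a one-level 2-D Haar wavelet decomposition (written out by hand for clarity; the article does not specify the wavelet) and Sobel edge detection applied to the low-frequency sub-image, leaving the high-frequency sub-bands untouched so reconstruction is lossless.

```python
import numpy as np
from scipy import ndimage

def haar2(im):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH sub-images."""
    a, b = im[0::2, 0::2], im[0::2, 1::2]
    c, d = im[1::2, 0::2], im[1::2, 1::2]
    return ((a + b + c + d) / 2, (a + b - c - d) / 2,
            (a - b + c - d) / 2, (a - b - c + d) / 2)

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2
    out[0::2, 1::2] = (LL + LH - HL - HH) / 2
    out[1::2, 0::2] = (LL - LH + HL - HH) / 2
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return out

# Synthetic "slice" with a smooth ramp and a sharp vertical edge.
img = np.fromfunction(lambda i, j: np.sin(0.2 * i) + (j > 32), (64, 64))
LL, LH, HL, HH = haar2(img)

# Sobel responses on the approximation sub-image locate the edges that a
# 3D matching interpolation would align between adjacent slices.
edges = np.hypot(ndimage.sobel(LL, axis=0), ndimage.sobel(LL, axis=1))

rec = ihaar2(LL, LH, HL, HH)   # untouched sub-bands => lossless round trip
```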
Integration by cell algorithm for Slater integrals in a spline basis
International Nuclear Information System (INIS)
Qiu, Y.; Fischer, C.F.
1999-01-01
An algorithm for evaluating Slater integrals in a B-spline basis is introduced. Based on the piecewise property of the B-splines, the algorithm divides the two-dimensional (r1, r2) region into a number of rectangular cells according to the chosen grid and implements the two-dimensional integration over each individual cell using Gaussian quadrature. Over the off-diagonal cells, the integrands are separable so that each two-dimensional cell-integral is reduced to a product of two one-dimensional integrals. Furthermore, the scaling invariance of the B-splines in the logarithmic region of the chosen grid is fully exploited such that only some of the cell integrations need to be implemented. The values of given Slater integrals are obtained by assembling the cell integrals. This algorithm significantly improves the efficiency and accuracy of the traditional method that relies on the solution of differential equations and renders the B-spline method more effective when applied to multi-electron atomic systems.
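The key saving on off-diagonal cells can be shown in a toy form: when the integrand factorizes as f(r1)g(r2) over a cell [a1,b1]x[a2,b2], the 2-D cell integral is just the product of two 1-D Gauss-Legendre integrals. The integrand below is illustrative, not a real orbital product.

```python
import numpy as np

def gauss1d(f, a, b, n=8):
    """1-D Gauss-Legendre quadrature of f over [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)       # affine map to [a, b]
    return xr * np.sum(w * f(xm + xr * x))

# Separable toy integrand f(r1) * g(r2) over the off-diagonal cell
# [1, 2] x [2, 3]; both factors have closed-form antiderivatives.
f1 = lambda r: r**2 * np.exp(-r)
f2 = lambda r: np.exp(-2.0 * r)

# The 2-D cell integral collapses to a product of two 1-D integrals.
cell = gauss1d(f1, 1.0, 2.0) * gauss1d(f2, 2.0, 3.0)
```

On diagonal cells the integrand does not factorize, which is why those cells need genuine two-dimensional quadrature in the algorithm.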
Trajectory control of an articulated robot with a parallel drive arm based on splines under tension
Yi, Seung-Jong
Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration to place the end-effector at the desired position and orientation within the workspace, and on following the specified path, which requires the trajectory planner. In addition, consideration of the joint velocity, acceleration, and jerk trajectories is essential for trajectory planning of industrial robots to obtain smooth operation. The newly designed 6 DOF articulated robot with a parallel drive arm mechanism, which permits the joint actuators to be placed in the same horizontal line to reduce the arm inertia and to increase load capacity and stiffness, is selected. First, the forward and inverse kinematic problems are examined. The forward kinematic equations are derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve fitting methods used in trajectory planning, i.e., polynomial functions of certain degree, cubic spline functions, and cubic spline functions under tension, are compared to select the method that best satisfies both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension are selected for the new trajectory planner. This method is implemented for a 6 DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. The approach is also compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular arc motion simulations. The new trajectory planner using cubic spline functions under tension is implemented into the microprocessor based robot controller and
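A minimal joint-trajectory sketch, using plain cubic splines (scipy does not expose a tension parameter; splines under tension add one that pulls the curve toward piecewise-linear as tension grows). Knot times and angles below are made-up values; clamped end conditions give zero velocity at start and stop.

```python
import numpy as np
from scipy.interpolate import CubicSpline

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # knot times (s), assumed
q = np.array([0.0, 30.0, 45.0, 30.0, 0.0])    # joint angles (deg), assumed

# Clamped boundary conditions: zero joint velocity at both ends, as a
# pick-and-place motion would require.
traj = CubicSpline(t, q, bc_type=((1, 0.0), (1, 0.0)))

ts = np.linspace(0.0, 4.0, 81)
pos = traj(ts)        # position
vel = traj(ts, 1)     # velocity (C1 continuous)
acc = traj(ts, 2)     # acceleration (continuous, since the spline is C2)
```

The planner's concern with jerk arises because even a C2 cubic spline has a discontinuous third derivative at the knots; tension splines trade some smoothness margin for reduced overshoot between knots.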
Discrete Sine Transform-Based Interpolation Filter for Video Compression
Directory of Open Access Journals (Sweden)
MyungJun Kim
2017-11-01
Full Text Available Fractional pixel motion compensation in high-efficiency video coding (HEVC) uses an 8-point filter and a 7-point filter, based on the discrete cosine transform (DCT), for the 1/2-pixel and 1/4-pixel interpolations, respectively. In this paper, discrete sine transform (DST)-based interpolation filters (DST-IFs) are proposed for fractional pixel motion compensation to improve coding efficiency. First, the performance of DST-IFs using 8-point and 7-point filters for the 1/2-pixel and 1/4-pixel interpolations is compared with that of the corresponding DCT-based IFs (DCT-IFs). Then, DST-IFs using 12-point and 11-point filters for the 1/2-pixel and 1/4-pixel interpolations, respectively, are proposed for bi-directional motion compensation only. The 8-point and 7-point DST-IF methods showed average Bjøntegaard Delta (BD)-rate reductions of 0.7% and 0.3% in the random access (RA) and low delay B (LDB) configurations, respectively, in HEVC. The 12-point and 11-point DST-IF methods showed average BD-rate reductions of 1.4% and 1.2% for the luma component in the RA and LDB configurations, respectively.
Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.
2013-09-01
One of the most significant tools for studying many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating Digital Terrain Models (DTM). The DTM has numerous applications in science, engineering, design and project administration. One of the most significant steps in the DTM technique is the interpolation of elevations to create a continuous surface. There are several interpolation methods, whose results vary with the environmental conditions and input data. The interpolation methods used in this study, consisting of polynomials and the Inverse Distance Weighting (IDW) method, have been optimised with Genetic Algorithms (GA). In this paper, Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of the Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can then be divided into smaller ones. The results of applying GA and ANN individually are compared with the typical interpolation methods for the creation of elevations. The results show that AI methods have a high potential in the interpolation of elevations: using neural network algorithms for the interpolation, and optimising the IDW method with GA, yielded highly precise elevation estimates.
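The baseline IDW interpolator the study optimises can be written in a few lines. A minimal sketch: the GA would tune parameters such as the power p (fixed to 2 here as an assumption) and a search radius (omitted).

```python
import numpy as np

def idw(xy_known, z_known, xy_query, p=2.0, eps=1e-12):
    """Inverse Distance Weighting: weighted mean with weights 1/d^p."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** p          # eps avoids division by zero at nodes
    return (w @ z_known) / w.sum(axis=1)

# Four surveyed elevations at the corners of a unit square (made-up values).
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
elev = np.array([10.0, 20.0, 30.0, 40.0])

# Query the cell center and one of the known nodes.
zi = idw(pts, elev, np.array([[0.5, 0.5], [0.0, 0.0]]))
```

By symmetry the center estimate is the mean of the four corners, and at a data point the huge weight makes IDW reproduce the measured value, the exactness property a DEM interpolator needs.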
About some properties of bivariate splines with shape parameters
Caliò, F.; Marchetti, E.
2017-07-01
The paper presents and proves geometrical properties of a particular bivariate spline function, built and algorithmically implemented in previous papers. The properties typical of this family of splines impact the field of computer graphics, in particular that of reverse engineering.
On some interpolation properties in locally convex spaces
Energy Technology Data Exchange (ETDEWEB)
Pater, Flavius [Department of Mathematics, Politehnica University of Timişoara, 300004 Timişoara (Romania)
2015-03-10
The aim of this paper is to introduce the notion of interpolation between locally convex spaces by the real method, and to present some elementary results in this setting. This generalizes the Banach-space framework to sequentially complete locally convex spaces, on which the operators considered are locally bounded.
Interpolation on sparse Gauss-Chebyshev grids in higher dimensions
F. Sprengel
1998-01-01
In this paper, we give a unified approach to error estimates for interpolation on sparse Gauss-Chebyshev grids for multivariate functions from Besov-type spaces with dominating mixed smoothness properties. The error bounds obtained for this method are almost optimal for the considered
Interpolation solution of the single-impurity Anderson model
International Nuclear Information System (INIS)
Kuzemsky, A.L.
1990-10-01
The dynamical properties of the single-impurity Anderson model (SIAM) are studied using a novel Irreducible Green's Function (IGF) method. A new solution for the one-particle GF, interpolating between the strong and weak correlation limits, is obtained. The unified concept of relevant mean-field renormalizations is indispensable in the strong correlation limit. (author). 21 refs
Genetic and environmental smoothing of lactation curves with cubic splines.
White, I M; Thompson, R; Brotherstone, S
1999-03-01
Most approaches to modeling lactation curves involve parametric curves with fixed or random coefficients. In either case, the resulting models require the specification of an underlying parametric curve. The fitting of splines represents a semiparametric approach to the problem. In the context of animal breeding, cubic smoothing splines are particularly convenient because they can be incorporated into a suitably constructed mixed model. The potential for the use of splines in modeling lactation curves is explored with a simple example, and the results are compared with those from a random regression model. The spline model provides greater flexibility at the cost of additional computation. Splines are shown to be capable of picking up features of the lactation curve that are missed by the random regression model.
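A cubic smoothing spline fit can be sketched on synthetic "lactation-like" data: a smooth rise and decline plus noise. This uses scipy rather than the mixed-model machinery of the paper; the smoothing factor s plays the role that the variance-ratio penalty plays in the mixed-model formulation, and its value here is an assumption (roughly n times the noise variance).

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
day = np.linspace(5, 305, 60)                        # days in milk
yield_true = 25 * (day / 50) * np.exp(1 - day / 50) + 10   # Wood-like curve
milk = yield_true + rng.normal(0.0, 0.5, day.size)   # noisy daily yields

# Cubic smoothing spline; s bounds the residual sum of squares, so
# s ~ n * sigma^2 is a common rule-of-thumb starting point.
fit = UnivariateSpline(day, milk, k=3, s=60 * 0.5**2)
smooth = fit(day)
```

Unlike a low-order parametric curve, the spline can follow local features (an early peak, a late flattening) without a pre-specified functional form, which is exactly the flexibility the abstract describes.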
Kriging interpolation in seismic attribute space applied to the South Arne Field, North Sea
DEFF Research Database (Denmark)
Hansen, Thomas Mejer; Mosegaard, Klaus; Schiøtt, Christian
2010-01-01
Seismic attributes can be used to guide interpolation in-between and extrapolation away from well log locations using for example linear regression, neural networks, and kriging. Kriging-based estimation methods (and most other types of interpolation/extrapolation techniques) are intimately linked...
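The kriging estimator underlying such methods can be sketched in 1-D. This is a bare-bones ordinary kriging with an assumed exponential covariance; in practice (and in the attribute-space setting of the paper) the covariance/variogram would be estimated from the data, and the sill and range below are made-up values.

```python
import numpy as np

def ok_predict(x, z, x0, sill=1.0, rng_len=2.0):
    """Ordinary kriging estimate at x0 from data (x, z), 1-D coordinates."""
    cov = lambda h: sill * np.exp(-np.abs(h) / rng_len)  # assumed model
    n = len(x)
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = cov(x[:, None] - x[None, :])
    K[n, :n] = K[:n, n] = 1.0        # Lagrange row/col: weights sum to 1
    K[n, n] = 0.0
    rhs = np.append(cov(x0 - x), 1.0)
    lam = np.linalg.solve(K, rhs)[:n]  # kriging weights
    return lam @ z

# "Well log" values at four locations (made-up), estimated in-between.
x = np.array([0.0, 1.0, 3.0, 4.0])
z = np.array([1.0, 2.0, 0.5, 1.5])
z0 = ok_predict(x, z, 2.0)
```

With no nugget effect the kriging surface honors the data exactly, which is why kriging is attractive for tying interpolation to well locations.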
Differential Interpolation Effects in Free Recall
Petrusic, William M.; Jamieson, Donald G.
1978-01-01
Attempts to determine whether a sufficiently demanding and difficult interpolated task (shadowing, i.e., repeating aloud) would decrease recall for earlier-presented items as well as for more recent items. Listening to music was included as a second interpolated task. Results support views that serial position effects reflect a single process.
Transfinite C2 interpolant over triangles
International Nuclear Information System (INIS)
Alfeld, P.; Barnhill, R.E.
1984-01-01
A transfinite C2 interpolant on a general triangle is created. The required data are essentially C2, no compatibility conditions arise, and the precision set includes all polynomials of degree less than or equal to eight. The symbol manipulation language REDUCE is used to derive the scheme. The scheme is discretized to two different finite-dimensional C2 interpolants in an appendix.
LOCALLY REFINED SPLINES REPRESENTATION FOR GEOSPATIAL BIG DATA
Directory of Open Access Journals (Sweden)
T. Dokken
2015-08-01
Full Text Available When viewed from distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation for topography and bathymetry. The recent introduction of Locally Refined B-Splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU FP7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing bathymetry of the smooth sea and ocean floor. A drastic reduction is demonstrated in the bulk of the data representation compared to the size of input point clouds. The representation is very well suited for exploiting the power of GPUs for visualization as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection as new sensor data can efficiently be compared to the compact LR B-spline representation.
Julkunen, Petro
2014-07-30
Navigated transcranial magnetic stimulation (nTMS) is used for locating and outlining cortical representation areas, e.g., of motor function and speech. At present there are no standard methods of measuring the size of the cortical representation areas mapped with nTMS. The aim was to compare four computation methods for estimating muscle representation size and location in nTMS studies. The motor cortex of six subjects was mapped to outline the motor cortical representation of hand muscles. Four methods for assessing cortical representation size in nTMS were compared: (1) the spline interpolation method; (2) the convex hull method, which outlines all positive motor responses; (3) the Voronoi tessellation method, which assigns a specific cortical area to each stimulus location; and (4) the average point-area method, which computes an average representation area for each stimulus under the assumption of evenly spaced stimulus locations, i.e., the use of a grid. All applied methods demonstrated good repeatability in measuring muscle representation size and location, while the spline interpolation and convex hull methods demonstrated systematically larger representation areas. The methods can be used to assess motor cortical muscle representation size and location with nTMS, e.g., to study cortical plasticity. Copyright © 2014 Elsevier B.V. All rights reserved.
Intensity-based hierarchical elastic registration using approximating splines.
Serifovic-Trbalic, Amira; Demirovic, Damir; Cattin, Philippe C
2014-01-01
We introduce a new hierarchical approach for elastic medical image registration using approximating splines. In order to obtain the dense deformation field, we employ Gaussian elastic body splines (GEBS) that incorporate anisotropic landmark errors and rotation information. Since the GEBS approach is based on a physical model in the form of analytical solutions of the Navier equation, it can cope very well with both the local and the global deformations present in the images by varying the standard deviation of the Gaussian forces. The proposed GEBS approximating model is integrated into the elastic hierarchical image registration framework, which decomposes a nonrigid registration problem into numerous local rigid transformations. The anisotropic landmark localization uncertainties can be estimated directly from the image data, in which case they represent the minimal stochastic localization error, i.e., the Cramér-Rao bound. The rotation information of each landmark obtained from the hierarchical procedure is transposed into an additional angular landmark, doubling the number of landmarks in the GEBS model. The modified hierarchical registration using the approximating GEBS model is applied to register 161 image pairs from a digital mammogram database. The obtained results are very encouraging: the proposed approach significantly improved all registrations in terms of mean-square error relative to an approximating TPS with rotation information. On artificially deformed breast images, the newly proposed method performed better than the state-of-the-art registration algorithm introduced by Rueckert et al. (IEEE Trans Med Imaging 18:712-721, 1999). The average error per breast tissue pixel was less than 2.23 pixels, compared to 2.46 pixels for Rueckert's method. The proposed hierarchical elastic image registration approach incorporates the GEBS
Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia
We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use a local smoothing kernel to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion have not been studied prior to this work, even in the independent data case.
An Improved Rotary Interpolation Based on FPGA
Directory of Open Access Journals (Sweden)
Mingyu Gao
2014-08-01
Full Text Available This paper presents an improved rotary interpolation algorithm, which consists of a standard curve interpolation module and a rotary process module. Compared to conventional rotary interpolation algorithms, the proposed algorithm is simpler and more efficient. The proposed algorithm was realized on an FPGA in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe, using a rotary ellipse and a rotary parabola as examples. According to the theoretical analysis and practical process validation, the algorithm has the following advantages: first, fewer arithmetic terms are needed for the interpolation operation; and second, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, which show that it is highly suited for real-time applications.
Kisaka, M. Oscar; Mucheru-Muna, M.; Ngetich, F. K.; Mugwe, J.; Mugendi, D.; Mairura, F.; Shisanya, C.; Makokha, G. L.
2016-04-01
Drier parts of Kenya's Central Highlands endure persistent crop failure and declining agricultural productivity. These have, in part, been attributed to high temperatures, prolonged dry spells and erratic rainfall. Understanding the spatial-temporal variability of climatic indices such as rainfall at the seasonal level is critical for optimal rain-fed agricultural productivity and natural resource management in the study area. However, the predominant setbacks in analysing hydro-meteorological events are occasioned by a lack of, inadequate, or inconsistent meteorological data. As in most other places, the sole sources of climatic data in the study region are scarce and limited to single stations, with persistent missing/unrecorded data making their utilization a challenge. This study examined seasonal anomalies and variability in rainfall, drought occurrence and the efficacy of interpolation techniques in the drier regions of eastern Kenya. Rainfall data from five stations (Machang'a, Kiritiri, Kiambere, Kindaruma and Embu) were sourced from both the Kenya Meteorology Department and on-site primary recording. Owing to ongoing experimental work, automated recording of primary dailies at Machang'a has been ongoing since the year 2000; thus, Machang'a was treated as the reference (period-of-record) station for the selection of other stations in the region. The other stations had data sets of over 15 years with missing data of less than 10 % as required by the World Meteorological Organization, whose quality check is subject to the Centre for Climate Systems Modeling (C2SM) through the MeteoSwiss and EMPA bodies. The dailies were also subjected to homogeneity testing to evaluate whether they came from the same population. The rainfall anomaly index, coefficients of variance and probability were utilized in the analyses of rainfall variability. Spline, kriging and inverse distance weighting interpolation techniques were assessed using daily rainfall data and
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
Matching interpolation of CT faulted images based on corresponding object
International Nuclear Information System (INIS)
Chen Lingna
2005-01-01
For the interpolation of CT faulted images, this paper presents a corresponding point matching interpolation algorithm based on object features. Compared with traditional interpolation algorithms, the new algorithm improves the visual effect and reduces the interpolation error. Computer experiments show that the algorithm can effectively improve the interpolation quality, with notably clearer scenes at object boundaries. (authors)
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, accounting for the intrinsic complexity of the data. We start with standard cubic spline regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compare cubic regression splines vis-à-vis linear piecewise splines, and with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercept and the slope of the linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed-effect model (AIC 19,352 vs. 19,598, respectively). While the regression parameters are more complex to interpret in the former, we argue that inference for any problem depends more on the estimated curve or differences in curves rather
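The fixed-effects building block of such models, a cubic regression spline, can be sketched via a truncated power basis fitted by least squares (the mixed-effect machinery, random intercepts/slopes and autocorrelation, is omitted; the growth curve and knot positions below are made-up).

```python
import numpy as np

def cubic_spline_basis(t, knots):
    """Design matrix: 1, t, t^2, t^3 plus one truncated cubic per knot."""
    cols = [np.ones_like(t), t, t**2, t**3]
    cols += [np.clip(t - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

age = np.linspace(0.0, 4.0, 50)            # years of follow-up
height = 50 + 20 * np.log1p(age)           # idealized growth curve (cm)

X = cubic_spline_basis(age, knots=[1.0, 2.0, 3.0])
beta, *_ = np.linalg.lstsq(X, height, rcond=None)
fitted = X @ beta                          # population-level spline fit
```

Each truncated cubic term bends the curve at its knot while keeping the fit C2 continuous, which is what lets the spline capture the fast early growth and later flattening without a single rigid parametric form.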
Lakestani, Mehrdad; Dehghan, Mehdi
2010-05-01
Two numerical techniques are presented for solving the Riccati differential equation. These methods use cubic B-spline scaling functions and Chebyshev cardinal functions. The methods consist of expanding the required approximate solution in terms of cubic B-spline scaling functions or Chebyshev cardinal functions. Using the operational matrix of the derivative, we reduce the problem to a set of algebraic equations. Some numerical examples are included to demonstrate the validity and applicability of the new techniques. The methods are easy to implement and produce very accurate results.
Construction of fractal surfaces by recurrent fractal interpolation curves
International Nuclear Information System (INIS)
Yun, Chol-hui; O, Hyong-chol; Choi, Hui-chol
2014-01-01
A method to construct fractal surfaces by recurrent fractal curves is provided. First we construct fractal interpolation curves using a recurrent iterated function system (RIFS) with function scaling factors and estimate their box-counting dimension. Then we present a method for constructing a wider class of fractal surfaces from fractal curves and Lipschitz functions, and calculate the box-counting dimension of the constructed surfaces. Finally, we combine both methods to obtain more flexible constructions of fractal surfaces.
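The non-recurrent special case, a classical fractal interpolation curve with a constant vertical scaling factor d instead of the paper's function scaling factors, can be sketched with the chaos game. The interpolation points and d below are made-up values.

```python
import numpy as np

# Interpolation points the fractal curve must pass through (assumed data).
pts = np.array([[0.0, 0.0], [0.4, 0.5], [0.7, 0.2], [1.0, 1.0]])
d = 0.3                                   # vertical scaling, |d| < 1
x, y = pts[:, 0], pts[:, 1]
x0, xN, y0, yN = x[0], x[-1], y[0], y[-1]
N = len(pts) - 1

# Affine maps w_i(t, z) = (a_i t + e_i, c_i t + d z + f_i), each sending the
# whole graph onto the piece between consecutive interpolation points.
a = (x[1:] - x[:-1]) / (xN - x0)
e = (xN * x[:-1] - x0 * x[1:]) / (xN - x0)
c = (y[1:] - y[:-1] - d * (yN - y0)) / (xN - x0)
f = (xN * y[:-1] - x0 * y[1:] - d * (xN * y0 - x0 * yN)) / (xN - x0)

# Chaos game: iterating randomly chosen maps samples the attractor, which
# is the graph of the fractal interpolation function.
rng = np.random.default_rng(0)
t, z, samples = 0.0, 0.0, []
for _ in range(20000):
    i = rng.integers(N)
    t, z = a[i] * t + e[i], c[i] * t + d * z + f[i]
    samples.append((t, z))
curve = np.array(samples[100:])           # drop the initial transient
```

The coefficient formulas enforce the endpoint conditions w_i(x0, y0) = (x_i, y_i) and w_i(xN, yN) = (x_{i+1}, y_{i+1}); the RIFS of the paper generalizes this by letting each map act on a sub-interval and by making d a function.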
Discrete quintic spline for boundary value problem in plate deflection theory
Wong, Patricia J. Y.
2017-07-01
We propose a numerical scheme for a fourth-order boundary value problem arising from plate deflection theory. The scheme involves a discrete quintic spline; it is of order 4 if a parameter takes a specific value, and of order 2 otherwise. We also present a well-known numerical example to illustrate the efficiency of our method and to compare with other numerical methods proposed in the literature.
Spline based iterative phase retrieval algorithm for X-ray differential phase contrast radiography.
Nilchian, Masih; Wang, Zhentian; Thuering, Thomas; Unser, Michael; Stampanoni, Marco
2015-04-20
Differential phase contrast imaging using a grating interferometer is a promising alternative to conventional X-ray radiographic methods. It provides the absorption, differential phase and scattering information of the underlying sample simultaneously. Phase retrieval from the differential phase signal is an essential problem for quantitative analysis in medical imaging. In this paper, we formalize the phase retrieval as a regularized inverse problem, and propose a novel discretization scheme for the derivative operator based on B-spline calculus. The inverse problem is then solved by a constrained regularized weighted-norm algorithm (CRWN) which exploits the properties of B-splines and ensures a fast implementation. The method is evaluated with a tomographic dataset and differential phase contrast mammography data. We demonstrate that the proposed method is able to produce phase images with enhanced and higher soft-tissue contrast compared to the conventional absorption-based approach, which can potentially provide useful information for mammographic investigations.
Finite nucleus Dirac mean field theory and random phase approximation using finite B splines
International Nuclear Information System (INIS)
McNeil, J.A.; Furnstahl, R.J.; Rost, E.; Shepard, J.R.; Department of Physics, University of Maryland, College Park, Maryland 20742; Department of Physics, University of Colorado, Boulder, Colorado 80309)
1989-01-01
We calculate the finite nucleus Dirac mean field spectrum in a Galerkin approach using finite basis splines. We review the method and present results for the relativistic σ-ω model for the closed-shell nuclei ¹⁶O and ⁴⁰Ca. We study the convergence of the method as a function of the size of the basis and the closure properties of the spectrum using an energy-weighted dipole sum rule. We apply the method to the Dirac random-phase-approximation response and present results for the isoscalar 1⁻ and 3⁻ longitudinal form factors of ¹⁶O and ⁴⁰Ca. We also use a B-spline spectral representation of the positive-energy projector to evaluate partial energy-weighted sum rules and compare with nonrelativistic sum rule results
Clustering metagenomic sequences with interpolated Markov models
Directory of Open Access Journals (Sweden)
Kelley David R
2010-11-01
Full Text Available Abstract Background Sequencing of environmental DNA (often called metagenomics) has shown tremendous potential to uncover the vast number of unknown microbes that cannot be cultured and sequenced by traditional methods. Because the output from metagenomic sequencing is a large set of reads of unknown origin, clustering reads together that were sequenced from the same species is a crucial analysis step. Many effective approaches to this task rely on sequenced genomes in public databases, but these genomes are a highly biased sample that is not necessarily representative of environments interesting to many metagenomics projects. Results We present SCIMM (Sequence Clustering with Interpolated Markov Models), an unsupervised sequence clustering method. SCIMM achieves greater clustering accuracy than previous unsupervised approaches. We examine the limitations of unsupervised learning on complex datasets, and suggest a hybrid of SCIMM and the supervised learning method Phymm, called PHYSCIMM, that performs better when evolutionarily close training genomes are available. Conclusions SCIMM and PHYSCIMM are highly accurate methods to cluster metagenomic sequences. SCIMM operates entirely unsupervised, making it ideal for environments containing mostly novel microbes. PHYSCIMM uses supervised learning to improve clustering in environments containing microbial strains from well-characterized genera. SCIMM and PHYSCIMM are available open source from http://www.cbcb.umd.edu/software/scimm.
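The interpolated Markov model at the heart of such clustering can be sketched in toy form: the probability of the next base is a convex mixture of order-0 to order-k Markov estimates. Real IMMs (e.g. in Glimmer/Phymm) choose the mixture weights from context counts; the fixed weights and Laplace smoothing below are simplifying assumptions.

```python
from collections import defaultdict

def train_counts(seq, k):
    """Count base occurrences after every context of length 0..k."""
    counts = [defaultdict(lambda: defaultdict(int)) for _ in range(k + 1)]
    for order in range(k + 1):
        for i in range(order, len(seq)):
            counts[order][seq[i - order:i]][seq[i]] += 1
    return counts

def imm_prob(counts, context, base, weights):
    """Interpolated probability of `base` given `context` (weights sum to 1)."""
    p = 0.0
    for order, w in enumerate(weights):
        ctx = context[len(context) - order:] if order else ""
        c = counts[order][ctx]
        tot = sum(c.values())
        p += w * (c[base] + 1) / (tot + 4)   # Laplace-smoothed estimate
    return p

seq = "ACGTACGTACGA"
counts = train_counts(seq, 2)
weights = [0.2, 0.3, 0.5]                    # assumed mixture weights
p = imm_prob(counts, "CG", "T", weights)
```

Because each per-order estimate is a proper distribution over {A, C, G, T} and the weights sum to one, the interpolated scores are themselves a distribution, so reads can be assigned to the cluster whose IMM gives them the highest likelihood.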
The Grand Tour via Geodesic Interpolation of 2-frames
Asimov, Daniel; Buja, Andreas
1994-01-01
Grand tours are a class of methods for visualizing multivariate data, or any finite set of points in n-space. The idea is to create an animation of data projections by moving a 2-dimensional projection plane through n-space. The path of planes used in the animation is chosen so that it becomes dense, that is, it comes arbitrarily close to any plane. One of the original inspirations for the grand tour was the experience of trying to comprehend an abstract sculpture in a museum. One tends to walk around the sculpture, viewing it from many different angles. A useful class of grand tours is based on the idea of continuously interpolating an infinite sequence of randomly chosen planes. Visiting randomly (more precisely: uniformly) distributed planes guarantees denseness of the interpolating path. In computer implementations, 2-dimensional orthogonal projections are specified by two 1-dimensional projections which map to the horizontal and vertical screen dimensions, respectively. Hence, a grand tour is specified by a path of pairs of orthonormal projection vectors. This paper describes an interpolation scheme for smoothly connecting two pairs of orthonormal vectors, and thus for constructing interpolating grand tours. The scheme is optimal in the sense that connecting paths are geodesics in a natural Riemannian geometry.
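The geodesic between two 2-planes can be sketched via principal angles and vectors, a standard Grassmannian construction of the kind the paper builds on; the frames and dimension below are illustrative, and this is only an approximation of the authors' full interpolation scheme:

```python
import numpy as np

def geodesic_frames(A, B, s):
    """Interpolate between the 2-planes spanned by orthonormal n x 2 frames
    A and B; returns an orthonormal frame spanning the plane at s in [0,1].
    Sketch via principal angles/vectors (not the paper's exact construction)."""
    U, cos_theta, Vt = np.linalg.svd(A.T @ B)   # principal angles
    PA = A @ U                                  # principal frame in plane A
    PB = B @ Vt.T                               # principal frame in plane B
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    F = np.empty_like(PA)
    for j in range(2):
        a, b = PA[:, j], PB[:, j]
        if theta[j] < 1e-12:
            F[:, j] = a                         # directions already aligned
        else:
            w = b - a * (a @ b)                 # component of b orthogonal to a
            w /= np.linalg.norm(w)
            F[:, j] = np.cos(s * theta[j]) * a + np.sin(s * theta[j]) * w
    return F

A = np.eye(4)[:, :2]                     # (e1, e2) plane in R^4
B = np.eye(4)[:, 2:4]                    # (e3, e4) plane
mid = geodesic_frames(A, B, 0.5)
assert np.allclose(mid.T @ mid, np.eye(2))  # intermediate frame stays orthonormal
```

Sampling the parameter s densely yields the smooth sequence of projection planes that drives the animation.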
Interpol: An R package for preprocessing of protein sequences
Directory of Open Access Journals (Sweden)
Heider Dominik
2011-06-01
Full Text Available Abstract Background Most machine learning techniques currently applied in the literature need a fixed dimensionality of input data. However, this requirement is frequently violated by real input data, such as DNA and protein sequences, that often differ in length due to insertions and deletions. It is also notable that performance in classification and regression is often improved by numerical encoding of amino acids, compared to the commonly used sparse encoding. Results The software "Interpol" encodes amino acid sequences as numerical descriptor vectors using a database of currently 532 descriptors (mainly from AAindex), and normalizes sequences to uniform length with one of five linear or non-linear interpolation algorithms. Interpol is distributed open source as a platform-independent R package. It is typically used for preprocessing of amino acid sequences for classification or regression. Conclusions The functionality of Interpol widens the spectrum of machine learning methods that can be applied to biological sequences, and it will in many cases improve their performance in classification and regression.
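A minimal sketch of the preprocessing idea, not Interpol's actual R code: encode each residue numerically (here a few Kyte-Doolittle-style hydropathy values; the function name and the choice of descriptor are illustrative) and linearly interpolate to a fixed length:

```python
import numpy as np

# Hypothetical stand-in for one AAindex descriptor (hydropathy, abbreviated
# to a handful of residues).
hydropathy = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'L': 3.8, 'K': -3.9}

def encode_and_stretch(seq, target_len):
    """Numerically encode a sequence, then interpolate to a uniform length."""
    vals = np.array([hydropathy[aa] for aa in seq], dtype=float)
    x_old = np.linspace(0.0, 1.0, len(vals))   # original positions on [0, 1]
    x_new = np.linspace(0.0, 1.0, target_len)  # target positions on [0, 1]
    return np.interp(x_new, x_old, vals)

short = encode_and_stretch("ARND", 8)
long_ = encode_and_stretch("ARNDLKARND", 8)
assert short.shape == long_.shape == (8,)     # uniform dimensionality
```

Sequences of different lengths now map to descriptor vectors of identical dimension, which is exactly what fixed-input classifiers and regressors require.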
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before estimating the amount of precipitation separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in a daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
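The two-step idea can be caricatured as follows; the logistic coefficients are hypothetical stand-ins for a fitted occurrence model, not values from the study:

```python
import numpy as np

# Caricature of the two-step scheme at an ungauged grid point: step 1 decides
# occurrence via a logistic model (b0, b1 are hypothetical stand-ins for
# fitted coefficients); step 2 estimates the amount from wet neighbors only.
def two_step_estimate(neighbor_precip, b0=-2.0, b1=6.0, threshold=0.5):
    wet = neighbor_precip > 0
    p_wet = 1.0 / (1.0 + np.exp(-(b0 + b1 * wet.mean())))   # occurrence prob.
    if p_wet < threshold or not wet.any():
        return 0.0                                          # dry day
    return float(neighbor_precip[wet].mean())               # wet-day amount

assert two_step_estimate(np.array([0.0, 0.0, 0.0, 0.0])) == 0.0
assert two_step_estimate(np.array([5.0, 3.0, 0.0, 4.0])) == 4.0
```

Separating occurrence from amount keeps dry days at exactly zero instead of smearing small positive values across the grid, which is the main failure mode of direct interpolation.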
Interpolation Algorithm and Mathematical Model in Automated Welding of Saddle-Shaped Weld
Directory of Open Access Journals (Sweden)
Lianghao Xue
2018-01-01
Full Text Available This paper presents a welding torch pose model and an interpolation algorithm for trajectory control of the saddle-shaped weld formed by the intersection of two pipes; the working principle, interpolation algorithm, welding experiment, and simulation results of the automatic welding system for the saddle-shaped weld are described. A variable-angle interpolation method is used to control the trajectory and pose of the welding torch, which guarantees a constant terminal linear velocity. The mathematical models of the trajectory and pose of the welding torch are established. Simulation and experiment have been carried out to verify the effectiveness of the proposed algorithm and mathematical model. The results demonstrate that the interpolation algorithm meets the interpolation requirements of the saddle-shaped weld and achieves ideal feed-rate stability.
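A sketch of the saddle curve and of variable-angle stepping (the pipe radii, step length, and function names are made-up illustrations, not the authors' controller): the intersection of two perpendicular cylinders is parametrized by the branch angle, and the angle increment is varied so the chord traversed per step stays constant.

```python
import numpy as np

# Saddle curve: intersection of a branch pipe (radius r, axis along z) with a
# main pipe (radius R, axis along y), parametrized by the branch angle theta:
#   P(theta) = (r cos t, r sin t, sqrt(R^2 - r^2 cos^2 t))
def saddle_point(theta, r=50.0, R=100.0):
    x = r * np.cos(theta)
    y = r * np.sin(theta)
    z = np.sqrt(R**2 - x**2)
    return np.array([x, y, z])

# Variable-angle stepping: choose each angle increment so the chord length
# per time step (hence the terminal linear velocity) stays constant.
def variable_angle_steps(step_len=1.0, r=50.0, R=100.0):
    thetas = [0.0]
    while thetas[-1] < 2 * np.pi:
        th = thetas[-1]
        p0 = saddle_point(th, r, R)
        p1 = saddle_point(th + 1e-6, r, R)
        speed = np.linalg.norm(p1 - p0) / 1e-6   # local |dP/dtheta|
        thetas.append(th + step_len / speed)
    return np.array(thetas)

thetas = variable_angle_steps()
pts = np.array([saddle_point(t) for t in thetas])
seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
assert np.allclose(seg, 1.0, atol=0.01)          # near-constant feed per step
```

A fixed angle increment would instead speed the torch up where the curve climbs steeply, which is why the variable-angle scheme is needed.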
Spatiotemporal video deinterlacing using control grid interpolation
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
Record of the Month: Interpol "Antics". Records from the Lasering shop
2005-01-01
On the records: Interpol "Antics", Scooter "Mind the Gap", Slide-Fifty "The Way Ahead", Psyhhoterror "Freddy, löö esimesena!", Riho Sibul "Must", Bossacucanova "Uma Batida Diferente", "Biscantorat - Sound of the spirit from Glenstal Abbey"
Interpol held a brainstorming session / Allan Espenberg
Espenberg, Allan
2008-01-01
The world's criminal police specialists gathered in Russia to elect a new leadership for the international criminal police organization Interpol and to set out its short- and long-term tasks
NOAA Optimum Interpolation (OI) SST V2
National Oceanic and Atmospheric Administration, Department of Commerce — The optimum interpolation (OI) sea surface temperature (SST) analysis is produced weekly on a one-degree grid. The analysis uses in situ and satellite SST's plus...
Directory of Open Access Journals (Sweden)
Joan Goh
Full Text Available Over the last few decades, cubic splines have been widely used to approximate differential equations due to their ability to produce highly accurate solutions. In this paper, the numerical solution of a two-dimensional elliptic partial differential equation is treated by a specific cubic spline approximation in the x-direction and finite differences in the y-direction. A four-point explicit group (EG) iterative scheme with an acceleration tool is then applied to the obtained system. The formulation and implementation of the method for solving physical problems are presented in detail. The computational complexity is also discussed, and comparative results are tabulated to illustrate the efficiency of the proposed method.
Survival estimation through the cumulative hazard function with monotone natural cubic splines.
Bantis, Leonidas E; Tsimikas, John V; Georgiou, Stelios D
2012-07-01
In this paper we explore the estimation of survival probabilities via a smoothed version of the survival function, in the presence of censoring. We investigate the fit of a natural cubic spline on the cumulative hazard function under appropriate constraints. Under the proposed technique the problem reduces to a restricted least squares one, leading to convex optimization. The approach taken in this paper is evaluated and compared via simulations to other known methods such as the Kaplan-Meier and logspline estimators. Our approach is easily extended to address estimation of survival probabilities in the presence of covariates when the proportional hazards model assumption holds. In this case the method is compared to a restricted cubic spline approach that involves maximum likelihood. The proposed approach can also be adjusted to accommodate left censoring.
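The cumulative-hazard route to smooth survival estimates can be illustrated with a monotone interpolant. Note the paper fits natural cubic splines by constrained least squares; this sketch substitutes SciPy's monotone PCHIP interpolant for simplicity, and the event times and risk-set sizes are invented:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Smooth the Nelson-Aalen cumulative hazard H(t) with a monotone interpolant
# and read survival off S(t) = exp(-H(t)).  PCHIP preserves monotonicity.
event_times = np.array([2.0, 3.0, 5.0, 7.0, 11.0])
at_risk = np.array([10, 9, 8, 6, 4])     # risk-set sizes at each event time
H = np.cumsum(1.0 / at_risk)             # Nelson-Aalen estimate (one event each)
H_smooth = PchipInterpolator(event_times, H)

t = np.linspace(2.0, 11.0, 200)
S = np.exp(-H_smooth(t))
assert np.all(np.diff(S) <= 1e-12)       # survival is non-increasing
```

Constraining the spline on H(t) rather than on S(t) directly is what turns monotonicity into a simple sign constraint, which is the source of the convexity the abstract mentions.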
Investigation of confined hydrogen atom in spherical cavity, using B-splines basis set
Directory of Open Access Journals (Sweden)
M Barezi
2011-03-01
Full Text Available Studying confined quantum systems (CQS) is very important in nanotechnology. One of the basic CQS is a hydrogen atom confined in a spherical cavity. In this article, eigenenergies and eigenfunctions of the hydrogen atom in a spherical cavity are calculated using the linear variational method. B-splines are used as basis functions, which can easily construct trial wave functions with appropriate boundary conditions. The main characteristics of B-splines are their high localization and flexibility. Besides, these functions are numerically stable and can handle a large volume of calculation with good accuracy. The energy levels as a function of cavity radius are analyzed. To check the validity and efficiency of the proposed method, an extensive convergence test of the eigenenergies in different cavity sizes has been carried out.
Identification of Hammerstein models with cubic spline nonlinearities.
Dempsey, Erika J; Westwick, David T
2004-02-01
This paper considers the use of cubic splines, instead of polynomials, to represent the static nonlinearities in block structured models. It introduces a system identification algorithm for the Hammerstein structure, a static nonlinearity followed by a linear filter, where cubic splines represent the static nonlinearity and the linear dynamics are modeled using a finite impulse response filter. The algorithm uses a separable least squares Levenberg-Marquardt optimization to identify Hammerstein cascades whose nonlinearities are modeled by either cubic splines or polynomials. These algorithms are compared in simulation, where the effects of variations in the input spectrum and distribution, and those of the measurement noise are examined. The two algorithms are used to fit Hammerstein models to stretch reflex electromyogram (EMG) data recorded from a spinal cord injured patient. The model with the cubic spline nonlinearity provides more accurate predictions of the reflex EMG than the polynomial based model, even in novel data.
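The Hammerstein structure is easy to simulate. The sketch below identifies a toy cubic-plus-FIR cascade by an over-parametrized linear regression rather than the paper's separable Levenberg-Marquardt scheme, and the system coefficients are invented:

```python
import numpy as np
rng = np.random.default_rng(1)

# Toy Hammerstein cascade: static cubic nonlinearity followed by an FIR filter.
h = np.array([1.0, 0.5, 0.25])                 # FIR impulse response
def hammerstein(u):
    w = u - 0.3 * u**3                         # static nonlinearity
    return np.convolve(w, h)[:len(u)]          # causal FIR, zero initial state

u = rng.standard_normal(500)
y = hammerstein(u)

# Over-parametrized linear identification: each (delay, power) pair gets one
# regressor, so the cascade becomes linear in the unknown coefficients.
cols = []
for d in range(3):                             # candidate FIR delays
    for p in (1, 3):                           # candidate polynomial powers
        c = np.roll(u, d) ** p
        c[:d] = 0.0                            # honor zero initial state
        cols.append(c)
X = np.stack(cols, axis=1)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

assert np.allclose(X @ coef, y, atol=1e-8)     # exact model class -> exact fit
```

With a spline nonlinearity, as in the paper, the product parametrization is no longer linear in all unknowns, which is what motivates the separable least squares optimization.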
Interpolation for a subclass of H∞
Indian Academy of Sciences (India)
|g(z_m)| ≤ c |z_m − z*_m|, ∀m ∈ ℕ. Thus it is natural to pose the following interpolation problem for H∞: DEFINITION 4. We say that (z_n) is an interpolating sequence in the weak sense for H∞ if, given any sequence of complex numbers (λ_n) verifying |λ_n| ≤ c ψ(z_n, z*_n) |z_n − z*_n|, ∀n ∈ ℕ, (4) there exists a product fg ∈ H∞.
Decomposition of LiDAR waveforms by B-spline-based modeling
Shen, Xiang; Li, Qing-Quan; Wu, Guofeng; Zhu, Jiasong
2017-06-01
the extracted echo parameters were clearly inaccurate and unreliable. The B-spline-based method performed significantly better than the Gaussian and lognormal models by reducing 45.5% and 11.5% of their fitting errors, respectively. Much more precise echo properties can accordingly be retrieved with a high probability. Benefiting from the flexibility of B-splines on fitting arbitrary curves, the new method has the potentiality for accurately modeling various full-waveform LiDAR data, whether they are nearly Gaussian or non-Gaussian in shape.
Segmented Regression Based on B-Splines with Solved Examples
Directory of Open Access Journals (Sweden)
Miloš Kaňka
2015-12-01
Full Text Available The subject of the paper is segmented linear, quadratic, and cubic regression based on B-spline basis functions. In this article we present the formulas for the computation of B-splines of order one, two, and three that are needed to construct linear, quadratic, and cubic regression. We list some interesting properties of these functions. For a clearer understanding we give the solutions of a couple of elementary exercises regarding these functions.
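The order-one/two/three basis functions discussed above follow the Cox-de Boor recursion, which can be written directly (the knot vector and evaluation point below are illustrative):

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: B-spline basis function B_{i,k} of order k
    (degree k-1) on knot vector t, evaluated at x."""
    if k == 1:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k - 1] > t[i]:
        left = (x - t[i]) / (t[i + k - 1] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k] > t[i + 1]:
        right = (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

knots = np.arange(8.0)       # uniform knots 0..7
x = 2.5
# orders 1 (piecewise constant), 2 (linear), 3 (quadratic), as in the paper
vals = {k: sum(bspline_basis(i, k, knots, x) for i in range(len(knots) - k))
        for k in (1, 2, 3)}
assert all(abs(v - 1.0) < 1e-12 for v in vals.values())  # partition of unity
```

Segmented regression then fits the response as a linear combination of these basis functions by ordinary least squares.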
A fourth order spline collocation approach for a business cycle model
Sayfy, A.; Khoury, S.; Ibdah, H.
2013-10-01
A collocation approach based on fourth-order cubic B-splines is presented for the numerical solution of a Kaleckian business cycle model formulated by a nonlinear delay differential equation. The equation is approximated and the nonlinearity is handled by employing an iterative scheme arising from Newton's method. It is shown that the model exhibits a conditionally stable dynamical cycle. The fourth-order rate of convergence of the scheme is verified numerically for different special cases.
Directory of Open Access Journals (Sweden)
Giselle Sabadim Saraiva
2017-03-01
Full Text Available In the state of Espírito Santo, Brazil, the importance of agriculture to the state economy is well known, and several crops have been irrigated because of the irregular distribution of rainfall. The daily reference evapotranspiration (ETo) is an important variable in irrigation management, making it possible to quantify the water demand of a crop and region. This study aimed to compare interpolation methods for the spatial distribution of daily ETo. The study covered the state of Espírito Santo, Brazil, with a total area of 46,184.1 km². Fifteen automatic meteorological stations were selected as the basis for interpolation and ten for cross-validation. The daily data analyzed were from the period 2010-2012, using three interpolation methods: the Kriging geostatistical method and the deterministic methods inverse distance squared (IQD) and tensioned spline. Three sets of data were interpolated. The IQD interpolator presented the best performance among the three methods, showing the lowest deviation and variation from the ETo values estimated by the Penman-Monteith method. The IQD interpolator thus proved to be a good method for estimating daily reference evapotranspiration in places that do not have weather stations installed. The ETo values obtained by the IQD interpolation method can be used with confidence in irrigation management. = In the state of Espírito Santo the prominence of agriculture in its economy is well known, but, owing to the irregular distribution of rainfall, crops have been irrigated, and irrigation management has therefore become important for crop production. The daily reference evapotranspiration (ETo) is an important variable in irrigation management, making it possible to quantify the water demand of a crop and region. The objective of this work was to compare interpolation methods for the spatialization of daily ETo. The study area was the
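The IQD (inverse distance squared) interpolator compared above can be sketched in a few lines; the station coordinates and ETo values below are invented for illustration:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """Inverse-distance-weighted interpolation of station values (IQD when
    power=2): nearer stations receive larger weights."""
    d = np.linalg.norm(xy_obs - xy_new, axis=1)
    if np.any(d == 0):                        # exact hit on a station
        return z_obs[np.argmin(d)]
    w = 1.0 / d**power
    return float(np.sum(w * z_obs) / np.sum(w))

# four stations with daily ETo (mm/day, illustrative values)
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
eto = np.array([3.2, 3.8, 4.0, 4.6])
center = idw(stations, eto, np.array([0.5, 0.5]))
assert abs(center - eto.mean()) < 1e-12       # equidistant -> simple average
```

Cross-validation then leaves one station out, predicts it from the others, and compares against the Penman-Monteith estimate at that station.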
Directory of Open Access Journals (Sweden)
Pengyun Chen
2014-01-01
Full Text Available The interpolation-reconstruction of local underwater terrain using the underwater digital terrain map (UDTM) is an important step for building an underwater terrain matching unit and directly affects the accuracy of underwater terrain matching navigation. The Kriging method is often used in terrain interpolation, but, with this method, the local terrain features are often lost, so the accuracy cannot meet the requirements of practical application. Analysis of the geographical features is performed on the basis of the randomness and self-similarity of underwater terrain. We extract the fractal features of local underwater terrain with the fractional Brownian motion model, compensating for the possible errors of the Kriging method with fractal theory. We then put forward an improved Kriging interpolation method based on this fractal compensation. Interpolation-reconstruction tests show that the method can simulate the real underwater terrain features well and that it has good usability.
Curvature-Continuous 3D Path-Planning Using QPMI Method
Directory of Open Access Journals (Sweden)
Seong-Ryong Chang
2015-06-01
Full Text Available It is impossible to achieve vertex movement and rapid velocity control in aerial robots and aerial vehicles because of momentum from the air. A continuous-curvature path ensures such robots and vehicles can fly with stable and continuous movements. General continuous path-planning methods use spline interpolation, for example B-spline and Bézier curves. However, these methods cannot be directly applied to continuous path planning in a 3D space. These methods use a subset of the waypoints to decide curvature and some waypoints are not included in the planned path. This paper proposes a method for constructing a curvature-continuous path in 3D space that includes every waypoint. The movements in each axis, x, y and z, are separated by the parameter u. Waypoint groups are formed, each with its own continuous path derived using quadratic polynomial interpolation. The membership function then combines each continuous path into one continuous path. The continuity of the path is verified and the curvature-continuous path is produced using the proposed method.
Energy Technology Data Exchange (ETDEWEB)
Araujo, Carlos Eduardo S. [Universidade Federal de Campina Grande, PB (Brazil). Programa de Recursos Humanos 25 da ANP]. E-mail: carlos@dme.ufcg.edu.br; Silva, Rosana M. da [Universidade Federal de Campina Grande, PB (Brazil). Dept. de Matematica e Estatistica]. E-mail: rosana@dme.ufcg.edu.br
2004-07-01
This work presents an implementation of a synthetic model of a channel found in oil reservoirs. The generation of these models is one of the steps in the characterization and simulation of equiprobable three-dimensional geological scenarios. The implemented model was obtained by fitting techniques of geometric modeling of curves and surfaces to the geological parameters (width, thickness, sinuosity, and preferential direction) that define the form to be modeled. The parameter sinuosity is related to the wavelength and local amplitude of the channel, and the parameter preferential direction indicates the direction of the flow and the declivity of the channel. The modeling technique used to represent the surface of the channel is the sweeping technique, which consists in translating a curve along a guide curve. The guide curve, in our implementation, was generated by interpolating points obtained from sampled or simulated values of the sinuosity parameter, using the cubic Bézier spline technique. A semi-ellipse, determined by the parameters width and thickness and representing a transversal section of the channel, is the curve swept along the guide curve, generating the channel surface. (author)
Sumantari, Y. D.; Slamet, I.; Sugiyanto
2017-06-01
Semiparametric regression is a statistical analysis method that combines parametric and nonparametric regression. There are various approach techniques in nonparametric regression, one of which is the spline. Central Java is one of the most densely populated provinces in Indonesia. Population density in this province can be modeled by semiparametric regression because it consists of parametric and nonparametric components. Therefore, the purpose of this paper is to determine the factors that influence population density in Central Java using the semiparametric spline regression model. The results show that the factors which influence population density in Central Java are the number of active Family Planning (FP) participants and the district minimum wage.
Bolard, P; Quantin, C; Abrahamowicz, M; Esteve, J; Giorgi, R; Chadha-Boreham, H; Binquet, C; Faivre, J
2002-01-01
The Cox model is widely used in the evaluation of prognostic factors in clinical research. However, in population-based studies, which assess long-term survival of unselected populations, relative-survival models are often considered more appropriate. In both approaches, the validity of proportional hazards hypothesis should be evaluated. We propose a new method in which restricted cubic spline functions are employed to model time-by-covariate interactions in relative survival analyses. The method allows investigation of the shape of possible dependence of the covariate effect on time without having to specify a particular functional form. Restricted cubic spline functions allow graphing of such time-by-covariate interactions, to test formally the proportional hazards assumption, and also to test the linearity of the time-by-covariate interaction. Application of our new method to assess mortality in colon cancer provides strong evidence against the proportional hazards hypothesis, which is rejected for all prognostic factors. The results corroborate previous analyses of similar data-sets, suggesting the importance of both modelling of non-proportional hazards and relative survival approach. We also demonstrate the advantages of using restricted cubic spline functions for modelling non-proportional hazards in relative-survival analysis. The results provide new insights in the estimated impact of older age and of period of diagnosis. Using restricted cubic splines in a relative survival model allows the representation of both simple and complex patterns of changes in relative risks over time, with a single parsimonious model without a priori assumptions about the functional form of these changes.
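The restricted cubic spline basis underlying such time-by-covariate models can be sketched in Harrell's truncated-power form (the knot locations below are illustrative). Its defining property, linearity beyond the boundary knots, is easy to verify:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted (natural) cubic spline basis in Harrell's truncated-power
    form: the first column is x itself, followed by K-2 nonlinear terms
    constrained to be linear beyond the boundary knots."""
    x = np.asarray(x, dtype=float)
    k = np.asarray(knots, dtype=float)
    K = len(k)
    cube = lambda j: np.maximum(x - k[j], 0.0) ** 3   # truncated cube (x - k_j)_+^3
    denom = k[-1] - k[-2]
    cols = [x]
    for j in range(K - 2):
        cols.append(cube(j)
                    - cube(K - 2) * (k[-1] - k[j]) / denom
                    + cube(K - 1) * (k[-2] - k[j]) / denom)
    return np.stack(cols, axis=1)

knots = np.array([1.0, 3.0, 5.0, 7.0])
xg = np.linspace(0.0, 10.0, 101)
X = rcs_basis(xg, knots)
tail = X[xg > 7.0]                       # region beyond the last knot
# restricted splines are linear there: second differences vanish
assert np.allclose(np.diff(tail, n=2, axis=0), 0.0, atol=1e-8)
```

In the relative-survival setting, products of such basis columns with a covariate provide the flexible time-by-covariate interaction terms the abstract describes.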
1989-11-01
Subject terms: camouflage, color, infrared. Reflectance measurements were also tried, and additional computer programs designed to search out the sources of the errors were written.
Chen, Shyi-Ming; Hsin, Wen-Chyuan
2015-07-01
In this paper, we propose a new weighted fuzzy interpolative reasoning method for sparse fuzzy rule-based systems based on the slopes of fuzzy sets. We also propose a particle swarm optimization (PSO)-based weights-learning algorithm to automatically learn the optimal weights of the antecedent variables of fuzzy rules for weighted fuzzy interpolative reasoning. We apply the proposed weighted fuzzy interpolative reasoning method using the proposed PSO-based weights-learning algorithm to deal with the computer activity prediction problem, the multivariate regression problems, and the time series prediction problems. The experimental results show that the proposed weighted fuzzy interpolative reasoning method using the proposed PSO-based weights-learning algorithm outperforms the existing methods for dealing with the computer activity prediction problem, the multivariate regression problems, and the time series prediction problems.
Functions with disconnected spectrum sampling, interpolation, translates
Olevskii, Alexander M
2016-01-01
The classical sampling problem is to reconstruct entire functions with given spectrum S from their values on a discrete set L. From the geometric point of view, the possibility of such reconstruction is equivalent to determining for which sets L the exponential system with frequencies in L forms a frame in the space L^2(S). The book also treats the problem of interpolation of discrete functions by analytic ones with spectrum in S and the problem of completeness of discrete translates. The size and arithmetic structure of both the spectrum S and the discrete set L play a crucial role in these problems. After an elementary introduction, the authors give a new presentation of classical results due to Beurling, Kahane, and Landau. The main part of the book focuses on recent progress in the area, such as construction of universal sampling sets, high-dimensional and non-analytic phenomena. The reader will see how methods of harmonic and complex analysis interplay with various important concepts in different areas, ...
Spatial interpolation methods for integrating Newton's equation
International Nuclear Information System (INIS)
Gueron, S.; Shalloway, D.
1996-01-01
Numerical integration of Newton's equation in multiple dimensions plays an important role in many fields such as biochemistry and astrophysics. Currently, some of the most important practical questions in these areas cannot be addressed because the large dimensionality of the variable space and the complexity of the required force evaluations preclude integration over sufficiently large time intervals. Improving the efficiency of algorithms for this purpose is therefore of great importance. Standard numerical integration schemes (e.g., leap-frog and Runge-Kutta) ignore the special structure of Newton's equation that, for conservative systems, constrains the force to be the gradient of a scalar potential. We propose a new class of "spatial interpolation" (SI) integrators that exploit this property by interpolating the force in space rather than (as with standard methods) in time. Since the force is usually a smoother function of space than of time, this can improve algorithmic efficiency and accuracy. In particular, an SI integrator solves the one- and two-dimensional harmonic oscillators exactly with one force evaluation per step. A simple type of time-reversible SI algorithm is described and tested. Significantly improved performance is achieved on one- and multi-dimensional benchmark problems. 19 refs., 4 figs., 1 tab.
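A toy version of the idea, tabulating the force once in space and using cheap interpolation inside an otherwise standard leapfrog step, can be sketched for the harmonic oscillator. The grid and step sizes are arbitrary, and this only illustrates the interpolation principle, not the paper's SI integrators:

```python
import numpy as np

def force(x):            # harmonic oscillator, F = -x
    return -x

grid = np.linspace(-2.0, 2.0, 41)
F_tab = force(grid)      # force evaluations happen once, over space

def leapfrog_interp(x0, v0, dt, n_steps):
    """Kick-drift-kick leapfrog using the spatially tabulated force."""
    x, v = x0, v0
    a = np.interp(x, grid, F_tab)
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * a
        x = x + dt * v_half
        a = np.interp(x, grid, F_tab)   # interpolation, not a new force call
        v = v_half + 0.5 * dt * a
    return x, v

x, v = leapfrog_interp(1.0, 0.0, 0.01, 1000)
# linear force is reproduced exactly by linear interpolation, so energy is
# conserved to ordinary leapfrog accuracy
assert abs(0.5 * v**2 + 0.5 * x**2 - 0.5) < 1e-3
```

For this linear force the interpolated and exact forces coincide on the grid's span, which is the simplest case of the smoothness-in-space argument made above.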
On developing B-spline registration algorithms for multi-core processors.
Shackleford, J A; Kandasamy, N; Sharp, G C
2010-11-07
Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.
Accurate interpolation of 3D fields in charged particle optics.
Horák, Michal; Badin, Viktor; Zlámal, Jakub
2018-03-29
Standard 3D interpolation polynomials often suffer from numerical errors of the calculated field and lack of node points in the 3D solution. We introduce a novel method for accurate and smooth interpolation of arbitrary electromagnetic fields in the vicinity of the optical axis valid up to 90% of the bore radius. Our method combines Fourier analysis and Gaussian wavelet interpolation and provides the axial multipole field functions and their derivatives analytically. The results are accurate and noiseless, usually up to the 5th derivative. This is very advantageous for further applications, such as accurate particle tracing, and evaluation of aberration coefficients and other optical properties. The proposed method also enables studying the strength and orientation of all multipole field components. To illustrate the capabilities of the proposed algorithm, we present three examples: a magnetic lens with a hole in the polepiece, a saturated magnetic lens with an elliptic polepiece, and an electrostatic 8-electrode multipole. Copyright © 2018 Elsevier B.V. All rights reserved.
MODIS Snow Cover Recovery Using Variational Interpolation
Tran, H.; Nguyen, P.; Hsu, K. L.; Sorooshian, S.
2017-12-01
Cloud obscuration is one of the major problems that limit the usage of satellite images in general and of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) global Snow-Covered Area (SCA) products in particular. Among the approaches to resolve the problem, the Variational Interpolation (VI) algorithm, proposed by Xia et al., 2012, obtains cloud-free dynamic SCA images from MODIS. This method is automatic and robust. However, computational deficiency is a main drawback that hinders applying the method at larger spatial and temporal scales. To overcome this difficulty, this study introduces an improved version of the original VI. The modified VI algorithm integrates the MINimum RESidual (MINRES) iteration (Paige and Saunders, 1975) to prevent the system from breaking up when applied to much broader scales. An experiment was done to demonstrate the crash-proof ability of the new algorithm in comparison with the original VI method, an ability that is obtained by maintaining the distribution of the weight set after solving the linear system. After that, the new VI algorithm was applied to the whole Contiguous United States (CONUS) over four winter months of 2016 and 2017, and validated using the snow station network (SNOTEL). The resulting cloud-free images have high accuracy in capturing the dynamical changes of snow in contrast with the MODIS snow cover maps. Lastly, the algorithm was applied to create a cloud-free image dataset from March 10, 2000 to February 28, 2017, which is able to provide an overview of snow trends over CONUS for nearly two decades. Acknowledgments: We would like to acknowledge NASA, the NOAA Office of Hydrologic Development (OHD) National Weather Service (NWS), the Cooperative Institute for Climate and Satellites (CICS), the Army Research Office (ARO), ICIWaRM, and UNESCO for supporting this research.