WorldWideScience

Sample records for cubic spline interpolation

  1. Shape Preserving Interpolation Using C2 Rational Cubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2016-01-01

Full Text Available This paper discusses the construction of a new C2 rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea is extended to shape-preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters αi, βi, and γi. Sufficient conditions for positivity are derived on the parameter γi, while the other two parameters αi and βi are free parameters that can be used to change the final shape of the resulting interpolating curves. This enables the user to produce many varieties of positive interpolating curves. Ordinary cubic spline interpolation with C2 continuity is not able to preserve the shape of positive data. Notably, our scheme is easy to use, does not require knot insertion, and achieves C2 continuity by solving tridiagonal systems of linear equations for the unknown first derivatives di, i = 1, …, n-1. Comparisons with existing schemes have also been carried out in detail. In all presented numerical results, the new C2 rational cubic spline gives very smooth interpolating curves compared with some established rational cubic schemes. An error analysis for the case where the function to be interpolated is f(t) ∈ C3[t0, tn] is also investigated in detail.
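The tridiagonal step mentioned above is the standard route to C2 splines. As a hedged sketch (a plain natural cubic spline solved with the Thomas algorithm, not the paper's rational scheme with shape parameters):

```python
# Illustrative sketch only: a plain natural cubic spline, not the rational
# scheme above. C2 continuity likewise reduces to a tridiagonal linear
# system, solved here with the Thomas algorithm.

def natural_cubic_spline(xs, ys):
    """Return a callable interpolating (xs, ys) with a natural cubic spline."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for the interior second derivatives M_1..M_{n-1}.
    a = [h[i - 1] for i in range(1, n)]                 # sub-diagonal
    b = [2.0 * (h[i - 1] + h[i]) for i in range(1, n)]  # diagonal
    c = [h[i] for i in range(1, n)]                     # super-diagonal
    d = [6.0 * ((ys[i + 1] - ys[i]) / h[i]
                - (ys[i] - ys[i - 1]) / h[i - 1]) for i in range(1, n)]
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n - 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * (n + 1)          # natural end conditions: M_0 = M_n = 0
    if n > 1:
        m[n - 1] = d[n - 2] / b[n - 2]
        for i in range(n - 2, 0, -1):
            m[i] = (d[i - 1] - c[i - 1] * m[i + 1]) / b[i - 1]

    def s(x):
        i = 0
        while i < n - 1 and x > xs[i + 1]:
            i += 1
        t = x - xs[i]
        return (ys[i]
                + t * ((ys[i + 1] - ys[i]) / h[i]
                       - h[i] * (2.0 * m[i] + m[i + 1]) / 6.0)
                + t * t * m[i] / 2.0
                + t ** 3 * (m[i + 1] - m[i]) / (6.0 * h[i]))
    return s
```

The rational scheme in the paper replaces the per-interval cubic with a cubic/quadratic quotient, but the linear-algebra cost is the same tridiagonal solve.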

  2. [Multimodal medical image registration using cubic spline interpolation method].

    Science.gov (United States)

    He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-12-01

    Based on the characteristic of the PET-CT multimodal image series, a novel image registration and fusion method is proposed, in which the cubic spline interpolation method is applied to realize the interpolation of PET-CT image series, then registration is carried out by using mutual information algorithm and finally the improved principal component analysis method is used for the fusion of PET-CT multimodal images to enhance the visual effect of PET image, thus satisfied registration and fusion results are obtained. The cubic spline interpolation method is used for reconstruction to restore the missed information between image slices, which can compensate for the shortage of previous registration methods, improve the accuracy of the registration, and make the fused multimodal images more similar to the real image. Finally, the cubic spline interpolation method has been successfully applied in developing 3D-CRT (3D Conformal Radiation Therapy) system.

  3. Interpolation of natural cubic spline

    Directory of Open Access Journals (Sweden)

    Arun Kumar

    1992-01-01

    Full Text Available From the result in [1] it follows that there is a unique quadratic spline which bounds the same area as that of the function. The matching of the area for the cubic spline does not follow from the corresponding result proved in [2]. We obtain cubic splines which preserve the area of the function.

  4. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Qin Guo-jie

    2014-08-01

Full Text Available Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed as a finite-impulse-response (FIR) filter structure, and the method for computing the compensation filter coefficients is derived. A 4 GS/s two-channel time-interleaved ADC prototype system was implemented to evaluate the performance of the technique. The experimental results show that the correction technique effectively attenuates the spurious tones and improves the dynamic performance of the system.

  5. Cubic-spline interpolation to estimate effects of inbreeding on milk yield in first lactation Holstein cows

    Directory of Open Access Journals (Sweden)

    Makram J. Geha

    2011-01-01

Full Text Available Milk yield records (305 d, 2X, actual milk yield) of 123,639 registered first-lactation Holstein cows were used to compare linear regression (y = β0 + β1X + e), quadratic regression (y = β0 + β1X + β2X² + e), cubic regression (y = β0 + β1X + β2X² + β3X³ + e), and fixed-factor models with cubic-spline interpolation models for estimating the effects of inbreeding on milk yield. Ten animal models, all with herd-year-season of calving as a fixed effect, were compared using the corrected Akaike Information Criterion (AICc). The cubic-spline interpolation model with seven knots had the lowest AICc, whereas all the models labeled as "traditional" had a higher AICc than the best model. Results from fitting inbreeding using a cubic spline with seven knots were compared with results from fitting inbreeding as a linear covariate or as a fixed factor with seven levels. Estimates of inbreeding effects were not significantly different between the cubic-spline model and the fixed-factor model, but were significantly different from the linear regression model. Milk yield decreased significantly at inbreeding levels greater than 9%. Variance component estimates were similar for the three models. Ranking of the top 100 sires with daughter records remained unaffected by the model used.
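The AICc used for the model comparison above has a closed form; a minimal sketch (illustrative only, not the paper's animal-model software):

```python
import math

def aicc(log_likelihood, k, n):
    """Corrected AIC for a model with k parameters fitted to n observations:
    AICc = -2 ln L + 2k + 2k(k+1)/(n-k-1)."""
    aic = -2.0 * log_likelihood + 2.0 * k
    return aic + (2.0 * k * (k + 1.0)) / (n - k - 1.0)

def aicc_ls(rss, k, n):
    """Least-squares variant: -2 ln L is replaced (up to an additive
    constant) by n * ln(RSS / n), so models can be ranked directly
    from their residual sums of squares."""
    return n * math.log(rss / n) + 2.0 * k + (2.0 * k * (k + 1.0)) / (n - k - 1.0)
```

Lower AICc is better; the small-sample correction term penalizes a spline with many knots more heavily than plain AIC would.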

  6. Efficient GPU-based texture interpolation using uniform B-splines

    NARCIS (Netherlands)

    Ruijters, D.; Haar Romenij, ter B.M.; Suetens, P.

    2008-01-01

    This article presents uniform B-spline interpolation, completely contained on the graphics processing unit (GPU). This implies that the CPU does not need to compute any lookup tables or B-spline basis functions. The cubic interpolation can be decomposed into several linear interpolations [Sigg and
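The decomposition referred to (the truncated citation is presumably Sigg and Hadwiger's two-linear-fetch trick) can be sketched in scalar form; the weights below are the standard uniform cubic B-spline basis functions:

```python
def bspline_weights(t):
    # Uniform cubic B-spline basis weights for fractional position t in [0, 1).
    w0 = (1.0 - t) ** 3 / 6.0
    w1 = (3.0 * t**3 - 6.0 * t**2 + 4.0) / 6.0
    w2 = (-3.0 * t**3 + 3.0 * t**2 + 3.0 * t + 1.0) / 6.0
    w3 = t**3 / 6.0
    return w0, w1, w2, w3

def lerp(a, b, x):
    return a + x * (b - a)

def cubic_4tap(f, i, t):
    # Direct evaluation: four texel reads and a weighted sum.
    w0, w1, w2, w3 = bspline_weights(t)
    return w0 * f[i - 1] + w1 * f[i] + w2 * f[i + 1] + w3 * f[i + 2]

def cubic_2linear(f, i, t):
    # GPU formulation: fold the four weights into two linear fetches,
    # which the texture unit evaluates in hardware.
    w0, w1, w2, w3 = bspline_weights(t)
    g0, g1 = w0 + w1, w2 + w3
    h0 = w1 / g0                      # offset of the first linear fetch
    h1 = w3 / g1                      # offset of the second linear fetch
    return g0 * lerp(f[i - 1], f[i], h0) + g1 * lerp(f[i + 1], f[i + 2], h1)
```

Both routines return identical values; on an actual GPU the two `lerp` calls become hardware-filtered texture fetches, so a tricubic lookup costs 8 fetches instead of 64.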

  7. Interpolating cubic splines

    CERN Document Server

    Knott, Gary D

    2000-01-01

A spline is a thin flexible strip composed of a material such as bamboo or steel that can be bent to pass through or near given points in the plane, or in 3-space, in a smooth manner. Mechanical engineers and drafting specialists find such (physical) splines useful in designing and in drawing plans for a wide variety of objects, such as the hulls of boats or the bodies of automobiles, where smooth curves need to be specified. These days, physical splines are largely replaced by computer software that can compute the desired curves (with appropriate encouragement). The same mathematical ideas used for computing "spline" curves can be extended to allow us to compute "spline" surfaces. The application of these mathematical ideas is rather widespread. Spline functions are central to computer graphics disciplines. Spline curves and surfaces are used in computer graphics renderings for both real and imaginary objects. Computer-aided-design (CAD) systems depend on algorithms for computing spline func...

  8. Positivity Preserving Interpolation Using Rational Bicubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2015-01-01

Full Text Available This paper discusses positivity-preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters, 8 of which are free parameters. The sufficient conditions for positivity are derived on every four-boundary-curve network on the rectangular patch. Numerical comparison with existing schemes has also been carried out in detail. Based on the Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.

  9. C2-rational cubic spline involving tension parameters

    Indian Academy of Sciences (India)

    preferred which preserves some of the characteristics of the function to be interpolated. In order to tackle such ... Shape preserving properties of the rational (cubic/quadratic) spline interpolant have been studied ... tension parameters which is used to interpolate the given monotonic data is described in. [6]. Shape preserving ...

  10. Improving the River Discharge Calculation Method Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Budi I. Setiawan

    2007-09-01

Full Text Available This paper presents an improved method for computing river discharge using cubic spline interpolation. The spline function is used to describe, as a continuous curve, the river cross-section profile formed from measurements of distance and river depth. With this new method, the cross-sectional area and wetted perimeter of the river are computed more easily, quickly, and accurately. The inverse function is likewise available via the Newton-Raphson method, which simplifies computing the area and perimeter when the river water level is known. The new method can directly compute river discharge using the Manning formula and produce a rating curve. This paper presents one example of a discharge measurement on the Rudeng River in Aceh. The river is about 120 m wide and 7 m deep; at the time of measurement its discharge was 41.3 m³/s, and its rating curve follows the formula Q = 0.1649 H^2.884, where Q is the discharge (m³/s) and H is the water height above the river bed (m).
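Once the area and wetted perimeter are obtained from the interpolated profile, the discharge follows from the Manning formula; a minimal sketch with assumed, purely illustrative roughness and slope values (not the survey data from the paper):

```python
def manning_discharge(area, perimeter, n_roughness, slope):
    """Manning formula: Q = (1/n) * A * R^(2/3) * sqrt(S),
    with hydraulic radius R = A / P (SI units, Q in m^3/s)."""
    r = area / perimeter
    return (1.0 / n_roughness) * area * r ** (2.0 / 3.0) * slope ** 0.5

# Assumed values for a wide, shallow section (illustrative only):
q = manning_discharge(area=180.0, perimeter=125.0, n_roughness=0.035, slope=0.0002)
```

Evaluating this over a range of water levels, with A(H) and P(H) read off the spline profile, produces exactly the kind of rating curve Q(H) fitted in the paper.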

  11. SPLINE, Spline Interpolation Function

    International Nuclear Information System (INIS)

    Allouard, Y.

    1977-01-01

1 - Nature of the physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to order (2Q-1). The program consists of two subprograms: ASPLERQ, the transport-of-relations method for spline interpolation functions, and SPLQ, spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10.

  12. Spline Interpolation of Image

    OpenAIRE

    I. Kuba; J. Zavacky; J. Mihalik

    1995-01-01

This paper presents the use of B-spline functions in various digital signal processing applications. The theory of one-dimensional B-spline interpolation is briefly reviewed and then extended to two dimensions. After presenting one- and two-dimensional spline interpolation, algorithms for image interpolation and resolution increase are proposed. Finally, experimental results of computer simulations are presented.

  13. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords: smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  15. Conformal Interpolating Algorithm Based on Cubic NURBS in Aspheric Ultra-Precision Machining

    International Nuclear Information System (INIS)

    Li, C G; Zhang, Q R; Cao, C G; Zhao, S L

    2006-01-01

Numerical control machining and on-line compensation for aspheric surfaces are key techniques in ultra-precision machining. In this paper, a conformal cubic NURBS interpolating curve is applied to fit the characteristic curve of an aspheric surface. Its algorithm and process are also proposed and simulated in Matlab 7.0. To evaluate the performance of the conformal cubic NURBS interpolation, we compare it with linear interpolation. The results verify that this method ensures the smoothness of the interpolating spline curve and preserves the original shape characteristics. The surface quality interpolated by cubic NURBS is higher than that obtained with lines. The algorithm helps increase the surface form precision of workpieces in ultra-precision machining.

  16. Quasi interpolation with Voronoi splines.

    Science.gov (United States)

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

    We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE

  17. Signal-to-noise ratio enhancement on SEM images using a cubic spline interpolation with Savitzky-Golay filters and weighted least squares error.

    Science.gov (United States)

    Kiani, M A; Sim, K S; Nia, M E; Tso, C P

    2015-05-01

A new technique based on cubic spline interpolation with Savitzky-Golay smoothing and a weighted least-squares error filter is developed for scanning electron microscope (SEM) images. A variety of sample images was captured, and the performance is found to be better than that of the moving-average and standard median filters with respect to noise elimination. The technique can be implemented efficiently on real-time SEM images, with all the data required for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for an SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding original autocorrelation. In the test cases, involving different images, the efficiency of the developed noise-reduction filter proved significantly better than that of the other methods. Noise can be reduced efficiently, with an appropriate choice of scan rate, from real-time SEM images without introducing corruption or increasing scanning time. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
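The autocorrelation gap described above can be illustrated in one dimension; a simplified sketch, not the authors' exact estimator (the linear extrapolation back to lag zero is an assumption):

```python
def autocorr(x, lag):
    """Biased sample autocorrelation of the sequence x at the given lag."""
    n = len(x) - lag
    return sum(x[i] * x[i + lag] for i in range(n)) / n

def estimate_noise_variance(x):
    """Noise power as the gap between R(0) and a noise-free R(0)
    extrapolated linearly from R(1) and R(2). This is meaningful when
    the underlying signal is correlated over a few samples while the
    noise is uncorrelated from sample to sample."""
    r0 = autocorr(x, 0)
    r1 = autocorr(x, 1)
    r2 = autocorr(x, 2)
    r0_signal = 2.0 * r1 - r2      # linear extrapolation back to lag 0
    return max(0.0, r0 - r0_signal)
```

For a noise-free, slowly varying scan line the estimate is near zero; adding pixel-to-pixel noise opens the gap at lag zero.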

  18. Illumination estimation via thin-plate spline interpolation.

    Science.gov (United States)

    Shi, Lilong; Xiong, Weihua; Funt, Brian

    2011-05-01

Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k-medians clustering is applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and that the proposed training-set pruning significantly decreases the computation.

  19. Spline-procedures

    International Nuclear Information System (INIS)

    Schmidt, R.

    1976-12-01

This report contains a short introduction to spline functions as well as a complete description of the spline procedures presently available in the HMI library. These include polynomial splines (using either B-spline or one-sided basis representations) and natural splines, as well as their application to interpolation, quasi-interpolation, L2 approximation, and Tchebycheff approximation. Special procedures are included for the case of cubic splines. Complete test examples with input and output are provided for each of the procedures. (orig.) [de]

  20. A Note on Cubic Convolution Interpolation

    OpenAIRE

    Meijering, E.; Unser, M.

    2003-01-01

    We establish a link between classical osculatory interpolation and modern convolution-based interpolation and use it to show that two well-known cubic convolution schemes are formally equivalent to two osculatory interpolation schemes proposed in the actuarial literature about a century ago. We also discuss computational differences and give examples of other cubic interpolation schemes not previously studied in signal and image processing.
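For reference, the cubic convolution scheme usually discussed in this context is Keys' kernel with a = -1/2; a minimal sketch:

```python
def cubic_conv_kernel(s, a=-0.5):
    """Keys' cubic convolution kernel; a = -1/2 gives third-order accuracy
    (exact reproduction of polynomials up to degree 2)."""
    s = abs(s)
    if s < 1.0:
        return (a + 2.0) * s**3 - (a + 3.0) * s**2 + 1.0
    if s < 2.0:
        return a * s**3 - 5.0 * a * s**2 + 8.0 * a * s - 4.0 * a
    return 0.0

def interpolate(f, x):
    """Resample the sample list f at real-valued position x
    via a four-tap convolution with the kernel above."""
    i = int(x)
    t = x - i
    return sum(cubic_conv_kernel(t - m) * f[i + m] for m in (-1, 0, 1, 2))
```

Unlike B-spline filtering, this kernel interpolates the samples directly (it is 1 at the origin and 0 at other integers), which is why it slots into image pipelines without a prefilter.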

  1. Analysis of ECT Synchronization Performance Based on Different Interpolation Methods

    Directory of Open Access Journals (Sweden)

    Yang Zhixin

    2014-01-01

Full Text Available There are two synchronization methods for electronic transformers in the IEC 60044-8 standard: impulsive synchronization and interpolation. When the impulsive synchronization method is inapplicable, data synchronization of the electronic transformer can be achieved using interpolation. Typical interpolation methods are piecewise linear interpolation, quadratic interpolation, cubic spline interpolation, and so on. In this paper, the influence of piecewise linear, quadratic, and cubic spline interpolation on the data synchronization of electronic transformers is computed; then the computational complexity, synchronization precision, reliability, and application range of the different interpolation methods are analyzed and compared, which can serve as a guide for practical applications.
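The precision comparison above can be reproduced in miniature: reconstructing a sampled sine midway between samples, a four-point cubic (Catmull-Rom is used here as a stand-in for the cubic spline) beats piecewise linear interpolation on worst-case error:

```python
import math

def linear_at(f, x):
    # Piecewise linear interpolation at real-valued sample position x.
    i = int(x)
    t = x - i
    return f[i] * (1.0 - t) + f[i + 1] * t

def catmull_rom_at(f, x):
    # Four-point cubic (Catmull-Rom), one common cubic resampling choice.
    i = int(x)
    t = x - i
    p0, p1, p2, p3 = f[i - 1], f[i], f[i + 1], f[i + 2]
    return 0.5 * (2.0 * p1 + (p2 - p0) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (3.0 * p1 - 3.0 * p2 + p3 - p0) * t ** 3)

# Sample a sine, reconstruct midway between samples, compare worst-case error.
samples = [math.sin(0.2 * k) for k in range(64)]

def max_err(interp):
    return max(abs(interp(samples, k + 0.5) - math.sin(0.2 * (k + 0.5)))
               for k in range(2, 60))

err_linear = max_err(linear_at)
err_cubic = max_err(catmull_rom_at)
```

The cubic pays for its accuracy with more arithmetic per output sample, which is exactly the complexity-versus-precision trade-off the paper analyzes.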

  2. Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines

    KAUST Repository

    Barton, Michael

    2015-10-24

We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with known optimal quadrature and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights, considered as a higher-dimensional point, are a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule gets updated using polynomial homotopy continuation. For example, starting with C1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C2 cubic spline spaces, where the rule was only conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors, as well as of the source rule chosen.

  3. Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines

    KAUST Repository

    Barton, Michael; Calo, Victor M.

    2015-01-01

We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with known optimal quadrature and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights, considered as a higher-dimensional point, are a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule gets updated using polynomial homotopy continuation. For example, starting with C1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C2 cubic spline spaces, where the rule was only conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors, as well as of the source rule chosen.

  4. Modeling the dispersion of atmospheric pollution using cubic splines and chapeau functions

    Energy Technology Data Exchange (ETDEWEB)

    Pepper, D W; Kern, C D; Long, P E

    1979-01-01

Two methods that can be used to solve complex, three-dimensional advection-diffusion transport equations are investigated. A quasi-Lagrangian cubic spline method and a chapeau function method are compared in advecting a passive scalar. The methods are simple to use, computationally fast, and reasonably accurate. Little numerical dissipation is manifested by the schemes. In simple advection tests with equal mesh spacing, the chapeau function method maintains slightly more accurate peak values than the cubic spline method. In tests with unequal mesh spacing, the cubic spline method has less noise, but slightly more damping than the standard chapeau method. Both cubic splines and chapeau functions can be used to solve the three-dimensional problem of gaseous emissions dispersion without excessive programming complexity or storage requirements. (10 diagrams, 39 references, 2 tables)

  5. Limit Stress Spline Models for GRP Composites | Ihueze | Nigerian ...

    African Journals Online (AJOL)

    Spline functions were established on the assumption of three intervals and fitting of quadratic and cubic splines to critical stress-strain responses data. Quadratic ... of data points. Spline model is therefore recommended as it evaluates the function at subintervals, eliminating the error associated with wide range interpolation.

  6. Nonlinear bias compensation of ZiYuan-3 satellite imagery with cubic splines

    Science.gov (United States)

    Cao, Jinshan; Fu, Jianhong; Yuan, Xiuxiao; Gong, Jianya

    2017-11-01

Like many high-resolution satellites such as the ALOS, MOMS-2P, QuickBird, and ZiYuan1-02C satellites, the ZiYuan-3 satellite suffers from different levels of attitude oscillations. As a result of such oscillations, the rational polynomial coefficients (RPCs) obtained using a terrain-independent scenario often have nonlinear biases. In the sensor orientation of ZiYuan-3 imagery based on a rational function model (RFM), these nonlinear biases cannot be effectively compensated by an affine transformation. The sensor orientation accuracy is thereby worse than expected. In order to eliminate the influence of attitude oscillations on the RFM-based sensor orientation, a feasible nonlinear bias compensation approach for ZiYuan-3 imagery with cubic splines is proposed. In this approach, no actual ground control points (GCPs) are required to determine the cubic splines. First, the RPCs are calculated using a three-dimensional virtual control grid generated based on a physical sensor model. Second, one cubic spline is used to model the residual errors of the virtual control points in the row direction and another cubic spline is used to model the residual errors in the column direction. Then, the estimated cubic splines are used to compensate the nonlinear biases in the RPCs. Finally, the affine transformation parameters are used to compensate the residual biases in the RPCs. Three ZiYuan-3 images were tested. The experimental results showed that before the nonlinear bias compensation, the residual errors of the independent check points were nonlinearly biased. Even if the number of GCPs used to determine the affine transformation parameters was increased from 4 to 16, these nonlinear biases could not be effectively compensated. After the nonlinear bias compensation with the estimated cubic splines, the influence of the attitude oscillations could be eliminated. The RFM-based sensor orientation accuracies of the three ZiYuan-3 images reached 0.981 pixels, 0.890 pixels, and 1

  7. Application of Cubic Box Spline Wavelets in the Analysis of Signal Singularities

    Directory of Open Access Journals (Sweden)

    Rakowski Waldemar

    2015-12-01

    Full Text Available In the subject literature, wavelets such as the Mexican hat (the second derivative of a Gaussian or the quadratic box spline are commonly used for the task of singularity detection. The disadvantage of the Mexican hat, however, is its unlimited support; the disadvantage of the quadratic box spline is a phase shift introduced by the wavelet, making it difficult to locate singular points. The paper deals with the construction and properties of wavelets in the form of cubic box splines which have compact and short support and which do not introduce a phase shift. The digital filters associated with cubic box wavelets that are applied in implementing the discrete dyadic wavelet transform are defined. The filters and the algorithme à trous of the discrete dyadic wavelet transform are used in detecting signal singularities and in calculating the measures of signal singularities in the form of a Lipschitz exponent. The article presents examples illustrating the use of cubic box spline wavelets in the analysis of signal singularities.

  8. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    Science.gov (United States)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and the Fourier transform technique, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. A law is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least-squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wave-number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave-number ranges based on Fourier spectrum analysis. Finally, novel software is developed, and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.

  9. Research on interpolation methods in medical image processing.

    Science.gov (United States)

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation, and general partial volume interpolation. Some commonly used filter methods for image interpolation are examined, but their interpolation effects need further improvement. In analyzing and discussing ordinary interpolation, many asymmetrical-kernel interpolation methods are proposed; compared with symmetrical-kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments on image scaling, rotation, and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient, and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming. As for the general partial volume interpolation methods, in terms of the total error of image self-registration, the symmetrical interpolations provide certain superiority, but considering processing efficiency, the asymmetrical interpolations are better.

  10. C1 Rational Quadratic Trigonometric Interpolation Spline for Data Visualization

    Directory of Open Access Journals (Sweden)

    Shengjun Liu

    2015-01-01

Full Text Available A new C1 piecewise rational quadratic trigonometric spline, with four local positive shape parameters in each subinterval, is constructed to visualize given planar data. Constraints are derived on these free shape parameters to generate shape-preserving interpolation curves for positive and/or monotonic data sets. Two of these shape parameters are constrained while the other two can be set free to interactively control the shape of the curves. Moreover, the order of approximation of the developed interpolant is investigated as O(h³). Numerical experiments demonstrate that our method can construct nice shape-preserving interpolation curves efficiently.

  11. Fast digital zooming system using directionally adaptive image interpolation and restoration.

    Science.gov (United States)

    Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki

    2014-01-01

    This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or adaptive cubic-spline interpolation filter is then selectively used according to the refined edge orientation for removing jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation using the directionally adaptive truncated constrained least squares (TCLS) filter. Both proposed steerable filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with FIR filter-based fast computational structure.

  12. Preconditioning cubic spline collocation method by FEM and FDM for elliptic equations

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang Dong [KyungPook National Univ., Taegu (Korea, Republic of)

    1996-12-31

    In this talk we discuss the finite element and finite difference techniques for the cubic spline collocation method. For this purpose, we consider the uniformly elliptic operator A defined by Au := -Δu + a_1 u_x + a_2 u_y + a_0 u in Ω (the unit square) with Dirichlet or Neumann boundary conditions, and its discretization based on Hermite cubic spline spaces and collocation at the Gauss points. Using an interpolatory basis with support on the Gauss points, one obtains the matrix A_N (h = 1/N).
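
    The collocation points in this construction are the Gauss(-Legendre) points of each subinterval. As a minimal pure-Python sketch (illustrative, not the author's code), the two reference nodes ±1/√3 can be mapped from [-1, 1] into each cell of a uniform partition of [0, 1]:

```python
import math

def gauss_collocation_points(n):
    """Two Gauss-Legendre nodes per cell of a uniform n-cell partition of [0, 1].

    These are the standard collocation points for Hermite cubic spline
    collocation; the two-point rule they induce is exact for cubics.
    """
    h = 1.0 / n
    g = 1.0 / math.sqrt(3.0)              # reference nodes sit at +/- 1/sqrt(3)
    pts = []
    for i in range(n):
        mid = (i + 0.5) * h               # cell midpoint
        pts.extend([mid - 0.5 * h * g, mid + 0.5 * h * g])
    return pts

# Sanity check: with weight h/2 per node, the induced quadrature integrates
# x^3 over [0, 1] exactly (true value 1/4).
pts = gauss_collocation_points(4)
approx = sum(0.125 * p ** 3 for p in pts)
```

Exactness of the induced two-point rule for cubics is precisely why collocating a cubic spline at these points is a natural choice.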

  13. Interpolating Spline Curve-Based Perceptual Encryption for 3D Printing Models

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    With the development of 3D printing technology, 3D printing has recently been applied to many areas of life, including healthcare and the automotive industry. Because of the value of 3D printing, 3D printing models are often attacked by hackers and distributed without agreement from the original providers. Furthermore, certain special models and anti-weapon models in 3D printing must be protected against unauthorized users. Therefore, in order to prevent attacks and illegal copying, and to ensure that all access is authorized, 3D printing models should be encrypted before being transmitted and stored. A novel perceptual encryption algorithm for the secure storage and transmission of 3D printing models is presented in this paper. Each facet of a 3D printing model is extracted to interpolate a spline curve of degree 2 in three-dimensional space that is determined by three control points, the curvature coefficients of degree 2, and an interpolating vector. The three control points, the curvature coefficients, and the interpolating vector of the spline curve are encrypted by a secret key. The encrypted features of the spline curve are then used to obtain the encrypted 3D printing model by inverse interpolation and geometric distortion. The results of experiments and evaluations prove that the entire 3D triangle model is altered and deformed after the perceptual encryption process. The proposed algorithm is responsive to the various formats of 3D printing models, and the results of the perceptual encryption process are superior to those of previous methods. The proposed algorithm also provides a better method and more security than previous approaches.

  14. Smoothing data series by means of cubic splines: quality of approximation and introduction of a repeating spline approach

    Science.gov (United States)

    Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael

    2017-09-01

    Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question, which also determines whether the spline or the residuals (the subtraction of the spline from the original time series) are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points; splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data, and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
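
    The basic operation this abstract builds on (fitting a cubic spline through equidistant sampling points and inspecting the residuals) can be sketched in pure Python with a natural cubic spline via the standard tridiagonal system. This is a generic illustration, not the authors' repeating-spline code; the sampling distances and test series below are made up:

```python
import math
from bisect import bisect_right

def natural_cubic_spline(x, y):
    """Return a callable natural cubic spline through the points (x, y)."""
    n = len(x) - 1
    h = [x[i + 1] - x[i] for i in range(n)]
    # Tridiagonal system for the second derivatives M_i; the natural end
    # conditions M_0 = M_n = 0 are encoded in the first/last rows.
    a = [0.0] * (n + 1)
    b = [1.0] * (n + 1)
    c = [0.0] * (n + 1)
    d = [0.0] * (n + 1)
    for i in range(1, n):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    for i in range(1, n + 1):          # forward sweep (Thomas algorithm)
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * (n + 1)
    M[n] = d[n] / b[n]
    for i in range(n - 1, -1, -1):     # back substitution
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def s(t):
        i = min(max(bisect_right(x, t) - 1, 0), n - 1)
        u, v = t - x[i], x[i + 1] - t
        return (M[i] * v ** 3 + M[i + 1] * u ** 3) / (6.0 * h[i]) \
            + (y[i] - M[i] * h[i] ** 2 / 6.0) * v / h[i] \
            + (y[i + 1] - M[i + 1] * h[i] ** 2 / 6.0) * u / h[i]
    return s

# "Background" spline through every 4th point of a series; the residuals
# carry the superimposed short-period fluctuation.
t = [0.1 * k for k in range(41)]
series = [math.sin(v) + 0.3 * math.sin(8.0 * v) for v in t]
spline = natural_cubic_spline(t[::4], series[::4])
residuals = [series[k] - spline(t[k]) for k in range(len(t))]
```

Varying the spacing of the spline sampling points in such a sketch reproduces the paper's central observation: a denser spline does not monotonically improve the background estimate.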

  15. Evaluation of Teeth and Supporting Structures on Digital Radiograms using Interpolation Methods

    International Nuclear Information System (INIS)

    Koh, Kwang Joon; Chang, Kee Wan

    1999-01-01

    The aim was to determine the effect of interpolation functions when processing digital periapical images. The digital images were obtained with the Digora and CDR systems on a dry skull and a human subject. Three oral radiologists evaluated three portions of each processed image using seven interpolation methods, and ROC curves were obtained by the trapezoidal method. The highest Az value (0.96) was obtained with the cubic spline method and the lowest Az value (0.03) with the facet model method in the Digora system. The highest Az value (0.79) was obtained with the gray segment expansion method and the lowest Az value (0.07) with the facet model method in the CDR system. There was a significant difference in Az value for the original image between the Digora and CDR systems at the alpha = 0.05 level. There were significant differences in Az values between Digora and CDR images with the cubic spline, facet model, linear interpolation and non-linear interpolation methods at the alpha = 0.1 level.
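
    The Az values reported here are areas under the ROC curve computed by the trapezoidal rule over the empirical operating points. A minimal, hypothetical illustration of that computation (not the study's own code; the operating points below are invented):

```python
def trapezoidal_auc(fpr, tpr):
    """Area under an ROC curve given operating points sorted by FPR."""
    area = 0.0
    for i in range(1, len(fpr)):
        area += 0.5 * (tpr[i] + tpr[i - 1]) * (fpr[i] - fpr[i - 1])
    return area

# Two reference observers, for orientation:
perfect = trapezoidal_auc([0.0, 0.0, 1.0], [0.0, 1.0, 1.0])   # Az = 1.0
chance = trapezoidal_auc([0.0, 1.0], [0.0, 1.0])              # Az = 0.5
```

Values such as the 0.96 for cubic spline in Digora sit between these two extremes on the same scale.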

  16. A meshless scheme for partial differential equations based on multiquadric trigonometric B-spline quasi-interpolation

    International Nuclear Information System (INIS)

    Gao Wen-Wu; Wang Zhi-Gang

    2014-01-01

    Based on the multiquadric trigonometric B-spline quasi-interpolant, this paper proposes a meshless scheme for some partial differential equations whose solutions are periodic with respect to the spatial variable. This scheme takes into account the periodicity of the analytic solution by using derivatives of a periodic quasi-interpolant (the multiquadric trigonometric B-spline quasi-interpolant) to approximate the spatial derivatives of the equations. Thus, it overcomes the difficulties of previous schemes based on quasi-interpolation (requiring additional boundary conditions and yielding unwanted high-order discontinuous points at the boundaries of the spatial domain). Moreover, the scheme also overcomes the difficulty of meshless collocation methods (yielding a notoriously ill-conditioned linear system of equations for large numbers of collocation points). The numerical examples presented at the end of the paper show that the scheme provides excellent approximations to the analytic solutions. (general)

  17. On the accurate fast evaluation of finite Fourier integrals using cubic splines

    International Nuclear Information System (INIS)

    Morishima, N.

    1993-01-01

    Finite Fourier integrals based on a cubic-spline fit to equidistant data are shown to be evaluated quickly and accurately. Good performance, especially in computational speed, is achieved by optimization of the spline fit and the internal use of the fast Fourier transform (FFT) algorithm for complex data. The present procedure provides high accuracy with much shorter CPU time than a trapezoidal FFT. (author)

  18. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19
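
    A cubic regression spline of the kind used in such models is commonly built from the truncated power basis {1, t, t^2, t^3, (t - κ_k)_+^3}, which is C^2 at every knot. A small pure-Python sketch of that basis (illustrative only; the paper's models were fitted with standard mixed-model software, and the knot values below are made up):

```python
def truncated_power_row(t, knots):
    """Design-matrix row for a cubic regression spline in the truncated
    power basis: 1, t, t^2, t^3, then (t - kappa)_+^3 for each knot."""
    return [1.0, t, t * t, t ** 3] + [max(t - k, 0.0) ** 3 for k in knots]

# The (t - kappa)_+^3 terms are what keep the fitted curve C^2 at the knots:
# the value, first and second derivatives of each term all vanish at kappa.
knots = [1.0, 2.0]
row = truncated_power_row(1.5, knots)
```

Stacking such rows over the observation times gives the fixed-effects design matrix to which random intercepts and slopes are then added.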

  19. Curvelet-domain multiple matching method combined with cubic B-spline function

    Science.gov (United States)

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Because the large amounts of surface-related multiples present in marine data seriously influence the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, a small number of unknowns are selected as the basis points of the matching coefficient; second, the cubic B-spline function is applied to these basis points to reconstruct the matching array; third, a constraint-solving equation is built from the relationships between the predicted multiples, the matching coefficients, and the actual data; finally, the BFGS algorithm is used to iterate and realize the fast-solving sparse-constraint multiple matching algorithm. Moreover, the soft-threshold method is used to improve performance further. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.

  20. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In the display, manipulation and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation; cubic convolution interpolation is widely used because of its good trade-off between computational cost and accuracy. In this paper, we present a complete framework for 3D medical image interpolation based on cubic convolution, and formulate in detail six methods with different sharpness control parameters. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we close with a recommendation for 3D medical images under different situations.
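
    Cubic convolution interpolation rests on Keys' four-tap kernel with a sharpness control parameter a; the classic choice a = -0.5 additionally reproduces quadratic signals exactly. A 1-D pure-Python sketch of the kernel family such papers vary (illustrative; the paper itself works on 3-D volumes):

```python
def keys_kernel(x, a=-0.5):
    """Keys' cubic convolution kernel; a is the sharpness control parameter."""
    x = abs(x)
    if x <= 1.0:
        return (a + 2.0) * x ** 3 - (a + 3.0) * x ** 2 + 1.0
    if x < 2.0:
        return a * x ** 3 - 5.0 * a * x ** 2 + 8.0 * a * x - 4.0 * a
    return 0.0

def cubic_convolve(samples, t, a=-0.5):
    """Interpolate unit-spaced samples at position t (interior points only)."""
    i = int(t)
    return sum(samples[j] * keys_kernel(t - j, a) for j in range(i - 1, i + 3))
```

Varying a trades ringing against blur, which is exactly the dimension along which the six compared methods differ.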

  1. Interpolation for de-Dopplerisation

    Science.gov (United States)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
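
    The "pre-filtering requirement" of B-spline interpolation refers to converting the samples into B-spline coefficients with a recursive filter before applying the (non-interpolating) cubic B-spline kernel. A pure-Python sketch of the standard approach, a recursive prefilter with pole √3 - 2 in the style popularised by Unser (illustrative, not the paper's code; boundary handling here is a simple truncated/mirror approximation):

```python
import math

def bspline3_coefficients(s):
    """Recursive prefilter turning samples into cubic B-spline coefficients."""
    z = math.sqrt(3.0) - 2.0                 # filter pole, |z| ~ 0.27
    n = len(s)
    c = [6.0 * v for v in s]                 # overall gain (1 - z)(1 - 1/z) = 6
    c0, zk = c[0], z                         # causal initialisation (truncated)
    for k in range(1, n):
        c0 += zk * c[k]
        zk *= z
        if abs(zk) < 1e-17:
            break
    c[0] = c0
    for k in range(1, n):                    # causal pass
        c[k] += z * c[k - 1]
    c[n - 1] = (z / (z * z - 1.0)) * (c[n - 1] + z * c[n - 2])
    for k in range(n - 2, -1, -1):           # anticausal pass
        c[k] = z * (c[k + 1] - c[k])
    return c

def b3(x):
    """Centred cubic B-spline kernel (support [-2, 2], not interpolating)."""
    x = abs(x)
    if x < 1.0:
        return 2.0 / 3.0 - x * x + 0.5 * x ** 3
    if x < 2.0:
        return (2.0 - x) ** 3 / 6.0
    return 0.0

def bspline3_interp(c, t):
    """Evaluate the spline with coefficients c at interior position t."""
    i = int(math.floor(t))
    return sum(c[k] * b3(t - k) for k in range(i - 1, i + 3))
```

Without the prefilter, convolving the raw samples with b3 merely smooths them; with it, the spline passes through the samples, which is what makes the method interpolating at the stated extra implementation cost.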

  2. Multi-dimensional cubic interpolation for ICF hydrodynamics simulation

    International Nuclear Information System (INIS)

    Aoki, Takayuki; Yabe, Takashi.

    1991-04-01

    A new interpolation method is proposed to solve the multi-dimensional hyperbolic equations which appear in describing the hydrodynamics of inertial confinement fusion (ICF) implosion. The advection phase of the cubic-interpolated pseudo-particle (CIP) method is greatly improved by assuming continuity of the second and third spatial derivatives in addition to the physical value and the first derivative. These derivatives are derived from the given physical equation. To evaluate the new method, Zalesak's example is tested, and we successfully obtain good results. (author)

  3. Analysis of Spatial Interpolation in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2010-01-01

    are obtained using quadratic elements. It is shown that for more complex problems, the use of partially negative shape functions is inconsistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations...

  4. Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.

    Science.gov (United States)

    Sidek, Khairul Azami; Khalil, Ibrahim

    2013-01-01

    Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency have been proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying the interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved a higher matching percentage of up to 4% for CC, 3% for PRD and 94% for WDM, compared with the existing method using ECG recordings with lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improves the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, higher classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, verifies the study's claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
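
    The practical difference between the two interpolants compared here is that PCHIP limits the Hermite slopes (Fritsch-Carlson weighted harmonic mean) so that monotone stretches of the signal stay monotone, at the cost of C^2 continuity, whereas a cubic spline may overshoot near sharp QRS-like transitions. A pure-Python sketch of PCHIP's slope rule (illustrative, not the paper's pipeline; the data below are made up):

```python
from bisect import bisect_right

def pchip_slopes(x, y):
    """Fritsch-Carlson monotone slopes for piecewise cubic Hermite data."""
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    d = [(y[i + 1] - y[i]) / h[i] for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]               # simple one-sided end slopes
    for i in range(1, n - 1):
        if d[i - 1] * d[i] > 0.0:           # same sign: weighted harmonic mean
            w1 = 2.0 * h[i] + h[i - 1]
            w2 = h[i] + 2.0 * h[i - 1]
            m[i] = (w1 + w2) / (w1 / d[i - 1] + w2 / d[i])
        # else: local extremum or flat run -> slope stays 0
    return m

def hermite_eval(x, y, m, t):
    """Evaluate the cubic Hermite interpolant with slopes m at t."""
    i = min(max(bisect_right(x, t) - 1, 0), len(x) - 2)
    h = x[i + 1] - x[i]
    u = (t - x[i]) / h
    h00 = 2 * u ** 3 - 3 * u ** 2 + 1
    h10 = u ** 3 - 2 * u ** 2 + u
    h01 = -2 * u ** 3 + 3 * u ** 2
    h11 = u ** 3 - u ** 2
    return h00 * y[i] + h10 * h * m[i] + h01 * y[i + 1] + h11 * h * m[i + 1]

# Monotone data with a sharp rise: PCHIP stays monotone through it.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.0, 0.0, 1.0, 10.0, 10.0]
m = pchip_slopes(x, y)
curve = [hermite_eval(x, y, m, 4.0 * k / 100.0) for k in range(101)]
```

In SciPy terms these correspond to `PchipInterpolator` versus `CubicSpline`, the same pairing the study evaluates on upsampled ECG recordings.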

  5. Improvement of the Cubic Spline Function Sets for a Synthesis of the Axial Power Distribution of a Core Protection System

    International Nuclear Information System (INIS)

    Koo, Bon-Seung; Lee, Chung-Chan; Zee, Sung-Quun

    2006-01-01

    An online digital core protection system (SCOPS) for a system-integrated modular reactor is being developed as part of a plant protection system at KAERI. SCOPS calculates the minimum CHFR and maximum LPD based on several online-measured system parameters, including 3-level ex-core detector signals. In conventional ABB-CE digital power plants, the cubic spline synthesis technique has been used in the CPC for online calculation of the core axial power distributions from ex-core detector signals once every second. In the CPC, pre-determined cubic spline function sets are used depending on the characteristics of the ex-core detector responses. However, this method shows a non-negligible power distribution error for extremely skewed axial shapes because of its restrictive function sets. Therefore, this paper describes the cubic spline method for the synthesis of an axial power distribution and generates several new cubic spline function sets for application to the core protection system, especially for the severely distorted power shapes needed for this reactor type.

  6. A Study on the Improvement of Digital Periapical Images using Image Interpolation Methods

    International Nuclear Information System (INIS)

    Song, Nam Kyu; Koh, Kwang Joon

    1998-01-01

    Image resampling is of particular interest in digital radiology. When resampling an image to a new set of coordinates, blocking artifacts and image changes appear. To enhance image quality, interpolation algorithms have been used. Resampling is used to increase the number of points in an image to improve its appearance for display. The process of interpolation is fitting a continuous function to the discrete points in the digital image. The purpose of this study was to determine the effects of seven interpolation functions when resampling digital periapical images. The images were obtained by Digora, CDR and scanning of Ektaspeed Plus periapical radiograms on a dry skull and a human subject. The subjects were exposed to an intraoral X-ray machine at 60 kVp and 70 kVp with exposure times varying between 0.01 and 0.50 second. To determine which interpolation method would provide the better image, seven functions were compared: (1) nearest neighbor, (2) linear, (3) non-linear, (4) facet model, (5) cubic convolution, (6) cubic spline, (7) gray segment expansion. The resampled images were compared in terms of SNR (signal-to-noise ratio) and MTF (modulation transfer function) coefficient value. The results were as follows. 1. The highest SNR value (75.96 dB) was obtained with the cubic convolution method and the lowest SNR value (72.44 dB) with the facet model method among the seven interpolation methods. 2. There were significant differences of SNR values among CDR, Digora and film scan (P 0.05). 4. There were significant differences of MTF coefficient values between the linear interpolation method and the other six interpolation methods (P<0.05). 5. Computation was fastest with the nearest neighbor method and slowest with the non-linear method. 6. The better image was obtained with the cubic convolution, cubic spline and gray segment methods in ROC analysis. 7. The better sharpness of edge was obtained with the gray segment expansion method.

  7. Multivariate Hermite interpolation on scattered point sets using tensor-product expo-rational B-splines

    Science.gov (United States)

    Dechevsky, Lubomir T.; Bang, Børre; Lakså, Arne; Zanaty, Peter

    2011-12-01

    At the Seventh International Conference on Mathematical Methods for Curves and Surfaces, Tønsberg, Norway, in 2008, several new constructions for Hermite interpolation on scattered point sets in domains in R^n, n ∈ N, combined with smooth convex partitions of unity for several general types of partitions of these domains, were proposed in [1]. All of these constructions were based on a new type of B-spline, proposed by some of the authors several years earlier: expo-rational B-splines (ERBS) [3]. In the present communication we provide more details about one of these constructions: the one for the most general class of domain partitions considered. This construction is based on the use of two separate families of basis functions: one which has all the necessary Hermite interpolation properties, and another which has the necessary properties of a smooth convex partition of unity. The constructions of both of these bases are well known; the new part is the combined use of these bases to derive a new basis which enjoys all of the above-said interpolation and partition-of-unity properties simultaneously. In [1] the emphasis was put on the use of radial basis functions in the definitions of the two initial bases; here we put the main emphasis on the case when these bases consist of tensor-product B-splines. This selection provides two useful advantages: (A) it is easier to compute higher-order derivatives while working in Cartesian coordinates; (B) it becomes clear that this construction is a far-reaching extension of tensor-product constructions. We provide 3-dimensional visualization of the resulting bivariate bases, using tensor-product ERBS. In the main tensor-product variant, we also consider replacement of ERBS with simpler generalized ERBS (GERBS) [2], namely their simplified polynomial modifications: the Euler Beta-function B-splines (BFBS). One advantage of using BFBS instead of ERBS

  8. A modified linear algebraic approach to electron scattering using cubic splines

    International Nuclear Information System (INIS)

    Kinney, R.A.

    1986-01-01

    A modified linear algebraic approach to the solution of the Schrödinger equation for low-energy electron scattering is presented. The method uses a piecewise cubic-spline approximation of the wavefunction. Results in the static-potential and static-exchange approximations for e⁻ + H s-wave scattering are compared with unmodified linear algebraic and variational linear algebraic methods. (author)

  9. Designing interactively with elastic splines

    DEFF Research Database (Denmark)

    Brander, David; Bærentzen, Jakob Andreas; Fisker, Ann-Sofie

    2018-01-01

    We present an algorithm for designing interactively with C1 elastic splines. The idea is to design the elastic spline using a C1 cubic polynomial spline where each polynomial segment is so close to satisfying the Euler-Lagrange equation for elastic curves that the visual difference becomes negligible. Using a database of cubic Bézier curves we are able to interactively modify the cubic spline such that it remains visually close to an elastic spline.

  10. Explicit Gaussian quadrature rules for C^1 cubic splines with symmetrically stretched knot sequence

    KAUST Repository

    Ait-Haddou, Rachid; Barton, Michael; Calo, Victor M.

    2015-01-01

    We provide explicit expressions for quadrature rules on the space of C^1 cubic splines with non-uniform, symmetrically stretched knot sequences. The quadrature nodes and weights are derived via an explicit recursion that avoids an intervention

  11. The high-level error bound for shifted surface spline interpolation

    OpenAIRE

    Luh, Lin-Tian

    2006-01-01

    Radial function interpolation of scattered data is a frequently used method for multivariate data fitting. One of the most frequently used radial functions is called the shifted surface spline, introduced by Dyn, Levin and Rippa in [Dy1] for R². It was then extended to R^n for n ≥ 1. Many articles have studied its properties, as can be seen in [Bu, Du, Dy2, Po, Ri, Yo1, Yo2, Yo3, Yo4]. When dealing with this function, the most commonly used error bounds are the one raised by Wu and S...

  12. Research of Cubic Bezier Curve NC Interpolation Signal Generator

    Directory of Open Access Journals (Sweden)

    Shijun Ji

    2014-08-01

    Interpolation technology is the core of the computer numerical control (CNC) system, and the precision and stability of the interpolation algorithm directly affect the machining precision and speed of the CNC system. Most existing numerical control interpolation technology can only achieve circular arc, linear or parabola interpolation. For the NC machining of parts with complicated surfaces, however, one must establish a mathematical model, generate the curve and surface outline of the parts, and then discretize the generated outline into a large number of straight lines or arcs for processing, which creates complex programs with large amounts of code and inevitably introduces approximation error. All these factors affect the machining accuracy, surface roughness and machining efficiency. The stepless interpolation of cubic Bezier curves controlled by an analog signal is studied in this paper. The tool motion trajectory of a Bezier curve can be planned directly in the CNC system by adjusting control points, and these data can then be fed to the control motor to complete the precise feeding of the Bezier curve. This method extends CNC trajectory control capability from simple lines and circular arcs to complex project curves, and provides a new, economical way to machine curved-surface parts with high quality and high efficiency.
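
    A cubic Bezier segment is defined by four control points, and evaluating it by repeated linear interpolation (de Casteljau's algorithm) is the numerically stable way to generate the tool-path points such an interpolator must follow. A minimal sketch (generic technique, not the paper's analog-signal interpolator; the control points are made up):

```python
def decasteljau_cubic(p0, p1, p2, p3, t):
    """Point on the cubic Bezier curve at parameter t in [0, 1]."""
    lerp = lambda a, b, u: tuple(av + u * (bv - av) for av, bv in zip(a, b))
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

# A planned tool trajectory is this curve evaluated at many parameter values:
ctrl = ((0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0))
path = [decasteljau_cubic(*ctrl, k / 50.0) for k in range(51)]
```

Adjusting the control points reshapes the whole trajectory, which is the interactive planning step the abstract describes.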

  13. Numerical simulation of reaction-diffusion systems by modified cubic B-spline differential quadrature method

    International Nuclear Information System (INIS)

    Mittal, R.C.; Rohila, Rajni

    2016-01-01

    In this paper, we have applied a modified cubic-B-spline-based differential quadrature method to obtain numerical solutions of one-dimensional reaction-diffusion systems such as the linear reaction-diffusion system, the Brusselator system, the isothermal system and the Gray-Scott system. The models represented by these systems have important applications in different areas of science and engineering. The most striking part of the work is the solution patterns obtained for the Gray-Scott model, reminiscent of those often seen in nature. We use cubic B-spline functions for space discretization to obtain a system of ordinary differential equations, which is solved by the highly stable SSP-RK43 method to get the solution at the knots. The computed results are very accurate and shown to be better than those available in the literature. The method is easy and simple to apply and gives solutions with less computational effort.
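
    Cubic-B-spline-based differential quadrature rests on the values of the cubic B-spline and its derivatives at the knots: B takes the values (1/6, 4/6, 1/6) and B'' the values (1, -2, 1) at the three knots inside its support, which is what turns the spatial derivatives into banded weight matrices. A quick pure-Python check of those generic properties (not the paper's solver):

```python
def b3(x):
    """Centred uniform cubic B-spline."""
    x = abs(x)
    if x < 1.0:
        return 2.0 / 3.0 - x * x + 0.5 * x ** 3
    if x < 2.0:
        return (2.0 - x) ** 3 / 6.0
    return 0.0

def b3pp(x):
    """Second derivative of the centred cubic B-spline."""
    s = abs(x)
    if s < 1.0:
        return 3.0 * s - 2.0
    if s < 2.0:
        return 2.0 - s
    return 0.0
```

The partition-of-unity property of the shifted basis (the shifts of b3 sum to 1 everywhere) is also what makes the resulting semi-discretization consistent.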

  14. An innovation on high-grade CNC machines tools for B-spline curve method of high-speed interpolation arithmetic

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    A novel B-spline curve method of high-speed interpolation arithmetic for high-grade CNC machine tools is introduced. In existing high-grade CNC systems, handling the type-value points is troublesome and the control precision is not strong. To solve this problem, specific examples were simulated in MATLAB 7.0; the results showed that the interpolation error is significantly reduced and the control precision is improved markedly, satisfying the real-time requirements of high-speed, high-accuracy interpolation.

  15. Vibration Analysis of Rectangular Plates with One or More Guided Edges via Bicubic B-Spline Method

    Directory of Open Access Journals (Sweden)

    W.J. Si

    2005-01-01

    A simple and accurate method is proposed for the vibration analysis of rectangular plates with one or more guided edges, in which bicubic B-spline interpolation in combination with a new type of basis cubic B-spline function is used to approximate the plate deflection. This type of basis cubic B-spline function can satisfy simply supported, clamped, free, and guided edge conditions with easy numerical manipulation. The frequency characteristic equation is formulated based on classical thin plate theory by applying Hamilton's principle. The present solutions are verified against analytical ones. Fast convergence, high accuracy and computational efficiency are demonstrated through the comparisons. Frequency parameters for 13 cases of rectangular plates with at least one guided edge, which are tractable only by approximate or numerical methods, are presented. These results are new in the literature.

  16. Alignment of large image series using cubic B-splines tessellation: application to transmission electron microscopy data.

    Science.gov (United States)

    Dauguet, Julien; Bock, Davi; Reid, R Clay; Warfield, Simon K

    2007-01-01

    3D reconstruction from serial 2D microscopy images depends on non-linear alignment of serial sections. For some structures, such as the neuronal circuitry of the brain, very large images at very high resolution are necessary to permit reconstruction. These very large images prevent the direct use of classical registration methods. We propose in this work a method to deal with the non-linear alignment of arbitrarily large 2D images using the finite support properties of cubic B-splines. After initial affine alignment, each large image is split into a grid of smaller overlapping sub-images, which are individually registered using cubic B-splines transformations. Inside the overlapping regions between neighboring sub-images, the coefficients of the knots controlling the B-splines deformations are blended, to create a virtual large grid of knots for the whole image. The sub-images are resampled individually, using the new coefficients, and assembled together into a final large aligned image. We evaluated the method on a series of large transmission electron microscopy images and our results indicate significant improvements compared to both manual and affine alignment.

  17. Iteratively re-weighted bi-cubic spline representation of corneal topography and its comparison to the standard methods.

    Science.gov (United States)

    Zhu, Zhongxia; Janunts, Edgar; Eppig, Timo; Sauer, Tomas; Langenbucher, Achim

    2010-01-01

    The aim of this study is to represent the corneal anterior surface, using radius and height data extracted from a TMS-2N topographic system, with three different mathematical approaches, and to simulate the visual performance. An iteratively re-weighted bi-cubic spline method is introduced for the local representation of the corneal surface. For comparison, two standard global representation approaches are used: the general quadratic function and the higher-order Taylor polynomial approach. First, these methods were applied in simulations using three corneal models. Then, two real-eye examples were investigated: one eye with regular astigmatism, and one eye which had undergone refractive surgery. A ray-tracing program was developed to evaluate the imaging performance of these examples with each surface representation strategy at the best focus plane. A 6 mm pupil size was chosen for the simulation. The fitting error (deviation) of the presented methods was compared. It was found that the accuracy of the topography representation was worst using the quadratic function and best with the bi-cubic spline. The quadratic function cannot precisely describe the irregular corneal shape. In order to achieve sub-micron fitting precision, the order selection of the Taylor polynomial behaves adaptively with respect to the corneal shape, while the bi-cubic spline shows more stable performance. Considering the visual performance, the more precise the corneal representation is, the worse the visual performance is. The re-weighted bi-cubic spline method is a reasonable and stable method for representing the anterior corneal surface in measurements using a Placido-ring-pattern-based corneal topographer. Copyright © 2010. Published by Elsevier GmbH.

  18. Preprocessor with spline interpolation for converting stereolithography into cutter location source data

    Science.gov (United States)

    Nagata, Fusaomi; Okada, Yudai; Sakamoto, Tatsuhiko; Kusano, Takamasa; Habib, Maki K.; Watanabe, Keigo

    2017-06-01

The authors have previously developed an industrial machining robot system for foamed polystyrene materials. The robotic CAM system provides a simple and effective interface, without any robot language, between operators and the machining robot. In this paper, a preprocessor for generating Cutter Location Source data (CLS data) from Stereolithography data (STL data) is first proposed for robotic machining. The preprocessor enables the machining robot to be controlled directly from STL data, without any commercially provided CAM system. The STL format represents a curved surface geometry by triangular facets. The preprocessor allows machining robots to be driven along a zigzag or spiral path calculated directly from the STL data. A smart spline interpolation method is then proposed and implemented for smoothing coarse CLS data. The effectiveness and potential of the developed approaches are demonstrated through experiments on actual machining and interpolation.

  19. Cubic spline numerical solution of an ablation problem with convective backface cooling

    Science.gov (United States)

    Lin, S.; Wang, P.; Kahawita, R.

    1984-08-01

An implicit numerical technique using cubic splines is presented for solving an ablation problem on a thin wall with convective cooling. A non-uniform computational mesh with 6 grid points has been used for the numerical integration. The method has been found to be computationally efficient, providing, for the case under consideration, an overall error of about 1 percent. The results obtained indicate that convective cooling is an important factor in reducing the ablation thickness.

  20. Image edges detection through B-Spline filters

    International Nuclear Information System (INIS)

    Mastropiero, D.G.

    1997-01-01

B-Spline signal processing was used to detect the edges of a digital image. This technique processes the image in the Spline transform domain instead of the space domain (classical processing). Transformation to the Spline domain means finding the real coefficients that make it possible to interpolate the grey levels of the original image with a B-Spline polynomial. There are basically two ways of carrying out this interpolation, which give rise to two different Spline transforms: exact interpolation of the grey values (direct Spline transform) and approximate interpolation (smoothing Spline transform). The latter yields a smoother grey-level distribution function defined by the Spline transform coefficients, and is carried out with the aim of obtaining an edge detection algorithm with higher immunity to noise. Finally, the transformed image was processed to detect the edges of the original image (the gradient method was used), and the results of the three methods (classical, direct Spline transform and smoothing Spline transform) were compared. As expected, the smoothing Spline transform technique produced a detection algorithm more immune to external noise. The direct Spline transform technique, on the other hand, emphasizes the edges even more than the classical method. As far as computing time is concerned, the classical method is clearly the fastest, and may be applied whenever noise is not important and highly detailed edges are not required in the final image. (author). 9 refs., 17 figs., 1 tab

  1. A splitting algorithm for the wavelet transform of cubic splines on a nonuniform grid

    Science.gov (United States)

    Sulaimanov, Z. M.; Shumilov, B. M.

    2017-10-01

For cubic splines with nonuniform nodes, splitting with respect to the even and odd nodes is used to obtain a wavelet expansion algorithm in the form of the solution of a tridiagonal system of linear algebraic equations for the coefficients. Hand computations are used to investigate the application of this algorithm to numerical differentiation. The results are illustrated by solving a prediction problem.
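Tridiagonal systems of this kind (they also arise for the unknown first derivatives in the C2 rational spline record above) are solved in O(n) by the Thomas algorithm. A minimal numpy sketch with hypothetical coefficients, not the paper's actual wavelet system:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d with sub-diagonal a (a[0] unused),
    main diagonal b, and super-diagonal c (c[-1] unused)."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# A diagonally dominant system of the kind met in cubic spline fitting.
n = 6
sub = np.ones(n)
diag = 4.0 * np.ones(n)
sup = np.ones(n)
rhs = np.arange(1.0, n + 1)
sol = thomas(sub, diag, sup, rhs)
```

For spline coefficient systems the matrix is strictly diagonally dominant, so no pivoting is needed and the sweep is stable.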

  2. Least square fitting of low resolution gamma ray spectra with cubic B-spline basis functions

    International Nuclear Information System (INIS)

    Zhu Menghua; Liu Lianggang; Qi Dongxu; You Zhong; Xu Aoao

    2009-01-01

In this paper, a least squares fitting method with cubic B-spline basis functions is derived to reduce the influence of statistical fluctuations in gamma ray spectra. The derived procedure is simple and automatic. The results show that this method is better than the convolution method, with a sufficient reduction of statistical fluctuations. (authors)
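The same idea can be sketched with SciPy's least-squares B-spline fit on a synthetic photopeak; the data, noise level, and knot choices below are illustrative assumptions, not the authors' spectra:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 400)
truth = np.exp(-0.5 * ((x - 5.0) / 0.6) ** 2)      # synthetic photopeak
noisy = truth + rng.normal(0.0, 0.05, x.size)      # statistical fluctuations

# Least-squares fit on a cubic B-spline basis: far fewer knots than data
# points, so the fit smooths rather than interpolates the fluctuations.
knots = np.linspace(0.5, 9.5, 18)                  # interior knots
spl = LSQUnivariateSpline(x, noisy, knots, k=3)

rms_noise = np.sqrt(np.mean((noisy - truth) ** 2))
rms_fit = np.sqrt(np.mean((spl(x) - truth) ** 2))  # smaller than rms_noise
```

The knot density controls the trade-off: too many knots follow the noise, too few distort the peak shape.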

  3. Explicit Gaussian quadrature rules for C^1 cubic splines with symmetrically stretched knot sequence

    KAUST Repository

    Ait-Haddou, Rachid

    2015-06-19

We provide explicit expressions for quadrature rules on the space of C^1 cubic splines with non-uniform, symmetrically stretched knot sequences. The quadrature nodes and weights are derived via an explicit recursion that avoids the use of any numerical solver, and the rule is optimal, that is, it requires a minimal number of nodes. Numerical experiments validating the theoretical results and the error estimates of the quadrature rules are also presented.

  4. Pseudo-cubic thin-plate type Spline method for analyzing experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Crecy, F de

    1994-12-31

A mathematical tool using pseudo-cubic thin-plate-type splines has been developed for the analysis of experimental data points. The main purpose is to obtain, without any a priori model, a mathematical predictor with associated uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross-validation method. The residual standard deviation obtained is significantly smaller than that of a least squares regression. An example of use is given with critical heat flux data, showing a significant decrease of the design criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs.

  5. Pseudo-cubic thin-plate type Spline method for analyzing experimental data

    International Nuclear Information System (INIS)

    Crecy, F. de.

    1993-01-01

A mathematical tool using pseudo-cubic thin-plate-type splines has been developed for the analysis of experimental data points. The main purpose is to obtain, without any a priori model, a mathematical predictor with associated uncertainties, usable at any point in the multidimensional parameter space. The smoothing parameter is determined by a generalized cross-validation method. The residual standard deviation obtained is significantly smaller than that of a least squares regression. An example of use is given with critical heat flux data, showing a significant decrease of the design criterion (minimum allowable value of the DNB ratio). (author) 4 figs., 1 tab., 7 refs

  6. A Novel Approach of Cardiac Segmentation In CT Image Based On Spline Interpolation

    International Nuclear Information System (INIS)

    Gao Yuan; Ma Pengcheng

    2011-01-01

Organ segmentation in CT images is the basis of organ model reconstruction, so precisely detecting and extracting the organ boundary is key for reconstruction. In CT images the heart is often adjacent to the surrounding tissues and the gray-level gradient between them is slight, which makes classical segmentation methods difficult to apply. In this paper we propose a novel algorithm for cardiac segmentation in CT images which combines gray-gradient methods with B-spline interpolation. The algorithm detects the cardiac boundaries accurately and, being fully automatic, remains fast.

  7. Linear Methods for Image Interpolation

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
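Because the methods discussed are separable, their accuracy can be illustrated in 1D. A small numpy/SciPy comparison of linear versus cubic spline interpolation on an assumed synthetic signal (not an example from the paper):

```python
import numpy as np
from scipy.interpolate import CubicSpline

x_coarse = np.linspace(0.0, 2.0 * np.pi, 16)   # coarse sampling grid
x_fine = np.linspace(0.0, 2.0 * np.pi, 256)    # dense evaluation grid
y_coarse = np.sin(x_coarse)
y_true = np.sin(x_fine)

y_lin = np.interp(x_fine, x_coarse, y_coarse)    # 1D analogue of bilinear
y_cub = CubicSpline(x_coarse, y_coarse)(x_fine)  # 1D analogue of bicubic

err_lin = np.max(np.abs(y_lin - y_true))
err_cub = np.max(np.abs(y_cub - y_true))         # much smaller on smooth data
```

On smooth data the error drops from O(h^2) for linear to O(h^4) for the cubic spline; the N-dimensional separable versions apply the same kernels axis by axis.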

  8. Series-NonUniform Rational B-Spline (S-NURBS) model: a geometrical interpolation framework for chaotic data.

    Science.gov (United States)

    Shao, Chenxi; Liu, Qingqing; Wang, Tingting; Yin, Peifeng; Wang, Binghong

    2013-09-01

Time series are widely exploited to study the innate character of complex chaotic systems. Existing chaotic models suffer from limited accuracy because they either adopt an error-minimization strategy or terminate the modeling process at an acceptable error. Interpolation, by contrast, can solve differential equations with a small modeling error, but it is difficult to apply to arbitrary-dimensional series. In this paper, geometric theory is used to reduce the modeling error, and a high-precision framework called the Series-NonUniform Rational B-Spline (S-NURBS) model is developed to handle arbitrary-dimensional series. The capability of the interpolation framework is proved in the validation part, and its reliability is verified by interpolating the Musa dataset. The main improvement of the proposed framework is that the interpolation error can be reduced by properly adjusting the weight series step by step as more information becomes available. The experiments also demonstrate that studying a physical system from a geometric perspective is feasible.

  9. Linear Methods for Image Interpolation

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.

  10. The Norwegian Healthier Goats program--modeling lactation curves using a multilevel cubic spline regression model.

    Science.gov (United States)

    Nagel-Alne, G E; Krontveit, R; Bohlin, J; Valle, P S; Skjerve, E; Sølverød, L S

    2014-07-01

    In 2001, the Norwegian Goat Health Service initiated the Healthier Goats program (HG), with the aim of eradicating caprine arthritis encephalitis, caseous lymphadenitis, and Johne's disease (caprine paratuberculosis) in Norwegian goat herds. The aim of the present study was to explore how control and eradication of the above-mentioned diseases by enrolling in HG affected milk yield by comparison with herds not enrolled in HG. Lactation curves were modeled using a multilevel cubic spline regression model where farm, goat, and lactation were included as random effect parameters. The data material contained 135,446 registrations of daily milk yield from 28,829 lactations in 43 herds. The multilevel cubic spline regression model was applied to 4 categories of data: enrolled early, control early, enrolled late, and control late. For enrolled herds, the early and late notations refer to the situation before and after enrolling in HG; for nonenrolled herds (controls), they refer to development over time, independent of HG. Total milk yield increased in the enrolled herds after eradication: the total milk yields in the fourth lactation were 634.2 and 873.3 kg in enrolled early and enrolled late herds, respectively, and 613.2 and 701.4 kg in the control early and control late herds, respectively. Day of peak yield differed between enrolled and control herds. The day of peak yield came on d 6 of lactation for the control early category for parities 2, 3, and 4, indicating an inability of the goats to further increase their milk yield from the initial level. For enrolled herds, on the other hand, peak yield came between d 49 and 56, indicating a gradual increase in milk yield after kidding. Our results indicate that enrollment in the HG disease eradication program improved the milk yield of dairy goats considerably, and that the multilevel cubic spline regression was a suitable model for exploring effects of disease control and eradication on milk yield. Copyright © 2014

  11. Synthesis of freeform refractive surfaces forming various radiation patterns using interpolation

    Science.gov (United States)

    Voznesenskaya, Anna; Mazur, Iana; Krizskiy, Pavel

    2017-09-01

Optical freeform surfaces are very popular today in fields such as lighting systems, sensors, and photovoltaic concentrators. Such surfaces make it possible to obtain systems of a new quality with fewer optical components, ensuring attractive consumer characteristics: small size, low weight, and high optical transmittance. This article presents methods for synthesizing a refractive surface that forms a radiation pattern of a given shape for a given source, using computer simulation and cubic spline interpolation.
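A minimal sketch of the spline step, assuming a hypothetical sampled sag profile: SciPy's CubicSpline gives both the interpolated surface and its derivative, which is what a ray tracer needs for the surface normal at the refraction point:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical sag profile z(r) of a refractive surface, sampled coarsely;
# a parabola stands in for designed freeform data.
r = np.linspace(0.0, 3.0, 13)
z = 0.1 * r ** 2

surface = CubicSpline(r, z)      # default 'not-a-knot' end conditions
slope = surface.derivative()     # slope -> surface normal for refraction

z_q = surface(1.5)               # height at an off-node radius (0.225 here)
dz_q = slope(1.5)                # slope there (0.3 here)
```

With 'not-a-knot' end conditions the spline reproduces polynomial data up to degree three exactly, so the parabolic test profile is recovered without error.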

  12. Effect of interpolation on parameters extracted from seating interface pressure arrays.

    Science.gov (United States)

    Wininger, Michael; Crane, Barbara

    2014-01-01

Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, the effect of applying filtering and interpolation in tandem, and of the interpolation degree (interpolating to 2, 4, and 8 times the sampling density), was analyzed. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter before interpolating (strong effect); (2) use cubic rather than linear interpolation (slight effect); and (3) interpolation to 2, 4, or 8 times density makes a negligible difference. We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
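Recommendation (1) can be sketched with SciPy on a stand-in 16x16 array; the data and filter settings are hypothetical, not the study's:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

rng = np.random.default_rng(1)
pressure = rng.random((16, 16))              # stand-in 16x16 sensor array

# Recommendation (1): low-pass filter first, then interpolate.
filtered_first = zoom(gaussian_filter(pressure, sigma=1.0), 2, order=3)

# Reversed order for comparison: interpolate (bicubic, 2x), then filter.
interp_first = gaussian_filter(zoom(pressure, 2, order=3), sigma=1.0)

# Both give a 32x32 map, but the two pipelines do not commute:
# the same sigma acts at different spatial scales before and after zoom.
```

This is why the ordering matters for extracted features: the effective smoothing scale changes when the filter is applied on the upsampled grid.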

  13. Numerical solution of the Black-Scholes equation using cubic spline wavelets

    Science.gov (United States)

    Černá, Dana

    2016-12-01

The Black-Scholes equation is used in financial mathematics to compute market values of options at a given time. We use the θ-scheme for time discretization and an adaptive wavelet-based scheme for discretization on each time level. Advantages of the proposed method are a small number of degrees of freedom, high-order accuracy with respect to the variables representing prices, and a relatively small number of iterations needed to resolve the problem with the desired accuracy. We use several cubic spline wavelet and multi-wavelet bases and discuss their advantages and disadvantages. We also compare isotropic and anisotropic approaches. Numerical experiments are presented for the two-dimensional Black-Scholes equation.

  14. Plasma simulation with the Differential Algebraic Cubic Interpolated Propagation scheme

    Energy Technology Data Exchange (ETDEWEB)

    Utsumi, Takayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    A computer code based on the Differential Algebraic Cubic Interpolated Propagation scheme has been developed for the numerical solution of the Boltzmann equation for a one-dimensional plasma with immobile ions. The scheme advects the distribution function and its first derivatives in the phase space for one time step by using a numerical integration method for ordinary differential equations, and reconstructs the profile in phase space by using a cubic polynomial within a grid cell. The method gives stable and accurate results, and is efficient. It is successfully applied to a number of equations; the Vlasov equation, the Boltzmann equation with the Fokker-Planck or the Bhatnagar-Gross-Krook (BGK) collision term and the relativistic Vlasov equation. The method can be generalized in a straightforward way to treat cases such as problems with nonperiodic boundary conditions and higher dimensional problems. (author)

  15. Point based interactive image segmentation using multiquadrics splines

    Science.gov (United States)

    Meena, Sachin; Duraisamy, Prakash; Palniappan, Kannappan; Seetharaman, Guna

    2017-05-01

Multiquadrics (MQ) are radial basis spline functions that can provide efficient interpolation of data points located in a high-dimensional space. MQ were developed by Hardy to approximate geographical surfaces and for terrain modelling. In this paper we frame interactive image segmentation as semi-supervised interpolation, where an interpolating function learned from user-provided seed points predicts the labels of unlabeled pixels, and the spline function used in the semi-supervised interpolation is the MQ. This framework has a closed-form solution which, together with the fact that MQ is a radial basis spline function, leads to a very fast interactive image segmentation process. Quantitative and qualitative results on standard datasets show that MQ outperforms other regression-based methods (GEBS, Ridge Regression and Logistic Regression) and popular methods such as Graph Cut, Random Walk and Random Forest.
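A minimal numpy sketch of MQ interpolation of seed labels, with toy seed coordinates and a hypothetical shape parameter c (an assumption for illustration, not the authors' implementation):

```python
import numpy as np

def mq_kernel(p, q, c=0.5):
    """Hardy multiquadric sqrt(||p - q||^2 + c^2) for all point pairs."""
    diff = p[:, None, :] - q[None, :, :]
    return np.sqrt(np.sum(diff ** 2, axis=-1) + c ** 2)

# Toy seed pixels (x, y) labeled +1 (object) / -1 (background).
seeds = np.array([[1.0, 1.0], [2.0, 1.5], [6.0, 6.0], [7.0, 5.0]])
labels = np.array([1.0, 1.0, -1.0, -1.0])

# Closed-form solution: the weights come from one small linear system,
# which is why the interactive loop can stay fast.
w = np.linalg.solve(mq_kernel(seeds, seeds), labels)

def predict(points):
    """Continuous label score at unlabeled pixel coordinates."""
    return mq_kernel(points, seeds) @ w

# By construction the interpolant reproduces the seed labels exactly.
```

Unlabeled pixels receive a continuous score that is then thresholded into a segmentation label.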

  16. Spatial and temporal interpolation of satellite-based aerosol optical depth measurements over North America using B-splines

    Science.gov (United States)

    Pfister, Nicolas; O'Neill, Norman T.; Aube, Martin; Nguyen, Minh-Nghia; Bechamp-Laganiere, Xavier; Besnier, Albert; Corriveau, Louis; Gasse, Geremie; Levert, Etienne; Plante, Danick

    2005-08-01

Satellite-based measurements of aerosol optical depth (AOD) over land are obtained from an inversion procedure applied to dense dark vegetation pixels of remotely sensed images. The limited number of pixels over which the inversion procedure can be applied leaves many areas with little or no AOD data. Moreover, satellite coverage by sensors such as MODIS yields only daily images of a given region, with four sequential overpasses required to straddle mid-latitude North America. Ground-based AOD data from AERONET sun photometers are available on a more continuous basis, but only at approximately fifty locations throughout North America. The object of this work is to produce a complete and coherent mapping of AOD over North America with a spatial resolution of 0.1 degree and a frequency of three hours by interpolating MODIS satellite-based data together with available AERONET ground-based measurements. Before being interpolated, the MODIS AOD data extracted from different passes are synchronized to the mapping time using analyzed wind fields from the Global Multiscale Model (Meteorological Service of Canada), which amounts to a simplified, trajectory-based atmospheric dynamics correction. The spatial interpolation is performed using a weighted least squares method applied to bicubic B-spline functions defined on a rectangular grid. The least squares method enables the data to be weighted according to the measurement errors, while the B-spline properties of local support and C2 continuity offer a good approximation of AOD behaviour viewed as a function of time and space.

  17. Gamma Splines and Wavelets

    Directory of Open Access Journals (Sweden)

    Hannu Olkkonen

    2013-01-01

In this work we introduce a new family of splines, termed gamma splines, for continuous signal approximation and multiresolution analysis. The gamma splines are obtained by repeated convolution of the exponential function with itself. We study the properties of the discrete gamma splines in signal interpolation and approximation, prove that the gamma splines obey the two-scale equation based on the polyphase decomposition, and introduce the shift-invariant gamma spline wavelet transform for tree-structured subscale analysis of asymmetric signal waveforms and of systems with asymmetric impulse response. In particular, we consider applications in biomedical signal analysis (EEG, ECG, and EMG). Finally, we discuss the suitability of gamma spline signal processing in an embedded VLSI environment.

  18. Color management with a hammer: the B-spline fitter

    Science.gov (United States)

    Bell, Ian E.; Liu, Bonny H. P.

    2003-01-01

    To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.

  19. Cubic scaling algorithms for RPA correlation using interpolative separable density fitting

    Science.gov (United States)

    Lu, Jianfeng; Thicke, Kyle

    2017-12-01

    We present a new cubic scaling algorithm for the calculation of the RPA correlation energy. Our scheme splits up the dependence between the occupied and virtual orbitals in χ0 by use of Cauchy's integral formula. This introduces an additional integral to be carried out, for which we provide a geometrically convergent quadrature rule. Our scheme also uses the newly developed Interpolative Separable Density Fitting algorithm to further reduce the computational cost in a way analogous to that of the Resolution of Identity method.

  20. Evaluation of interpolation effects on upsampling and accuracy of cost functions-based optimized automatic image registration.

    Science.gov (United States)

    Mahmoudzadeh, Amir Pasha; Kashou, Nasser H

    2013-01-01

Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal to noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method.
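The MSE and peak signal-to-noise figures of merit used here are straightforward to compute; a generic numpy sketch, not tied to the study's MR data:

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference and a test image."""
    return float(np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2))

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB, for intensities in [0, peak]."""
    m = mse(ref, img)
    return float("inf") if m == 0.0 else 10.0 * np.log10(peak ** 2 / m)

ref = np.zeros((8, 8))
test = np.full((8, 8), 0.5)      # uniform 0.5 offset: MSE = 0.25
# psnr(ref, test) = 10 * log10(1 / 0.25) ≈ 6.02 dB
```

Higher PSNR after upsampling indicates that the interpolation kernel recovered the high-resolution data more faithfully.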

  1. Evaluation of Interpolation Effects on Upsampling and Accuracy of Cost Functions-Based Optimized Automatic Image Registration

    Directory of Open Access Journals (Sweden)

    Amir Pasha Mahmoudzadeh

    2013-01-01

Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal to noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method.

  2. A cubic B-spline Galerkin approach for the numerical simulation of the GEW equation

    Directory of Open Access Journals (Sweden)

    S. Battal Gazi Karakoç

    2016-02-01

The generalized equal width (GEW) wave equation is solved numerically by using a lumped Galerkin approach with cubic B-spline functions. The proposed numerical scheme is tested on two problems: a single solitary wave and the interaction of two solitary waves. In order to determine the performance of the algorithm, the error norms L2 and L∞ and the invariants I1, I2 and I3 are calculated. For the linear stability analysis of the numerical algorithm, the von Neumann approach is used. The obtained findings show that the presented numerical scheme is preferable to some recent numerical methods.

  3. B-splines and Faddeev equations

    International Nuclear Information System (INIS)

    Huizing, A.J.

    1990-01-01

Two numerical methods for solving the three-body equations describing relativistic pion-deuteron scattering have been investigated. For separable two-body interactions these equations form a set of coupled one-dimensional integral equations. They are plagued by singularities, which occur in the kernel of the integral equations as well as in the solution. The methods to solve these equations differ in the way they treat the singularities. The first is the Fuda-Stuivenberg method, whose basic idea is a one-time iteration of the set of integral equations to treat the logarithmic singularities. In the second method, the spline method, the unknown solution is approximated by cubic splines with cubic B-splines as basis. If the solution is approximated by a linear combination of basis functions, an integral equation can be transformed into a set of linear equations for the expansion coefficients, which is solved by standard means. Splines are determined by points called knots, so a proper choice of splines to approximate the solution amounts to a proper choice of the knots. The solution of the three-body scattering equations has a square-root behaviour at a certain point; hence it was investigated how the knots should be chosen to approximate the square-root function by cubic B-splines in an optimal way. Before applying this method to the three-body equations describing pion-deuteron scattering, an analytically solvable example was constructed with a singularity structure of both kernel and solution comparable to that of the three-body equations. The accuracy of the numerical solution was determined to a large extent by the accuracy of the approximation of the square-root part. The results for a pion laboratory energy of 47.4 MeV agree very well with those from the literature. In a complete calculation at 47.7 MeV the spline method turned out to be a factor of a thousand faster than the Fuda-Stuivenberg method.
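The knot-choice question can be sketched with SciPy: fit the square-root function with the same number of interior knots placed uniformly versus graded toward the singularity. The quadratic grading rule below is an illustrative assumption, not the thesis' optimal knot choice:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

x = np.linspace(0.0, 1.0, 2001)
y = np.sqrt(x)                        # square-root behaviour at x = 0

u = np.arange(1, 9) / 9.0             # 8 interior knots
placements = {"uniform": u, "graded": u ** 2}   # graded: clustered near 0

err = {}
for name, t in placements.items():
    spl = LSQUnivariateSpline(x, y, t, k=3)     # cubic B-spline LSQ fit
    err[name] = float(np.max(np.abs(spl(x) - y)))

# Clustering the knots toward the singularity reduces the maximum error
# for the same number of degrees of freedom.
```

The point mirrors the thesis' observation: overall accuracy is governed by how well the square-root part is resolved, which is a matter of knot placement rather than knot count.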

  4. ERRORS MEASUREMENT OF INTERPOLATION METHODS FOR GEOID MODELS: STUDY CASE IN THE BRAZILIAN REGION

    Directory of Open Access Journals (Sweden)

    Daniel Arana

The geoid is an equipotential surface regarded as the altimetric reference for geodetic surveys, and it therefore has several practical applications for engineers. In recent decades the geodetic community has concentrated efforts on the development of highly accurate geoid models through modern techniques. These models are supplied as regular grids from which users must interpolate, yet little information is available regarding the most appropriate interpolation method for extracting information from the regular grid of a geoid model. The use of an interpolator that does not represent the geoid surface appropriately can impair the quality of the geoid undulations and consequently of the height transformation. This work aims to quantify the magnitude of the error that comes from interpolating a regular mesh of geoid models. The analysis consisted of comparing the interpolation of the MAPGEO2015 program with three interpolation methods: bilinear, cubic spline, and radial basis function neural networks. As a result of the experiments, it was concluded that 2.5 cm of the 18 cm error of the MAPGEO2015 validation is caused by the use of interpolation on the 5'x5' grid.
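A sketch of the bilinear-versus-cubic-spline comparison on a regular grid, using SciPy's RectBivariateSpline; the surface below is a synthetic stand-in, not MAPGEO2015 data:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Stand-in "geoid undulation" surface sampled on a regular grid.
lat = np.linspace(-10.0, 10.0, 41)
lon = np.linspace(-10.0, 10.0, 41)
grid = np.sin(0.3 * lat)[:, None] * np.cos(0.3 * lon)[None, :]

bilinear = RectBivariateSpline(lat, lon, grid, kx=1, ky=1)
bicubic = RectBivariateSpline(lat, lon, grid, kx=3, ky=3)

# Query off-grid points and compare with the analytic surface.
qlat = np.linspace(-9.75, 9.75, 211)
qlon = np.linspace(-9.75, 9.75, 211)
truth = np.sin(0.3 * qlat) * np.cos(0.3 * qlon)
err_lin = float(np.max(np.abs(bilinear.ev(qlat, qlon) - truth)))
err_cub = float(np.max(np.abs(bicubic.ev(qlat, qlon) - truth)))
```

On a smooth surface the cubic tensor-product spline is markedly more accurate between grid nodes, which is the centimetre-level difference the study quantifies.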

  5. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.

    Science.gov (United States)

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G

    2016-06-01

This Letter presents a novel, computationally efficient interpolation method optimised for electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and these are utilised here as interpolation points. As an extension of linear interpolation, the algorithm segments the interpolation interval and utilises different piecewise linear equations; it thus produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV and 11.6 μV (mean), 7.8 μV and 8.9 μV (median), and 9.8 μV and 9.3 μV (standard deviation) per heartbeat, respectively.
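The anchor-point idea can be sketched with plain numpy, assuming synthetic 0.1 Hz wander and one anchor per simulated heartbeat; the authors' segmented piecewise-linear refinement is not reproduced here:

```python
import numpy as np

fs = 250.0                                      # assumed sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
drift = 0.5 * np.sin(2.0 * np.pi * 0.1 * t)     # 0.1 Hz baseline wander

# One isoelectric anchor per simulated heartbeat, ~1 s apart.
t_anchor = np.linspace(0.0, 10.0, 11)
b_anchor = 0.5 * np.sin(2.0 * np.pi * 0.1 * t_anchor)

baseline_est = np.interp(t, t_anchor, b_anchor)  # piecewise-linear estimate
residual = drift - baseline_est                  # what remains after removal

rms_before = float(np.sqrt(np.mean(drift ** 2)))
rms_after = float(np.sqrt(np.mean(residual ** 2)))
```

Because the wander is slow relative to the anchor spacing, even a piecewise-linear baseline estimate removes most of it, at a fraction of the cost of a cubic spline fit.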

  6. Comparison of CHF predictors obtained by the pseudo-cubic spline method from CEA's FLICA-3M and EDF's THYC computations

    International Nuclear Information System (INIS)

    Banner, D.; Crecy, F. de.

    1993-07-01

    Comparison of CHF predictors has been performed by using the same Critical Heat Flux (CHF) databases but by deriving local thermal-hydraulic conditions from different codes (THYC-V3 and FLICA-3M). Predictions have been obtained by the pseudo-cubic Spline method (PCSM). It is shown that the two codes yield similar results provided that equivalent turbulent mixing coefficients are chosen. (author), 6 figs., 5 refs

  7. Grade Distribution Modeling within the Bauxite Seams of the Wachangping Mine, China, Using a Multi-Step Interpolation Algorithm

    Directory of Open Access Journals (Sweden)

    Shaofeng Wang

    2017-05-01

    Full Text Available Mineral reserve estimation and mining design depend on a precise modeling of the mineralized deposit. A multi-step interpolation algorithm, including 1D biharmonic spline estimator for interpolating floor altitudes, 2D nearest neighbor, linear, natural neighbor, cubic, biharmonic spline, inverse distance weighted, simple kriging, and ordinary kriging interpolations for grade distribution on the two vertical sections at roadways, and 3D linear interpolation for grade distribution between sections, was proposed to build a 3D grade distribution model of the mineralized seam in a longwall mining panel with a U-shaped layout having two roadways at both sides. Compared to field data from exploratory boreholes, this multi-step interpolation using the natural neighbor method showed the best stability and the smallest difference between interpolated and field data. Using this method, 97,576 m3 of bauxite, in which the mass fraction of Al2O3 (Wa and the mass ratio of Al2O3 to SiO2 (Wa/s are 61.68% and 27.72, respectively, was delimited from the 189,260 m3 mineralized deposit in the 1102 longwall mining panel in the Wachangping mine, Southwest China. The mean absolute errors, the root mean squared errors and the relative standard deviations of errors between interpolated data and exploratory grade data at six boreholes are 2.544, 2.674, and 32.37% of Wa; and 1.761, 1.974, and 67.37% of Wa/s, respectively. The proposed method can be used for characterizing the grade distribution in a mineralized seam between two roadways at both sides of a longwall mining panel.
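One of the 2D estimators compared above, inverse distance weighting, can be sketched as follows. The borehole coordinates and grade values are illustrative, not the Wachangping data.

```python
def idw(points, values, q, power=2.0):
    """Inverse distance weighted estimate at query point q from scattered samples."""
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d2 = (px - q[0]) ** 2 + (py - q[1]) ** 2
        if d2 == 0.0:
            return v                      # query coincides with a sample
        w = d2 ** (-power / 2.0)          # weight = 1 / distance^power
        num += w * v
        den += w
    return num / den

# Illustrative Al2O3 mass fractions (%) at four hypothetical borehole locations
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
vals = [60.0, 62.0, 61.0, 63.0]
print(idw(pts, vals, (0.5, 0.5)))  # equidistant from all samples: plain mean 61.5
```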

  8. Estimation of missing values in solar radiation data using piecewise interpolation methods: Case study at Penang city

    International Nuclear Information System (INIS)

    Zainudin, Mohd Lutfi; Saaban, Azizan; Bakar, Mohd Nazari Abu

    2015-01-01

    Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records all the radiation values that have been dispersed, and these data are very useful for experimental work and solar device development. In addition, modeling and designing solar radiation system applications requires complete observational data. Unfortunately, failures to obtain complete solar radiation data frequently occur due to several technical problems, mainly attributable to the monitoring device. To counter this, missing values are estimated in an effort to substitute absent values with imputed data. This paper aims to evaluate several piecewise interpolation techniques, namely linear, spline, cubic, and nearest neighbor, for dealing with missing values in hourly solar radiation data. It then proposes, as extendable work, investigating the potential use of the cubic Bezier technique and the cubic Said-Ball method as estimator tools. The results show that the cubic Bezier and Said-Ball methods perform best compared with the other piecewise imputation techniques

  9. Estimation of missing values in solar radiation data using piecewise interpolation methods: Case study at Penang city

    Science.gov (United States)

    Zainudin, Mohd Lutfi; Saaban, Azizan; Bakar, Mohd Nazari Abu

    2015-12-01

    Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records all the radiation values that have been dispersed, and these data are very useful for experimental work and solar device development. In addition, modeling and designing solar radiation system applications requires complete observational data. Unfortunately, failures to obtain complete solar radiation data frequently occur due to several technical problems, mainly attributable to the monitoring device. To counter this, missing values are estimated in an effort to substitute absent values with imputed data. This paper aims to evaluate several piecewise interpolation techniques, namely linear, spline, cubic, and nearest neighbor, for dealing with missing values in hourly solar radiation data. It then proposes, as extendable work, investigating the potential use of the cubic Bezier technique and the cubic Said-Ball method as estimator tools. The results show that the cubic Bezier and Said-Ball methods perform best compared with the other piecewise imputation techniques.

  10. Estimation of missing values in solar radiation data using piecewise interpolation methods: Case study at Penang city

    Energy Technology Data Exchange (ETDEWEB)

    Zainudin, Mohd Lutfi, E-mail: mdlutfi07@gmail.com [School of Quantitative Sciences, UUMCAS, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia); Institut Matematik Kejuruteraan (IMK), Universiti Malaysia Perlis, 02600 Arau, Perlis (Malaysia); Saaban, Azizan, E-mail: azizan.s@uum.edu.my [School of Quantitative Sciences, UUMCAS, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia); Bakar, Mohd Nazari Abu, E-mail: mohdnazari@perlis.uitm.edu.my [Faculty of Applied Science, Universiti Teknologi Mara, 02600 Arau, Perlis (Malaysia)

    2015-12-11

    Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records all the radiation values that have been dispersed, and these data are very useful for experimental work and solar device development. In addition, modeling and designing solar radiation system applications requires complete observational data. Unfortunately, failures to obtain complete solar radiation data frequently occur due to several technical problems, mainly attributable to the monitoring device. To counter this, missing values are estimated in an effort to substitute absent values with imputed data. This paper aims to evaluate several piecewise interpolation techniques, namely linear, spline, cubic, and nearest neighbor, for dealing with missing values in hourly solar radiation data. It then proposes, as extendable work, investigating the potential use of the cubic Bezier technique and the cubic Said-Ball method as estimator tools. The results show that the cubic Bezier and Said-Ball methods perform best compared with the other piecewise imputation techniques.

  11. Fuzzy linguistic model for interpolation

    International Nuclear Information System (INIS)

    Abbasbandy, S.; Adabitabar Firozja, M.

    2007-01-01

    In this paper, a fuzzy method for interpolating smooth curves is presented. We present a novel approach to interpolating real data by applying the universal approximation method. In the proposed method, a fuzzy linguistic model (FLM) is applied as a universal approximator for any nonlinear continuous function. Finally, we give some numerical examples and compare the proposed method with the spline method

  12. Interpolation of fuzzy data | Khodaparast | Journal of Fundamental ...

    African Journals Online (AJOL)

    Considering the many applications of mathematical functions in different ways, it is essential to have a defining function. In this study, we used Fuzzy Lagrangian interpolation and natural fuzzy spline polynomials to interpolate the fuzzy data. In the current world and in the field of science and technology, interpolation issues ...

  13. Spline approximation, Part 1: Basic methodology

    Science.gov (United States)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
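A spline "constructed from truncated polynomials", as treated in Part 1, can be sketched with the cubic truncated power basis. The coefficients and knot below are a made-up example, not from the paper.

```python
def truncated_power_spline(coeffs, knots, x):
    """Evaluate a cubic spline in truncated power form:
    s(x) = a0 + a1*x + a2*x^2 + a3*x^3 + sum_k b_k * max(x - t_k, 0)^3,
    where coeffs = (a, b): a has 4 entries, b has one entry per interior knot t_k."""
    a, b = coeffs
    s = a[0] + a[1] * x + a[2] * x ** 2 + a[3] * x ** 3
    for bk, tk in zip(b, knots):
        s += bk * max(x - tk, 0.0) ** 3   # the truncated cubic term switches on at t_k
    return s

# A spline equal to x^3 left of the knot t = 1 and x^3 - 2*(x-1)^3 to the right
print(truncated_power_spline(([0, 0, 0, 1], [-2.0]), [1.0], 2.0))  # 8 - 2*1 = 6.0
```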

  14. Material-Point Method Analysis of Bending in Elastic Beams

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2007-01-01

    The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical...... cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...

  15. Material-point Method Analysis of Bending in Elastic Beams

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical...... cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...

  16. Digital elevation model production from scanned topographic contour maps via thin plate spline interpolation

    International Nuclear Information System (INIS)

    Soycan, Arzu; Soycan, Metin

    2009-01-01

    GIS (Geographical Information System) is one of the most striking innovations for mapping applications supplied to users by developing computer and software technology. GIS is a very effective tool which can visually show combinations of geographical and non-geographical data by recording them, allowing interpretation and analysis. A DEM (Digital Elevation Model) is an inalienable component of GIS. An existing TM (Topographic Map) can be used as the main data source for generating a DEM via a manual digitizing or vectorization process for the contour polylines. The aim of this study is to examine DEM accuracies obtained from TMs, depending on the number of sampling points and grid size. For these purposes, the contours of several 1/1000 scaled scanned topographical maps were vectorized. Different DEMs of the relevant area were created using several datasets with different numbers of sampling points. We focused on DEM creation from contour lines using gridding with RBF (Radial Basis Function) interpolation techniques, namely TPS (Thin Plate Spline) as the surface fitting model. The solution algorithm and a short review of the mathematical model of TPS interpolation techniques are given. In the test study, results of the application and the obtained accuracies are presented and discussed. The initial object of this research is to discuss the requirement of DEM in GIS, urban planning, surveying engineering and other applications with high accuracy (a few decimeters). (author)
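A minimal sketch of TPS interpolation of scattered elevations follows, using the classic r² log r kernel plus an affine part. The sample points and heights are invented for illustration.

```python
import numpy as np

def tps_fit(pts, vals):
    """Fit a 2D thin plate spline f(p) = sum_i w_i*phi(|p - p_i|) + c0 + c1*x + c2*y,
    phi(r) = r^2 * log(r), interpolating vals at pts (points must not be collinear)."""
    P = np.asarray(pts, float)
    n = len(P)
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    K = np.where(d > 0, d ** 2 * np.log(np.where(d > 0, d, 1.0)), 0.0)
    A = np.hstack([np.ones((n, 1)), P])          # affine (polynomial) part
    M = np.zeros((n + 3, n + 3))                 # bordered interpolation system
    M[:n, :n] = K
    M[:n, n:] = A
    M[n:, :n] = A.T
    rhs = np.concatenate([np.asarray(vals, float), np.zeros(3)])
    sol = np.linalg.solve(M, rhs)
    return P, sol[:n], sol[n:]

def tps_eval(model, q):
    P, w, c = model
    d = np.linalg.norm(P - np.asarray(q, float), axis=1)
    phi = np.where(d > 0, d ** 2 * np.log(np.where(d > 0, d, 1.0)), 0.0)
    return phi @ w + c[0] + c[1] * q[0] + c[2] * q[1]

# Invented elevations at four sample points; the TPS reproduces them exactly
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
vals = [1.0, 2.0, 3.0, 5.0]
model = tps_fit(pts, vals)
print(tps_eval(model, (1, 1)))  # reproduces the data value 5.0 (to roundoff)
```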

  17. Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases

    Science.gov (United States)

    Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Laksa˚, Arne; Bang, Børre

    2011-12-01

    Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) which are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable by very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and versus the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we have used Maple 12.

  18. SU-F-T-315: Comparative Studies of Planar Dose with Different Spatial Resolution for Head and Neck IMRT QA

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, T; Koo, T [Hallym University Medical Center, Chuncheon, Gangwon (Korea, Republic of)

    2016-06-15

    Purpose: To quantitatively investigate the planar dose difference and the γ value between the reference fluence map with 1 mm detector-to-detector distance and fluence maps with lower spatial resolution for head and neck intensity modulated radiation therapy (IMRT). Methods: For ten head and neck cancer patients, the IMRT quality assurance (QA) beams were generated by the commercial radiation treatment planning system Pinnacle3 (ver. 8.0.d, Philips Medical System, Madison, WI). For each beam, ten fluence maps (detector-to-detector distance: 1 mm to 10 mm in 1 mm steps) were generated. The fluence maps with larger than 1 mm detector-to-detector distance were interpolated using MATLAB (R2014a, The MathWorks, Natick, MA) by four different interpolation methods: bilinear, cubic spline, bicubic, and nearest neighbor interpolation. These interpolated fluence maps were compared with the reference one using the γ value (criteria: 3%, 3 mm) and the relative dose difference. Results: As the detector-to-detector distance increases, the dose difference between the two maps increases. For a fluence map of the same resolution, the cubic spline and bicubic interpolations are almost equally the best interpolation methods, while the nearest neighbor interpolation is the worst. For example, for 5 mm distance fluence maps, γ≤1 rates are 98.12±2.28%, 99.48±0.66%, 99.45±0.65% and 82.23±0.48% for the bilinear, cubic spline, bicubic, and nearest neighbor interpolation, respectively. For 7 mm distance fluence maps, γ≤1 rates are 90.87±5.91%, 90.22±6.95%, 91.79±5.97% and 71.93±4.92% for the bilinear, cubic spline, bicubic, and nearest neighbor interpolation, respectively. Conclusion: We recommend that a 2-dimensional detector array with high spatial resolution should be used as an IMRT QA tool and that the measured fluence maps should be interpolated using the cubic spline interpolation or the

  19. Estimation of total cross-sections for neutron and proton formation in interactions between deuterons and 7Li nuclei

    International Nuclear Information System (INIS)

    Zbenigorodskiy, A.G.; Guzhovskij, B.Ya.; Abramovich, S.N.; Zherebtsov, V.A.; Pelipenko, O.A.

    1988-12-01

    Cubic spline approximation curves were obtained from experimental data. A brief description is given of an evaluation method using spline-functions with due regard for systematic and accidental errors. A method is suggested for representing the curve thus obtained in the form of a table of cubic spline coefficients which are suitable for interpolation calculations. (author). 2 figs, 2 tabs

  20. Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs

    Science.gov (United States)

    Howell, Lauren R.; Allen, B. Danette

    2016-01-01

    A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions in which humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of Three-Dimensional Euclidean Space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
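Once control points are chosen, a Bézier segment can be evaluated by De Casteljau's algorithm of repeated linear interpolation. The control points below are hypothetical, not the paper's.

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeatedly
    linearly interpolating between consecutive control points."""
    pts = [list(p) for p in ctrl]
    while len(pts) > 1:
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical waypoint-shaping control points for one smooth cubic segment
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(de_casteljau(ctrl, 0.5))  # midpoint of the cubic segment: [2.0, 1.5]
```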

  1. BIMOND3, Monotone Bivariate Interpolation

    International Nuclear Information System (INIS)

    Fritsch, F.N.; Carlson, R.E.

    2001-01-01

    1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which preserves the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data
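The 1D building block of such schemes, monotone piecewise cubic Hermite interpolation with Fritsch-Carlson style slope limiting, can be sketched as follows (a simplified illustration, not BIMOND's bivariate algorithm):

```python
def pchip_slopes(x, y):
    """Monotonicity-preserving slopes: harmonic mean of adjacent secants when
    they share a sign, zero otherwise (so flat data stays flat)."""
    n = len(x)
    delta = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(n - 1)]
    m = [0.0] * n
    for i in range(1, n - 1):
        if delta[i - 1] * delta[i] > 0:
            m[i] = 2 * delta[i - 1] * delta[i] / (delta[i - 1] + delta[i])
    m[0], m[-1] = delta[0], delta[-1]     # simple one-sided end slopes
    return m

def hermite_eval(x, y, m, xq):
    """Evaluate the piecewise cubic Hermite interpolant with slopes m at xq."""
    i = max(k for k in range(len(x) - 1) if x[k] <= xq)
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t * (1 - t) ** 2
    h01 = t ** 2 * (3 - 2 * t)
    h11 = t ** 2 * (t - 1)
    return h00 * y[i] + h10 * h * m[i] + h01 * y[i + 1] + h11 * h * m[i + 1]

x = [0.0, 1.0, 2.0, 3.0]
y = [0.0, 1.0, 1.0, 2.0]                  # monotone data with a flat segment
m = pchip_slopes(x, y)
print(hermite_eval(x, y, m, 1.5))         # stays at 1.0 on the flat segment, no overshoot
```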

  2. A new class of interpolatory $L$-splines with adjoint end conditions

    OpenAIRE

    Bejancu, Aurelian; Al-Sahli, Reyouf S.

    2014-01-01

    A thin plate spline surface for interpolation of smooth transfinite data prescribed along concentric circles was recently proposed by Bejancu, using Kounchev's polyspline method. The construction of the new `Beppo Levi polyspline' surface reduces, via separation of variables, to that of a countable family of univariate $L$-splines, indexed by the frequency integer $k$. This paper establishes the existence, uniqueness and variational properties of the `Beppo Levi $L$-spline' schemes correspond...

  3. Analytic regularization of uniform cubic B-spline deformation fields.

    Science.gov (United States)

    Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C

    2012-01-01

    Image registration is inherently ill-posed, and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is the penalty of thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate the analytic solution is between 61-1371x faster than a numerical central differencing solution.

  4. Analysis of velocity planning interpolation algorithm based on NURBS curve

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    To reduce the interpolation time and maximum interpolation error caused by velocity planning in NURBS (Non-Uniform Rational B-Spline) interpolation, this paper proposes a velocity-planning interpolation algorithm based on the NURBS curve. First, a second-order Taylor expansion is applied to the numerator of the parametric NURBS curve representation. The velocity-planning algorithm is then combined with NURBS curve interpolation. Finally, simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
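The second-order Taylor parameter update mentioned above has a standard form for a constant feed rate along a parametric curve; a sketch for a generic curve (not a full NURBS evaluator) under that assumption:

```python
import math

def taylor2_step(u, v, dt, dC, d2C):
    """One second-order Taylor step of the curve parameter for constant feed rate v:
    u+ = u + v*dt/|C'(u)| - (v*dt)^2 * (C'(u).C''(u)) / (2*|C'(u)|^4)."""
    c1, c2 = dC(u), d2C(u)
    n1 = math.hypot(*c1)
    dot = c1[0] * c2[0] + c1[1] * c2[1]
    return u + v * dt / n1 - (v * dt) ** 2 * dot / (2 * n1 ** 4)

# Unit circle: |C'| = 1 and C'.C'' = 0, so the step advances u by exactly v*dt
dC = lambda u: (-math.sin(u), math.cos(u))
d2C = lambda u: (-math.cos(u), -math.sin(u))
print(taylor2_step(0.3, 0.1, 0.01, dC, d2C))  # 0.3 + 0.001
```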

  5. A Meshfree Quasi-Interpolation Method for Solving Burgers’ Equation

    Directory of Open Access Journals (Sweden)

    Mingzhu Li

    2014-01-01

    Full Text Available The main aim of this work is to consider a meshfree algorithm for solving Burgers’ equation with the quartic B-spline quasi-interpolation. Quasi-interpolation is very useful in the study of approximation theory and its applications, since it can yield solutions directly without the need to solve any linear system of equations and overcome the ill-conditioning problem resulting from using the B-spline as a global interpolant. The numerical scheme is presented, by using the derivative of the quasi-interpolation to approximate the spatial derivative of the dependent variable and a low order forward difference to approximate the time derivative of the dependent variable. Compared to other numerical methods, the main advantages of our scheme are higher accuracy and lower computational complexity. Meanwhile, the algorithm is very simple and easy to implement and the numerical experiments show that it is feasible and valid.

  6. Space cutter compensation method for five-axis nonuniform rational basis spline machining

    Directory of Open Access Journals (Sweden)

    Yanyu Ding

    2015-07-01

    Full Text Available In view of the good machining performance of traditional three-axis nonuniform rational basis spline interpolation and the space cutter compensation issue in multi-axis machining, this article presents a triple nonuniform rational basis spline five-axis interpolation method, which uses three nonuniform rational basis spline curves to describe cutter center location, cutter axis vector, and cutter contact point trajectory, respectively. The relative position of the cutter and workpiece is calculated under the workpiece coordinate system, and the cutter machining trajectory can be described precisely and smoothly using this method. The three nonuniform rational basis spline curves are transformed into a 12-dimentional Bézier curve to carry out discretization during the discrete process. With the cutter contact point trajectory as the precision control condition, the discretization is fast. As for different cutters and corners, the complete description method of space cutter compensation vector is presented in this article. Finally, the five-axis nonuniform rational basis spline machining method is further verified in a two-turntable five-axis machine.

  7. Usando splines cúbicas na modelagem matemática da evolução populacional de Pirapora/MG

    Directory of Open Access Journals (Sweden)

    José Sérgio Domingues

    2014-08-01

    Full Text Available The objective of this work is to obtain a mathematical model for the population evolution of the city of Pirapora/MG, based only on census and population-count data from the Brazilian Institute of Geography and Statistics (IBGE). Cubic spline interpolation is used for this purpose, since linear and polynomial interpolation techniques, and also the logistic model, do not fit this population well. The analyzed data are not equidistant, so the sample uses years separated by a step h of 10 years. The values initially discarded, together with the population estimates for this municipality published by the João Pinheiro Foundation, served to validate the constructed model and to estimate the percentage prediction differences, which did not exceed 2.21%. Assuming that the pattern of population evolution observed from 2000 to 2010 holds until 2020, the city's populations from 2011 to 2020 are estimated, with a mean percentage difference of only 0.49%. It is concluded that the model fits the data very well and that population estimates for any year between 1970 and 2020 are reliable. Moreover, the model provides a practical illustration of an application of this technique to population modeling and can therefore also be used for didactic purposes. Keywords: Cubic splines. Interpolation. Mathematical modeling. Population evolution. Pirapora. (English title: Using cubic splines on mathematical modeling of the population evolution of Pirapora/MG)
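Interpolating census samples with a cubic spline reduces to a tridiagonal linear system for the second derivatives at the knots. A sketch with natural end conditions follows; the population numbers are invented, not Pirapora's IBGE data.

```python
import numpy as np

def natural_cubic_second_derivs(x, y):
    """Solve the tridiagonal system for the second derivatives M_i of a
    natural cubic spline (M_0 = M_n = 0) on knots x with values y."""
    n = len(x) - 1
    h = np.diff(np.asarray(x, float))
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0               # natural end conditions: M = 0
    for i in range(1, n):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    return np.linalg.solve(A, rhs)

def spline_eval(x, y, M, xq):
    """Evaluate the natural cubic spline with second derivatives M at xq."""
    i = max(k for k in range(len(x) - 1) if x[k] <= xq)
    h = x[i + 1] - x[i]
    a, b = x[i + 1] - xq, xq - x[i]
    return ((M[i] * a ** 3 + M[i + 1] * b ** 3) / (6 * h)
            + (y[i] - M[i] * h * h / 6) * a / h
            + (y[i + 1] - M[i + 1] * h * h / 6) * b / h)

# Census-style decadal samples (year vs population, illustrative numbers only)
x = [1970, 1980, 1990, 2000, 2010]
y = [20000, 28000, 35000, 45000, 53000]
M = natural_cubic_second_derivs(x, y)
print(round(spline_eval(x, y, M, 1995)))  # smooth mid-decade estimate
```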

  8. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    Science.gov (United States)

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting the corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which will influence the registration results. Localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, used an automatic method to extract the landmarks. Combining these two steps, we have proposed an automatic, exact and robust registration method and have obtained satisfactory registration results.

  9. ORBXYZ: a 3D single-particle orbit code for following charged-particle trajectories in equilibrium magnetic fields

    International Nuclear Information System (INIS)

    Anderson, D.V.; Cohen, R.H.; Ferguson, J.R.; Johnston, B.M.; Sharp, C.B.; Willmann, P.A.

    1981-01-01

    The single particle orbit code, TIBRO, has been modified extensively to improve the interpolation methods used and to allow use of vector potential fields in the simulation of charged particle orbits on a 3D domain. A 3D cubic B-spline algorithm is used to generate spline coefficients used in the interpolation. Smooth and accurate field representations are obtained. When vector potential fields are used, the 3D cubic spline interpolation formula analytically generates the magnetic field used to push the particles. This field has ∇·B = 0 to computer roundoff. When magnetic induction is used the interpolation allows ∇·B ≠ 0, which can lead to significant nonphysical results. Presently the code assumes quadrupole symmetry, but this is not an essential feature of the code and could be easily removed for other applications. Many details pertaining to this code are given on microfiche accompanying this report

  10. Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel

    DEFF Research Database (Denmark)

    Pai, Akshay Sadananda Uppinakudru; Sommer, Stefan Horst; Sørensen, Lauge

    by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only...... approximative because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant - the Wendland kernel, which has the same computational simplicity as B-splines. An application on the Alzheimer's disease neuroimaging initiative showed...... that Wendland SVF based measures separate (Alzheimer's disease vs normal controls) better than both B-Spline SVFs (p

  11. Landmark-based elastic registration using approximating thin-plate splines.

    Science.gov (United States)

    Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H

    2001-06-01

    We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and allows to take into account landmark localization errors. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach which is based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.

  12. Higher order multipoles and splines in plasma simulations

    International Nuclear Information System (INIS)

    Okuda, H.; Cheng, C.Z.

    1978-01-01

    The reduction of spatial grid effects in plasma simulations has been studied numerically using higher order multipole expansions and the spline method in one dimension. It is found that, while keeping the higher order moments such as quadrupole and octopole moments substantially reduces the grid effects, quadratic and cubic splines in general have better stability properties for numerical plasma simulations when the Debye length is much smaller than the grid size. In particular the spline method may be useful in three-dimensional simulations for plasma confinement where the grid size in the axial direction is much greater than the Debye length. (Auth.)

  13. Higher-order multipoles and splines in plasma simulations

    International Nuclear Information System (INIS)

    Okuda, H.; Cheng, C.Z.

    1977-12-01

    Reduction of spatial grid effects in plasma simulations has been studied numerically using higher order multipole expansions and the spline method in one dimension. It is found that, while keeping the higher order moments such as quadrupole and octopole moments substantially reduces the grid effects, quadratic and cubic splines in general have better stability properties for numerical plasma simulations when the Debye length is much smaller than the grid size. In particular, the spline method may be useful in three-dimensional simulations for plasma confinement where the grid size in the axial direction is much greater than the Debye length
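The quadratic-spline charge-assignment weights used in such simulations (commonly known as the triangular-shaped-cloud scheme) spread each particle over its three nearest grid points; a minimal sketch:

```python
def quadratic_spline_weights(delta):
    """Quadratic B-spline charge-assignment weights for a particle offset
    delta in [-0.5, 0.5) from the nearest grid point; the three weights
    cover the left, centre, and right grid points and sum to 1."""
    return (0.5 * (0.5 - delta) ** 2,
            0.75 - delta ** 2,
            0.5 * (0.5 + delta) ** 2)

w = quadratic_spline_weights(0.2)
print(w, sum(w))  # weights over the 3 nearest grid points; they sum to 1
```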

  14. Creating high-resolution digital elevation model using thin plate spline interpolation and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Pohjola, J.; Turunen, J.; Lipping, T.

    2009-07-01

    In this report, the creation of a digital elevation model of the Olkiluoto area, incorporating a large area of seabed, is described. The modeled area covers 960 square kilometers, and the apparent resolution of the created elevation model was specified to be 2.5 x 2.5 meters. Various elevation data, such as contour lines and irregular elevation measurements, were used as source data; the precision and reliability of the available source data varied widely. A digital elevation model (DEM) is a representation, in digital format, of the elevation of the surface of the earth in a particular area. The DEM is an essential component of geographic information systems designed for the analysis and visualization of location-related data, and is most often represented either in raster or Triangulated Irregular Network (TIN) format. After testing several methods, thin plate spline interpolation was found to be best suited for the creation of the elevation model: it gave the smallest error in a test where a certain number of points was removed from the data, and the resulting model looked most natural. In addition to the elevation data, a confidence interval at each point of the new model was required, and the Monte Carlo simulation method was selected for this purpose. The source data points were assigned probability distributions according to what was known about their measurement procedure, and from these distributions 1,000 values (20,000 in the first version) were drawn for each data point; each point of the newly created DEM thus had as many realizations. The resulting high-resolution DEM will be used in modeling the effects of land uplift and the evolution of the landscape over a time range of 10,000 years from the present. This time range follows from the requirements set for the spent nuclear fuel repository site. (orig.)
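
    The thin-plate-spline-plus-Monte-Carlo workflow described above can be sketched as follows (a toy stand-in for the Olkiluoto data: the point locations, elevations, and error model are invented, and SciPy's RBFInterpolator supplies the thin plate spline):

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 1, size=(40, 2))                 # survey point locations
    elev = np.sin(3 * pts[:, 0]) + np.cos(2 * pts[:, 1])  # "measured" elevations
    sigma = np.full(len(pts), 0.05)                       # assumed 1-sigma measurement error

    # grid on which the DEM is evaluated
    gx, gy = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21))
    grid = np.column_stack([gx.ravel(), gy.ravel()])

    draws = []
    for _ in range(100):                                  # the report used 1,000 draws
        noisy = elev + rng.normal(0, sigma)               # one realization of the source data
        tps = RBFInterpolator(pts, noisy, kernel='thin_plate_spline')
        draws.append(tps(grid))
    draws = np.array(draws)

    # 95% confidence band at each grid node, from the ensemble of realizations
    lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
    ```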

  15. Effect of interpolation on parameters extracted from seating interface pressure arrays

    OpenAIRE

    Michael Wininger, PhD; Barbara Crane, PhD, PT

    2015-01-01

    Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pre...

  16. Combined visualization for noise mapping of industrial facilities based on ray-tracing and thin plate splines

    Science.gov (United States)

    Ovsiannikov, Mikhail; Ovsiannikov, Sergei

    2017-01-01

    The paper presents a combined approach to noise mapping and visualization of sound pollution at industrial facilities using the forward ray-tracing method and thin-plate spline interpolation. It is suggested to cluster the industrial area into separate zones with similar sound levels. An equivalent local source is defined for range computation of sanitary zones based on a ray-tracing algorithm. Computation of sound pressure levels within the clustered zones is based on two-dimensional spline interpolation of data measured on the perimeter and inside each zone.

  17. Deconvolution using thin-plate splines

    International Nuclear Information System (INIS)

    Toussaint, Udo v.; Gori, Silvio

    2007-01-01

    The ubiquitous problem of estimating 2-dimensional profile information from a set of line-integrated measurements is tackled with Bayesian probability theory by exploiting prior information about local smoothness. For this purpose thin-plate splines (the 2-D minimal-curvature analogue of cubic splines in 1-D) are employed. The optimal number of support points required for inversion of 2-D tomographic problems is determined using model comparison. Properties of this approach are discussed and the question of suitable priors is addressed. Finally, we illustrate the properties of this approach with 2-D inversion results using data from line-integrated measurements from fusion experiments

  18. Nonlinear Analysis for the Crack Control of SMA Smart Concrete Beam Based on a Bidirectional B-Spline QR Method

    Directory of Open Access Journals (Sweden)

    Yan Li

    2018-01-01

    Full Text Available A bidirectional B-spline QR method (BB-sQRM) for the study of crack control in a reinforced concrete (RC) beam embedded with shape memory alloy (SMA) wires is presented. In the proposed method, the discretization is performed with a set of spline nodes in two directions of the plane model, and structural displacement fields are constructed by the linear combination of the products of cubic B-spline interpolation functions. To derive the elastoplastic stiffness equation of the RC beam, an explicit form is utilized to express the elastoplastic constitutive law of concrete materials. The proposed model is compared with the ANSYS model in several numerical examples. The results not only show that the solutions given by the BB-sQRM are very close to those given by the finite element method (FEM) but also prove the high efficiency and low computational cost of the BB-sQRM. Meanwhile, five parameters (depth-span ratio, thickness of concrete cover, reinforcement ratio, prestrain, and eccentricity of SMA wires) are investigated to determine their effects on crack control. The results show that the depth-span ratio of RC beams and the prestrain and eccentricity of SMA wires have a significant influence on the control performance of beam cracks.

  19. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    International Nuclear Information System (INIS)

    Hardy, David J.; Schulten, Klaus; Wolff, Matthew A.; Skeel, Robert D.; Xia, Jianlin

    2016-01-01

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle–mesh Ewald method falls short.

  20. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    Science.gov (United States)

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

    The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effects of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis functions: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram models: spherical, exponential, Gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for
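
    A minimal leave-one-out comparison of IDW against an RBF spline, in the spirit of the study, might look like this (synthetic sample sites and concentrations, not the paper's data or validation protocol; the `idw` helper is an illustrative implementation):

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def idw(xy_known, z_known, xy_query, power=2):
        """Inverse distance weighting prediction at query points."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                  # avoid division by zero at sample sites
        w = 1.0 / d**power
        return (w @ z_known) / w.sum(axis=1)

    rng = np.random.default_rng(2)
    xy = rng.uniform(0, 1, size=(60, 2))          # synthetic sample sites
    z = np.exp(-((xy[:, 0] - 0.5)**2 + (xy[:, 1] - 0.5)**2) / 0.1)  # synthetic concentration

    # leave-one-out cross-validation errors for IDW (power=2) and an RBF spline
    errs_idw, errs_rbf = [], []
    for i in range(len(xy)):
        mask = np.arange(len(xy)) != i
        errs_idw.append(idw(xy[mask], z[mask], xy[i:i + 1])[0] - z[i])
        rbf = RBFInterpolator(xy[mask], z[mask], kernel='thin_plate_spline')
        errs_rbf.append(rbf(xy[i:i + 1])[0] - z[i])

    rmse_idw = float(np.sqrt(np.mean(np.square(errs_idw))))
    rmse_rbf = float(np.sqrt(np.mean(np.square(errs_rbf))))
    ```

    The same loop extends to the other kernels and to kriging (e.g., via an external geostatistics package) to reproduce a comparison of this kind.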

  1. Fast compact algorithms and software for spline smoothing

    CERN Document Server

    Weinert, Howard L

    2012-01-01

    Fast Compact Algorithms and Software for Spline Smoothing investigates algorithmic alternatives for computing cubic smoothing splines when the amount of smoothing is determined automatically by minimizing the generalized cross-validation score. These algorithms are based on Cholesky factorization, QR factorization, or the fast Fourier transform. All algorithms are implemented in MATLAB and are compared based on speed, memory use, and accuracy. An overall best algorithm is identified, which allows very large data sets to be processed quickly on a personal computer.
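
    For a flavor of GCV-driven smoothing (not the book's MATLAB implementations), recent SciPy versions provide make_smoothing_spline, which, per its documentation, chooses the smoothing parameter by generalized cross-validation when lam is None:

    ```python
    import numpy as np
    from scipy.interpolate import make_smoothing_spline

    rng = np.random.default_rng(3)
    x = np.linspace(0, 2 * np.pi, 200)
    y = np.sin(x) + rng.normal(0, 0.2, size=x.size)   # noisy samples of sin

    # lam=None lets SciPy pick the amount of smoothing automatically by
    # minimizing the generalized cross-validation score
    spl = make_smoothing_spline(x, y, lam=None)

    resid = spl(x) - np.sin(x)
    print(float(np.abs(resid).max()))
    ```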

  2. Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating

    Science.gov (United States)

    Chen, Liangji; Guo, Guangsong; Li, Huiying

    2017-07-01

    NURBS (Non-Uniform Rational B-Spline) curves are widely used in CAD/CAM (Computer-Aided Design/Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in our developing CNC (Computer Numerical Control) system. First, we use two NURBS curves to represent the tool-tip and tool-axis paths, respectively. According to the feedrate and a Taylor series expansion, servo-controlling signals of the 5 axes are obtained for each interpolating cycle. Then, the generation procedure of NC (Numerical Control) code with the presented method is introduced, along with the way the interpolator is integrated into our developing CNC system. The servo-controlling structure of the CNC system is also introduced. The illustration indicates that the proposed method can enhance the machining accuracy and that the spline interpolator is feasible for a 5-axis CNC system.
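
    The record concerns a full 5-axis interpolator, but the core fact that rational B-splines represent sculptured geometry exactly can be shown in a few lines: a quadratic NURBS curve with the standard control points and weights reproduces a circular arc, evaluated here in homogeneous coordinates with SciPy's BSpline (a textbook construction, not the paper's algorithm):

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    # quarter circle as a quadratic NURBS curve
    P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # control points
    w = np.array([1.0, np.sqrt(2) / 2, 1.0])            # weights
    t = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])        # clamped knot vector, degree 2

    # evaluate in homogeneous coordinates (w*x, w*y, w), then project
    c = np.column_stack([P * w[:, None], w])
    curve = BSpline(t, c, 2)

    u = np.linspace(0, 1, 50)
    pts = curve(u)
    xy = pts[:, :2] / pts[:, 2:3]

    # maximum deviation from the unit circle; essentially zero
    print(float(np.max(np.abs(np.linalg.norm(xy, axis=1) - 1.0))))
    ```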

  3. Monotone piecewise bicubic interpolation

    International Nuclear Information System (INIS)

    Carlson, R.E.; Fritsch, F.N.

    1985-01-01

    In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone C1 piecewise bicubic interpolation to data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and the first mixed partial (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated to a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm. 4 references, 6 figures, 2 tables
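
    The univariate monotone scheme referenced above is of the kind SciPy ships as PchipInterpolator. A quick sketch of the property on monotone data with a sharp bend (the data values are invented):

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator, CubicSpline

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.0, 0.1, 0.2, 3.0, 4.0])    # monotone data with a sharp bend

    xf = np.linspace(0, 4, 401)
    pchip = PchipInterpolator(x, y)(xf)        # monotone piecewise cubic interpolant
    spline = CubicSpline(x, y)(xf)             # ordinary C2 cubic spline, for contrast

    print(bool(np.all(np.diff(pchip) >= 0)))   # monotonicity is preserved
    # the C2 spline is free to overshoot near the bend, so the same check
    # on `spline` can fail
    ```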

  4. A study on neural network representation of reactor power control procedures

    International Nuclear Information System (INIS)

    Moon, Byung Soo; Park, J. C.; Kim, Y. T.; Yang, S. U.; Lee, H. C.; Hwang, I. A.; Hwang, H. S.

    1997-12-01

    A neural algorithm to carry out the curve readings and arithmetic computations necessary for reactor power control is described in this report. The curve readings are for functions of the form z=f(x,y) and require fairly good interpolations. One of the functions is the total power defect as a function of reactor power and boron concentration. The second is the new position of the control rod as a function of the current rod position and the increment of total power defect needed for the required power change. The curves involving the xenon effect are also considered separately. We represented these curves by cubic spline interpolations first and then converted them to fuzzy systems so that they perform the identical interpolations as the splines. The resulting fuzzy systems are then converted to artificial neural networks similar to the RBF type neural network. These networks still carry the O(h^4) accuracy of the cubic spline interpolating functions. Also included is a description of an important result on how to find the spline interpolation coefficients without solving the matrix equation, when the function is a polynomial of the form f(t)=t^m. This result provides a systematic way of representing continuous functions by fuzzy systems and hence by artificial neural networks without any training. (author). 10 refs., 2 tabs., 10 figs
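
    The O(h^4) accuracy mentioned above can be checked numerically: halving the mesh width of a cubic spline interpolant of a smooth function should reduce the maximum error by roughly a factor of 16 (a generic convergence check, not the report's fuzzy/neural construction; not-a-knot boundary conditions are used here):

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def max_err(n):
        """Max interpolation error of a not-a-knot cubic spline of sin on [0, pi]."""
        x = np.linspace(0, np.pi, n + 1)
        s = CubicSpline(x, np.sin(x))          # not-a-knot boundary conditions
        xf = np.linspace(0, np.pi, 2001)
        return np.max(np.abs(s(xf) - np.sin(xf)))

    e1, e2 = max_err(8), max_err(16)
    print(e1 / e2)    # roughly 16, consistent with O(h^4)
    ```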

  5. Interface information transfer between non-matching, nonconforming interfaces using radial basis function interpolation

    CSIR Research Space (South Africa)

    Bogaers, Alfred EJ

    2016-10-01

    Full Text Available words, g_B = [phi_BA  P_B] [M_AA  P_A; P_A^T  0]^(-1) [g_A; 0]. (15) Kernel definitions: C0 compactly supported piecewise polynomial: (1 - ||x||/r)_+^2; C2 compactly supported piecewise polynomial: (1 - ||x||/r)_+^4 (4 ||x||/r + 1); thin-plate spline (TPS... for a numerical comparison to Kriging and the moving least-squares method, see Krishnamurthy [16]). RBF interpolation is based on fitting a series of splines, or basis functions, to interpolate information from one point cloud to another. Let us assume we...

  6. Program package for determining the boundary of NPP stability region in the space of reactivity coefficients

    International Nuclear Information System (INIS)

    Znyshev, V.V.; Nikolaev, M.Ya.; Novikova, L.V.; Sinyavskij, V.V.

    1987-01-01

    The GOUKOR program package (FORTRAN, BESM-6 computer), which calculates the stability region boundary in the space of NPP reactivity coefficients, is developed. Transfer functions, which may be obtained experimentally or by calculation from the NPP mathematical model under low perturbations (when the effect of the model's nonlinearity becomes negligibly small), are necessary for the package operation. Transfer functions are assigned at several points and in the package are interpolated either piecewise or by cubic splines. The effect of errors in the transfer function representation on the transmission function calculation error is evaluated. It is shown that transfer function interpolation by cubic splines, as compared to piecewise interpolation, allows one to reduce the number of points assigning the transfer functions without reducing the transmission function calculation accuracy

  7. Technical note: Improving the AWAT filter with interpolation schemes for advanced processing of high resolution data

    Science.gov (United States)

    Peters, Andre; Nehls, Thomas; Wessolek, Gerd

    2016-06-01

    Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitations to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
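
    The three interpolation schemes can be imitated with SciPy's interp1d on invented cumulative-mass data (the AWAT filter itself and its heuristic selection criterion are not reproduced here):

    ```python
    import numpy as np
    from scipy.interpolate import interp1d

    # significant cumulative-mass changes retained by the filter (illustrative values)
    t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])    # minutes
    m = np.array([0.0, 0.0, 1.2, 3.0, 3.1])        # cumulative precipitation, mm

    tf = np.linspace(0, 40, 401)
    step = interp1d(t, m, kind='previous')(tf)     # step scheme: device resolution
    lin = interp1d(t, m, kind='linear')(tf)        # linear scheme
    spl = interp1d(t, m, kind='cubic')(tf)         # spline scheme

    # high-resolution flux rates follow by differentiating the interpolants;
    # the step scheme concentrates all mass change into jumps
    rate_lin = np.gradient(lin, tf)
    rate_spl = np.gradient(spl, tf)
    ```

    The spline variant is continuously differentiable, which is the property the authors exploit when arbitrary output resolutions are required.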

  8. The semi-Lagrangian method on curvilinear grids

    Directory of Open Access Journals (Sweden)

    Hamiaz Adnane

    2016-09-01

    Full Text Available We study the semi-Lagrangian method on curvilinear grids. The classical backward semi-Lagrangian method [1] preserves constant states but is not mass conservative. Natural reconstruction of the field nevertheless permits at least first-order-in-time conservation of mass, even if the spatial error is large. Interpolation is performed with classical cubic splines and also with cubic Hermite interpolation with arbitrary reconstruction order of the derivatives. High odd-order reconstruction of the derivatives is shown to be a good ersatz for cubic splines, which do not behave very well as the time step tends to zero. A conservative semi-Lagrangian scheme along the lines of [2] is then described; here conservation of mass is automatically satisfied and constant states are shown to be preserved up to first order in time.
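
    A single backward semi-Lagrangian step with cubic-spline interpolation, including the preservation-of-constant-states property mentioned above, might be sketched as follows (1-D periodic advection with illustrative parameters, not the paper's curvilinear setting):

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # one backward semi-Lagrangian step for u_t + a u_x = 0 on a periodic grid
    N, a, dt = 64, 1.0, 0.3
    x = np.linspace(0, 2 * np.pi, N + 1)           # grid including both endpoints
    u = np.exp(np.sin(x))                          # smooth periodic initial field
    u[-1] = u[0]                                   # enforce exact periodicity

    spline = CubicSpline(x, u, bc_type='periodic')
    feet = (x - a * dt) % (2 * np.pi)              # feet of the characteristics
    u_new = spline(feet)                           # interpolate at the feet

    # constant states are preserved (up to rounding) by the interpolation
    c = CubicSpline(x, np.full(N + 1, 5.0), bc_type='periodic')
    print(float(np.max(np.abs(c(feet) - 5.0))))
    ```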

  9. Thin-plate spline quadrature of geodetic integrals

    Science.gov (United States)

    Vangysen, Herman

    1989-01-01

    Thin-plate spline functions (known for their flexibility and fidelity in representing experimental data) are especially well-suited for the numerical integration of geodetic integrals in the area where the integration is most sensitive to the data, i.e., in the immediate vicinity of the evaluation point. Spline quadrature rules are derived for the contribution of a circular innermost zone to Stokes' formula, to the formulae of Vening Meinesz, and to the recursively evaluated operator L(n) in the analytical continuation solution of Molodensky's problem. These rules are exact for interpolating thin-plate splines. In cases where the integration data are distributed irregularly, a system of linear equations needs to be solved for the quadrature coefficients. Formulae are given for the terms appearing in these equations. In case the data are regularly distributed, the coefficients may be determined once and for all. Examples are given of some fixed-point rules. With such rules successive evaluation, within a circular disk, of the terms in Molodensky's series becomes relatively easy. The spline quadrature technique presented complements other techniques such as ring integration for intermediate integration zones.
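
    The idea of a quadrature rule that is exact for the interpolating spline can be illustrated in 1-D with SciPy's CubicSpline.integrate (the geodetic kernels and 2-D thin-plate rules of the paper are not reproduced):

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # quadrature by integrating the interpolating spline in closed form
    x = np.linspace(0, 1, 11)
    f = np.exp(x)                        # sampled integrand
    s = CubicSpline(x, f)

    approx = s.integrate(0, 1)           # exact integral of the spline itself
    exact = np.e - 1.0
    print(abs(approx - exact))           # small: only the interpolation error remains
    ```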

  10. Numerical solutions of magnetohydrodynamic stability of axisymmetric toroidal plasmas using cubic B-spline finite element method

    International Nuclear Information System (INIS)

    Cheng, C.Z.

    1988-12-01

    A nonvariational ideal MHD stability code (NOVA) has been developed. In a general flux coordinate (psi, theta, zeta) system with an arbitrary Jacobian, the NOVA code employs Fourier expansions in the generalized poloidal angle theta and generalized toroidal angle zeta directions, and cubic B-spline finite elements in the radial psi direction. Extensive comparisons with variational ideal MHD codes show that the NOVA code converges faster and gives more accurate results. An extended version of NOVA is developed to integrate non-Hermitian eigenmode equations due to energetic particles. The set of non-Hermitian integro-differential eigenmode equations is numerically solved by the NOVA-K code. We have studied the problems of the stabilization of ideal MHD internal kink modes by hot particle pressure and the excitation of ''fishbone'' internal kink modes by resonance with the energetic particle magnetic drift frequency. Comparisons with analytical solutions show that the values of the critical beta_h from the analytical theory can be an order of magnitude different from those computed by the NOVA-K code. 24 refs., 11 figs., 1 tab

  11. Occlusion-Aware View Interpolation

    Directory of Open Access Journals (Sweden)

    Ince Serdar

    2008-01-01

    Full Text Available View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.

  13. SPLINE-FUNCTIONS IN THE TASK OF THE FLOW AIRFOIL PROFILE

    Directory of Open Access Journals (Sweden)

    Mikhail Lopatjuk

    2013-12-01

    Full Text Available The method and algorithm for solving the problem of flow around an airfoil profile are presented. The Neumann boundary problem is reduced to the solution of integral equations with given boundary conditions using cubic spline-functions

  14. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    Science.gov (United States)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  15. Enhanced spatio-temporal alignment of plantar pressure image sequences using B-splines.

    Science.gov (United States)

    Oliveira, Francisco P M; Tavares, João Manuel R S

    2013-03-01

    This article presents an enhanced methodology to align plantar pressure image sequences simultaneously in time and space. The temporal alignment of the sequences is accomplished using B-splines in the time modeling, and the spatial alignment can be attained using several geometric transformation models. The methodology was tested on a dataset of 156 real plantar pressure image sequences (3 sequences for each foot of the 26 subjects) that was acquired using a common commercial plate during barefoot walking. In the alignment of image sequences that were synthetically deformed both in time and space, an outstanding accuracy was achieved with the cubic B-splines, significantly better than that of the other transformation models tested. When aligning real image sequences with unknown transformations involved, the alignment based on cubic B-splines also achieved superior results compared with our previous methodology. The effect of the temporal alignment on the dynamic center of pressure (COP) displacement was also assessed by computing the intraclass correlation coefficients (ICC) before and after the temporal alignment of the three image sequence trials of each foot of the associated subject at six time instants. The results showed that, generally, the ICCs related to the medio-lateral COP displacement were greater when the sequences were temporally aligned than the ICCs of the original sequences. Based on the experimental findings, one can conclude that cubic B-splines are a remarkable solution for the temporal alignment of plantar pressure image sequences. These findings also show that the temporal alignment can increase the consistency of the COP displacement in related acquired plantar pressure image sequences.

  16. MHD stability analysis using higher order spline functions

    Energy Technology Data Exchange (ETDEWEB)

    Ida, Akihiro [Department of Energy Engineering and Science, Graduate School of Engineering, Nagoya University, Nagoya, Aichi (Japan); Todoroki, Jiro; Sanuki, Heiji

    1999-04-01

    The eigenvalue problem of the linearized magnetohydrodynamic (MHD) equation is formulated by using higher order spline functions as the base functions of a Ritz-Galerkin approximation. When the displacement vector normal to the magnetic surface (in the magnetic surface) is interpolated by B-spline functions of degree p1 (degree p2), which are continuously c1-times (c2-times) differentiable on neighboring finite elements, the sufficient conditions for a good approximation are given by p1 >= p2+1, c1 <= c2+1 (c1 >= 1, p2 >= c2 >= 0). The influence of the numerical integration upon the convergence of calculated eigenvalues is discussed. (author)

  17. Sequential bayes estimation algorithm with cubic splines on uniform meshes

    International Nuclear Information System (INIS)

    Hossfeld, F.; Mika, K.; Plesser-Walk, E.

    1975-11-01

    After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented, combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and the flexibility of this sequential estimation procedure are extensively demonstrated. (orig.) [de]

  18. Trajectory control of an articulated robot with a parallel drive arm based on splines under tension

    Science.gov (United States)

    Yi, Seung-Jong

    Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration to place the end-effector at the desired position and orientation within the workspace and on following the specified path which requires the trajectory planner. In addition, the consideration of joint velocity, acceleration, and jerk trajectories are essential for trajectory planning of industrial robots to obtain smooth operation. The newly designed 6 DOF articulated robot with a parallel drive arm mechanism which permits the joint actuators to be placed in the same horizontal line to reduce the arm inertia and to increase load capacity and stiffness is selected. First, the forward kinematic and inverse kinematic problems are examined. The forward kinematic equations are successfully derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve fitting methods used in trajectory planning, i.e., certain degree polynomial functions, cubic spline functions, and cubic spline functions under tension, are compared to select the best possible method to satisfy both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension is the method selected for the new trajectory planner. This method is implemented for a 6 DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. Also, this approach is compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular arc motion simulations. The new trajectory planner using cubic spline functions under tension is implemented into the microprocessor based robot controller and

  19. Geometric modelling of channel present in reservoir petroleum using Bezier splines; Modelagem da geometria de paleocanais presentes em reservatorios petroliferos usando splines de Bezier

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Carlos Eduardo S. [Universidade Federal de Campina Grande, PB (Brazil). Programa de Recursos Humanos 25 da ANP]. E-mail: carlos@dme.ufcg.edu.br; Silva, Rosana M. da [Universidade Federal de Campina Grande, PB (Brazil). Dept. de Matematica e Estatistica]. E-mail: rosana@dme.ufcg.edu.br

    2004-07-01

    This work presents an implementation of a synthetic model of a channel found in oil reservoirs. Generating such models is one of the steps in the characterization and simulation of equiprobable three-dimensional geological scenarios. The implemented model was obtained by fitting geometric-modeling techniques for curves and surfaces to the geological parameters (width, thickness, sinuosity and preferential direction) that define the form to be modeled. The sinuosity parameter is related to the wavelength and the local amplitude of the channel; the preferential direction indicates the direction of flow and the declivity of the channel. The modeling technique used to represent the surface of the channel is the sweeping technique, which consists in translating a curve along a guide curve. In our implementation, the guide curve was generated by interpolating points obtained from sampled or simulated values of the sinuosity parameter, using the cubic Bezier spline technique. A semi-ellipse, determined by the width and thickness parameters and representing a transversal section of the channel, is then swept along the guide curve, generating the channel surface. (author)
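    The guide-curve-plus-sweep construction can be sketched in a few lines: evaluate a cubic Bezier segment as the guide curve, build a semi-elliptical cross-section, and translate the section along the guide. The control points and channel dimensions below are made up for illustration.

    ```python
    import numpy as np

    def cubic_bezier(p0, p1, p2, p3, t):
        """Bernstein-form evaluation of one cubic Bezier segment."""
        t = np.asarray(t)[:, None]
        return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

    # Hypothetical control points sketching one sinuous channel segment (x, y).
    p0, p1, p2, p3 = map(np.array, ([0, 0], [1, 2], [2, -2], [3, 0]))
    guide = cubic_bezier(p0, p1, p2, p3, np.linspace(0, 1, 50))

    # Semi-elliptical cross-section (width w, thickness h), opening downwards.
    w, h = 1.0, 0.25
    u = np.linspace(0, np.pi, 20)
    section = np.stack([0.5 * w * np.cos(u), -0.5 * h * np.sin(u)], axis=1)

    # Naive translational sweep of the section along the guide curve.
    surface = guide[:, None, :] + section[None, :, :]

    # The Bezier curve interpolates its end control points.
    assert np.allclose(guide[0], p0) and np.allclose(guide[-1], p3)
    assert surface.shape == (50, 20, 2)
    ```

    A production implementation would orient the section along the local tangent frame of the guide curve rather than translating it rigidly; the translation above only illustrates the sweep idea.
    
    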

  20. Valid and efficient manual estimates of intracranial volume from magnetic resonance images

    International Nuclear Information System (INIS)

    Klasson, Niklas; Olsson, Erik; Rudemo, Mats; Eckerström, Carl; Malmgren, Helge; Wallin, Anders

    2015-01-01

    Manual segmentations of the whole intracranial vault in high-resolution magnetic resonance images are often regarded as very time-consuming. Therefore it is common to only segment a few linearly spaced intracranial areas to estimate the whole volume. The purpose of the present study was to evaluate how the validity of intracranial volume estimates is affected by the chosen interpolation method, orientation of the intracranial areas and the linear spacing between them. Intracranial volumes were manually segmented on 62 participants from the Gothenburg MCI study using 1.5 T, T1-weighted magnetic resonance images. Estimates of the intracranial volumes were then derived using subsamples of linearly spaced coronal, sagittal or transversal intracranial areas from the same volumes. The subsamples of intracranial areas were interpolated into volume estimates by three different interpolation methods. The linear spacing between the intracranial areas ranged from 2 to 50 mm and the validity of the estimates was determined by comparison with the entire intracranial volumes. A progressive decrease in intra-class correlation and an increase in percentage error could be seen with increased linear spacing between intracranial areas. With small linear spacing (≤15 mm), orientation of the intracranial areas and interpolation method had negligible effects on the validity. With larger linear spacing, the best validity was achieved using cubic spline interpolation with either coronal or sagittal intracranial areas. Even at a linear spacing of 50 mm, cubic spline interpolation on either coronal or sagittal intracranial areas had a mean absolute agreement intra-class correlation with the entire intracranial volumes above 0.97. Cubic spline interpolation in combination with linearly spaced sagittal or coronal intracranial areas overall resulted in the most valid and robust estimates of intracranial volume. Using this method, valid ICV estimates could be obtained in less than five
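    The estimation idea, sampling cross-sectional areas at a coarse linear spacing, fitting a cubic spline, and integrating it into a volume, can be illustrated on a synthetic example. The ellipsoidal "head" below is a stand-in for real segmentations, not data from the study.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Synthetic "intracranial area" profile along one axis: an ellipsoid
    # with semi-axes a, b, c (mm) has cross-sectional area
    # A(z) = pi*a*b*(1 - (z/c)^2) and volume (4/3)*pi*a*b*c.
    a, b, c = 90.0, 70.0, 60.0
    true_volume = 4.0 / 3.0 * np.pi * a * b * c

    def area(z):
        return np.pi * a * b * np.clip(1 - (z / c) ** 2, 0, None)

    # Sample areas at a coarse 15 mm spacing, as in the subsampling study.
    z = np.arange(-60.0, 60.0 + 1e-9, 15.0)
    spline = CubicSpline(z, area(z))
    estimate = spline.integrate(z[0], z[-1])

    # Even this coarse sampling recovers the volume to well under 1% here.
    assert abs(estimate - true_volume) / true_volume < 0.01
    ```

    Real area profiles are not smooth quadratics, so errors grow with spacing as reported above, but the spline-then-integrate pipeline is the same.
    
    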

  1. Numerical solutions of magnetohydrodynamic stability of axisymmetric toroidal plasmas using cubic B-spline finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, C.Z.

    1988-12-01

    A nonvariational ideal MHD stability code (NOVA) has been developed. In a general flux coordinate (ψ, θ, ζ) system with an arbitrary Jacobian, the NOVA code employs Fourier expansions in the generalized poloidal angle θ and generalized toroidal angle ζ directions, and cubic B-spline finite elements in the radial ψ direction. Extensive comparisons with variational ideal MHD codes show that the NOVA code converges faster and gives more accurate results. An extended version of NOVA is developed to integrate non-Hermitian eigenmode equations due to energetic particles. The set of non-Hermitian integro-differential eigenmode equations is numerically solved by the NOVA-K code. We have studied the problems of the stabilization of ideal MHD internal kink modes by hot particle pressure and the excitation of "fishbone" internal kink modes by resonance with the energetic particle magnetic drift frequency. Comparisons with analytical solutions show that the values of the critical β_h from the analytical theory can be an order of magnitude different from those computed by the NOVA-K code. 24 refs., 11 figs., 1 tab.

  2. Data approximation using a blending type spline construction

    International Nuclear Information System (INIS)

    Dalmo, Rune; Bratlie, Jostein

    2014-01-01

    Generalized expo-rational B-splines (GERBS) is a blending-type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is by partitioning the data set into subsets and fitting a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences

  3. 4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties

    Science.gov (United States)

    Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.

    2018-05-01

    4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity-curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D-reconstruction is compared to that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and to 4D-reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment (‘2C3K’) model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-Fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvements for five of the eight combinations of the four kinetics parameters for which parametric maps were created with the bias and noise measures used to analyse them, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral model-based 4D reconstruction produced the best results for 2/8. 2C3K model-based 4D reconstruction generated
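    A toy sketch of the spline-residue ingredients (not the authors' implementation): convolving a made-up arterial input function with one cubic B-spline basis element, the kind of term that would be summed with fitted weights to describe a TAC. The input function and knot placement are hypothetical.

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    dt = 0.1
    t = np.arange(0, 30, dt)
    aif = t * np.exp(-t / 2.0)        # toy arterial input function

    # One cubic B-spline basis element on hypothetical knots (5 knots -> degree 3).
    knots = [0, 2, 5, 9, 14]
    basis = BSpline.basis_element(knots, extrapolate=False)(t)
    basis = np.nan_to_num(basis)      # zero outside the basis support

    # Causal (one-sided) discrete convolution, truncated to the time grid.
    conv = np.convolve(aif, basis)[: len(t)] * dt

    # The convolved response starts at zero and is non-negative.
    assert conv[0] == 0.0
    assert np.all(conv >= -1e-12)
    ```

    In the full model, several such convolved basis functions (plus the input function itself) are weighted and summed per voxel; convolution with the input function is what pins the model down at early time-points.
    
    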

  4. Numerical solution of system of boundary value problems using B-spline with free parameter

    Science.gov (United States)

    Gupta, Yogesh

    2017-01-01

    This paper deals with a B-spline method for the solution of a system of boundary value problems. Differential equations are useful in various fields of science and engineering, and some interesting real-life problems involve more than one unknown function, resulting in a system of simultaneous differential equations. Such systems have been applied to many problems in mathematics, physics, engineering, etc. In the present paper, B-spline and B-spline-with-free-parameter methods for the solution of a linear system of second-order boundary value problems are presented. The methods utilize the values of the cubic B-spline and its derivatives at nodal points, together with the equations of the given system and the boundary conditions, resulting in a linear matrix equation.

  5. Self-organizing map analysis using multivariate data from theophylline powders predicted by a thin-plate spline interpolation.

    Science.gov (United States)

    Yasuda, Akihito; Onuki, Yoshinori; Kikuchi, Shingo; Takayama, Kozo

    2010-11-01

    The quality by design concept in pharmaceutical formulation development requires establishment of a science-based rationale and a design space. We integrated thin-plate spline (TPS) interpolation and Kohonen's self-organizing map (SOM) to visualize the latent structure underlying causal factors and pharmaceutical responses. As a model pharmaceutical product, theophylline powders were prepared based on the standard formulation. The angle of repose, compressibility, cohesion, and dispersibility were measured as the response variables. These responses were predicted quantitatively on the basis of a nonlinear TPS. A large amount of data on these powders was generated and classified into several clusters using an SOM. The experimental values of the responses were predicted with high accuracy, and the data generated for the powders could be classified into several distinctive clusters. The SOM feature map allowed us to analyze the global and local correlations between causal factors and powder characteristics. For instance, the quantities of microcrystalline cellulose (MCC) and magnesium stearate (Mg-St) were classified distinctly into each cluster, indicating that the quantities of MCC and Mg-St were crucial for determining the powder characteristics. This technique provides a better understanding of the relationships between causal factors and pharmaceutical responses in theophylline powder formulations. © 2010 Wiley-Liss, Inc. and the American Pharmacists Association

  6. Stabilized Discretization in Spline Element Method for Solution of Two-Dimensional Navier-Stokes Problems

    Directory of Open Access Journals (Sweden)

    Neng Wan

    2014-01-01

    Full Text Available To address the poor geometric adaptability of the spline element method, a geometrically exact spline method, which uses rational Bezier patches to represent the solution domain, is proposed for the two-dimensional viscous incompressible Navier-Stokes equations. Besides fewer unknowns, higher accuracy, and better computational efficiency, it possesses such advantages as the accurate isogeometric representation of the object boundary and the unification of geometry and analysis modeling. Meanwhile, the selection of B-spline basis functions and the grid definition are studied, and a stable discretization format satisfying the inf-sup conditions is proposed. The degree of the spline functions approximating the velocity field is one order higher than that approximating the pressure field, and these functions are defined on a once-refined grid. The Dirichlet boundary conditions are imposed in weak form through the Nitsche variational principle, owing to the lack of interpolation properties of the B-spline functions. Finally, the validity of the proposed method is verified with some examples.

  7. Tomography for two-dimensional gas temperature distribution based on TDLAS

    Science.gov (United States)

    Luo, Can; Wang, Yunchu; Xing, Fei

    2018-03-01

    Based on tunable diode laser absorption spectroscopy (TDLAS), tomography is used to reconstruct the combustion gas temperature distribution. The effects of the number of rays, the number of grids, and the spacing of rays on the temperature reconstruction results for parallel rays are investigated. The reconstruction quality improves with the number of rays, and the improvement becomes gradual once the ray number exceeds a certain value. The best quality is achieved when η is between 0.5 and 1. A virtual ray method combined with the reconstruction algorithms is tested. It is found that the virtual ray method is effective in improving the accuracy of the reconstruction results compared with the original method. The linear interpolation method and the cubic spline interpolation method are used to improve the calculation accuracy of the virtual ray absorption values. According to the calculation results, cubic spline interpolation is better. Moreover, the temperature distribution of a TBCC combustion chamber is used to validate those conclusions.
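    The linear-versus-cubic-spline comparison for virtual ray values can be reproduced on a toy example: interpolate a smooth absorption-like profile from a coarse ray grid and measure the error against the true profile. The Gaussian-like profile below is illustrative only, not TDLAS data.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    x_coarse = np.linspace(0, 1, 8)     # coarse "measured ray" positions
    x_fine = np.linspace(0, 1, 200)     # dense "virtual ray" positions

    def profile(x):                      # smooth absorption-like peak
        return np.exp(-((x - 0.5) / 0.2) ** 2)

    linear = np.interp(x_fine, x_coarse, profile(x_coarse))
    spline = CubicSpline(x_coarse, profile(x_coarse))(x_fine)

    err_linear = np.max(np.abs(linear - profile(x_fine)))
    err_spline = np.max(np.abs(spline - profile(x_fine)))

    # Cubic spline reproduces the smooth profile more accurately,
    # matching the conclusion reported above.
    assert err_spline < err_linear
    ```
    
    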

  8. An Optimized Spline-Based Registration of a 3D CT to a Set of C-Arm Images

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available We have developed an algorithm for the rigid-body registration of a CT volume to a set of C-arm images. The algorithm uses a gradient-based iterative minimization of a least-squares measure of dissimilarity between the C-arm images and projections of the CT volume. To compute projections, we use a novel method for fast integration of the volume along rays. To improve robustness and speed, we take advantage of a coarse-to-fine processing of the volume/image pyramids. To compute the projections of the volume, the gradient of the dissimilarity measure, and the multiresolution data pyramids, we use a continuous image/volume model based on cubic B-splines, which ensures a high interpolation accuracy and a gradient of the dissimilarity measure that is well defined everywhere. We show the performance of our algorithm on a human spine phantom, where the true alignment is determined using a set of fiducial markers.

  9. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    Science.gov (United States)

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

    Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to evaluate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent of the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
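    The three interpolator schemes map directly onto the spline orders of `scipy.ndimage` (order 0 nearest neighbour, order 1 linear, order 3 cubic B-spline). The sketch below measures their error when sampling a smooth synthetic image at off-grid positions; it illustrates the ranking, not the micro-CT experiment itself.

    ```python
    import numpy as np
    from scipy import ndimage

    def f(y, x):                         # smooth synthetic "image"
        return np.sin(0.15 * x) * np.cos(0.1 * y)

    y, x = np.mgrid[0:64, 0:64].astype(float)
    image = f(y, x)

    # Off-grid query positions, as produced by a registration transform.
    yq = np.full(50, 20.37)
    xq = np.linspace(5.2, 58.8, 50)

    def interp_error(order):
        values = ndimage.map_coordinates(image, [yq, xq], order=order)
        return np.max(np.abs(values - f(yq, xq)))

    errors = {order: interp_error(order) for order in (0, 1, 3)}

    # Cubic B-spline is the most accurate, nearest neighbour the least.
    assert errors[3] < errors[1] < errors[0]
    ```
    
    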

  10. Estimating monthly temperature using point based interpolation techniques

    Science.gov (United States)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point-based interpolation to estimate the value of temperature at unallocated meteorology stations in Peninsular Malaysia, using data of year 2010 collected from the Malaysian Meteorology Department. Two point-based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable to be used as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable to estimate the temperature for the rest of the months.
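    A minimal IDW sketch with made-up station coordinates and temperatures (the paper's data are not shown; an RBF variant would be fit similarly, e.g. with SciPy's `RBFInterpolator`):

    ```python
    import numpy as np

    # Hypothetical station coordinates (x, y) and temperatures (deg C).
    stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    temps = np.array([27.0, 28.5, 26.0, 27.5])

    def idw(point, xy, values, power=2.0, eps=1e-12):
        """Inverse-distance-weighted estimate at `point`."""
        d = np.linalg.norm(xy - point, axis=1)
        if d.min() < eps:                 # query point coincides with a station
            return values[d.argmin()]
        w = 1.0 / d ** power
        return float(np.sum(w * values) / np.sum(w))

    estimate = idw(np.array([0.5, 0.5]), stations, temps)

    # IDW reproduces station values exactly and stays within the data range.
    assert idw(stations[0], stations, temps) == temps[0]
    assert temps.min() <= estimate <= temps.max()
    ```

    The weights-within-the-data-range property is why IDW cannot extrapolate peaks or troughs between stations, one reason RBF methods did better in this comparison.
    
    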

  11. Improved Leg Tracking Considering Gait Phase and Spline-Based Interpolation during Turning Motion in Walk Tests

    Directory of Open Access Journals (Sweden)

    Ayanori Yorozu

    2015-09-01

    Full Text Available Falling is a common problem in the growing elderly population, and fall-risk assessment systems are needed for community-based fall prevention programs. In particular, the timed up and go test (TUG) is the clinical test most often used to evaluate the ambulatory ability of elderly individuals in many clinical institutions or local communities. This study presents an improved leg tracking method using a laser range sensor (LRS) for a gait measurement system to evaluate the motor function in walk tests, such as the TUG. The system tracks both legs and measures the trajectory of both legs. However, both legs might be close to each other, and one leg might be hidden from the sensor. This is especially the case during the turning motion in the TUG, where the time that a leg is hidden from the LRS is longer than that during straight walking and the moving direction changes rapidly. These situations are likely to lead to false tracking and deteriorate the measurement accuracy of the leg positions. To solve these problems, a novel data association considering gait phase and a Catmull–Rom spline-based interpolation during the occlusion are proposed. From the experimental results with young people, we confirm that the proposed methods can reduce the chances of false tracking. In addition, we verify the measurement accuracy of the leg trajectory compared to a three-dimensional motion analysis system (VICON).
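    The Catmull–Rom interpolation used to bridge occlusions can be sketched as follows; the four leg positions standing in for consecutive LRS measurements are hypothetical.

    ```python
    import numpy as np

    def catmull_rom(p0, p1, p2, p3, t):
        """Uniform Catmull-Rom segment between p1 and p2 (0 <= t <= 1)."""
        t = np.asarray(t)[:, None]
        return 0.5 * ((2 * p1)
                      + (-p0 + p2) * t
                      + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                      + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

    # Hypothetical leg positions from four consecutive frames (x, y in m);
    # the occluded gap between frames 2 and 3 is filled by the spline.
    p0, p1, p2, p3 = map(np.array, ([0.0, 0.0], [0.2, 0.1],
                                    [0.4, 0.3], [0.5, 0.6]))
    gap = catmull_rom(p0, p1, p2, p3, np.linspace(0, 1, 5))

    # The segment passes through the two bracketing measurements.
    assert np.allclose(gap[0], p1) and np.allclose(gap[-1], p2)
    ```

    Catmull–Rom is attractive here because it interpolates the measured points (unlike an approximating B-spline) and needs only the two samples on either side of the occlusion.
    
    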

  12. Self-organizing map analysis using multivariate data from theophylline tablets predicted by a thin-plate spline interpolation.

    Science.gov (United States)

    Yasuda, Akihito; Onuki, Yoshinori; Obata, Yasuko; Yamamoto, Rie; Takayama, Kozo

    2013-01-01

    The "quality by design" concept in pharmaceutical formulation development requires the establishment of a science-based rationale and a design space. We integrated thin-plate spline (TPS) interpolation and Kohonen's self-organizing map (SOM) to visualize the latent structure underlying causal factors and pharmaceutical responses. As a model pharmaceutical product, theophylline tablets were prepared based on a standard formulation. The tensile strength, disintegration time, and stability of these variables were measured as response variables. These responses were predicted quantitatively based on nonlinear TPS. A large amount of data on these tablets was generated and classified into several clusters using an SOM. The experimental values of the responses were predicted with high accuracy, and the data generated for the tablets were classified into several distinct clusters. The SOM feature map allowed us to analyze the global and local correlations between causal factors and tablet characteristics. The results of this study suggest that increasing the proportion of microcrystalline cellulose (MCC) improved the tensile strength and the stability of tensile strength of these theophylline tablets. In addition, the proportion of MCC has an optimum value for disintegration time and stability of disintegration. Increasing the proportion of magnesium stearate extended disintegration time. Increasing the compression force improved tensile strength, but degraded the stability of disintegration. This technique provides a better understanding of the relationships between causal factors and pharmaceutical responses in theophylline tablet formulations.

  13. Basis calculation of phase cross section library in a low power fast reactor neutronic simulation

    International Nuclear Information System (INIS)

    Jachic, J.

    1993-09-01

    In order to exploit efficient multidimensional cubic spline interpolation, we determine the phase-library bases for a net of relevant state components. A generic cubic surface and a weighted plane are used as alternative interpolating methods capable of generating cross-section values at fixed coordinates from cell-code calculated data points. It is verified that the phase-library bases increase or decrease smoothly and monotonically with the spectrum asymmetry and the total flux buckling. This justifies their use in cross-section updating, avoiding cell calculations. (author)

  14. An adaptive interpolation scheme for molecular potential energy surfaces

    Science.gov (United States)

    Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa

    2016-08-01

    The calculation of potential energy surfaces for quantum dynamics can be a time consuming task—especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement makes it possible to greatly reduce the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.

  15. Interpolation-Based Condensation Model Reduction Part 1: Frequency Window Reduction Method Application to Structural Acoustics

    National Research Council Canada - National Science Library

    Ingel, R

    1999-01-01

    ... (which require derivative information) interpolation functions as well as standard Lagrangian functions, which can be linear, quadratic or cubic, have been used to construct the interpolation windows...

  16. TPS-HAMMER: improving HAMMER registration algorithm by soft correspondence matching and thin-plate splines based deformation interpolation.

    Science.gov (United States)

    Wu, Guorong; Yap, Pew-Thian; Kim, Minjeong; Shen, Dinggang

    2010-02-01

    We present an improved MR brain image registration algorithm, called TPS-HAMMER, which is based on the concepts of attribute vectors and hierarchical landmark selection scheme proposed in the highly successful HAMMER registration algorithm. We demonstrate that TPS-HAMMER algorithm yields better registration accuracy, robustness, and speed over HAMMER owing to (1) the employment of soft correspondence matching and (2) the utilization of thin-plate splines (TPS) for sparse-to-dense deformation field generation. These two aspects can be integrated into a unified framework to refine the registration iteratively by alternating between soft correspondence matching and dense deformation field estimation. Compared with HAMMER, TPS-HAMMER affords several advantages: (1) unlike the Gaussian propagation mechanism employed in HAMMER, which can be slow and often leaves unreached blotches in the deformation field, the deformation interpolation in the non-landmark points can be obtained immediately with TPS in our algorithm; (2) the smoothness of deformation field is preserved due to the nice properties of TPS; (3) possible misalignments can be alleviated by allowing the matching of the landmarks with a number of possible candidate points and enforcing more exact matches in the final stages of the registration. Extensive experiments have been conducted, using the original HAMMER as a comparison baseline, to validate the merits of TPS-HAMMER. The results show that TPS-HAMMER yields significant improvement in both accuracy and speed, indicating high applicability for the clinical scenario. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  17. USING SPLINE FUNCTIONS FOR THE SUBSTANTIATION OF TAX POLICIES BY LOCAL AUTHORITIES

    Directory of Open Access Journals (Sweden)

    Otgon Cristian

    2011-07-01

    -order, Hermite spline and cubic splines of class C2.

  18. Interpolation of vector fields from human cardiac DT-MRI

    International Nuclear Information System (INIS)

    Yang, F; Zhu, Y M; Rapacchi, S; Robini, M; Croisille, P; Luo, J H

    2011-01-01

    There has recently been increased interest in developing tensor data processing methods for the new medical imaging modality referred to as diffusion tensor magnetic resonance imaging (DT-MRI). This paper proposes a method for interpolating the primary vector fields from human cardiac DT-MRI, with the particularity of achieving interpolation and denoising simultaneously. The method consists of localizing the noise-corrupted vectors using the local statistical properties of vector fields, removing the noise-corrupted vectors and reconstructing them by using the thin plate spline (TPS) model, and finally applying global TPS interpolation to increase the resolution in the spatial domain. Experiments on 17 human hearts show that the proposed method allows us to obtain higher resolution while reducing noise, preserving details and improving direction coherence (DC) of vector fields as well as fiber tracking. Moreover, the proposed method perfectly reconstructs azimuth and elevation angle maps.
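    The thin plate spline (TPS) interpolation step can be sketched with SciPy's `RBFInterpolator` using the `'thin_plate_spline'` kernel on a synthetic 2-D vector field; this is an illustration of the interpolation model, not the authors' DT-MRI pipeline.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Synthetic sparse vector field: 40 sample locations with 2-D vectors.
    rng = np.random.default_rng(1)
    points = rng.uniform(0, 1, size=(40, 2))
    vectors = np.stack([np.cos(points[:, 0]), np.sin(points[:, 1])], axis=1)

    # Exact TPS interpolation (smoothing=0 by default).
    tps = RBFInterpolator(points, vectors, kernel='thin_plate_spline')

    # TPS reproduces the known vectors and predicts at new locations,
    # which is how resolution is increased in the spatial domain.
    assert np.allclose(tps(points), vectors, atol=1e-6)
    dense = tps(rng.uniform(0.2, 0.8, size=(100, 2)))
    assert dense.shape == (100, 2)
    ```

    Setting a positive `smoothing` parameter turns the same model into a denoising approximant, which mirrors the paper's combined interpolation-and-denoising use of TPS.
    
    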

  19. DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA

    Science.gov (United States)

    Ledbetter, F. E.

    1994-01-01

    Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear visoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data), to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data is increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and Apple Macintosh computers running MacOS. With
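    The DATASPACE procedure (log-transform the abscissa, cubic-spline interpolate onto an evenly spaced log grid, transform back) can be sketched as follows, with a toy relaxation modulus standing in for recorded E(t) data.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Variably spaced time samples of a toy relaxation modulus E(t).
    t = np.sort(np.concatenate([np.linspace(0.1, 1, 7),
                                np.linspace(1.5, 100, 9)]))
    E = 10.0 + 5.0 * np.exp(-t / 10.0)

    # Interpolate in the log-abscissa domain on an evenly spaced grid.
    log_t = np.log10(t)
    spline = CubicSpline(log_t, E)
    log_grid = np.linspace(log_t[0], log_t[-1], 20)
    E_grid = spline(log_grid)

    # Back in the time domain the grid is increasingly spaced:
    # close at short times, wide at long times.
    t_grid = 10.0 ** log_grid
    dt = np.diff(t_grid)
    assert np.all(np.diff(dt) > 0)        # spacing grows monotonically
    assert np.allclose(spline(log_t), E)  # original samples are preserved
    ```
    
    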

  20. CUDA Accelerated Multi-domain Volumetric Image Segmentation Using a Higher Order Level Set Method

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Anton, François; Zhang, Qin

    2009-01-01

    -manding in terms of computation and memory space, we employ a CUDA based fast GPU segmentation and provide accuracy measures compared with an equivalent CPU implementation. Our resulting surfaces are C2-smooth resulting from tri-cubic spline interpolation algorithm. We also provide error bounds...

  1. Systematic investigation of gridding-related scaling effects on annual statistics of daily temperature and precipitation maxima: A case study for south-east Australia

    OpenAIRE

    Francia B. Avila; Siyan Dong; Kaah P. Menang; Jan Rajczak; Madeleine Renom; Markus G. Donat; Lisa V. Alexander

    2015-01-01

    Using daily station observations over the period 1951–2013 in a region of south-east Australia, we systematically compare how the horizontal resolution, interpolation method and order of operation in generating gridded data sets affect estimates of annual extreme indices of temperature and precipitation maxima (hottest and wettest days). Three interpolation methods (natural neighbors, cubic spline and angular distance weighting) are used to calculate grids at five different horizontal gridded...

  2. Evaluation of optimization methods for nonrigid medical image registration using mutual information and B-splines

    NARCIS (Netherlands)

    Klein, S.; Staring, M.; Pluim, J.P.W.

    2007-01-01

    A popular technique for nonrigid registration of medical images is based on the maximization of their mutual information, in combination with a deformation field parameterized by cubic B-splines. The coordinate mapping that relates the two images is found using an iterative optimization procedure.

  3. A temporal interpolation approach for dynamic reconstruction in perfusion CT

    International Nuclear Information System (INIS)

    Montes, Pau; Lauritsch, Guenter

    2007-01-01

    This article presents a dynamic CT reconstruction algorithm for objects with time-dependent attenuation coefficient. Projection data acquired over several rotations are interpreted as samples of a continuous signal. Based on this idea, a temporal interpolation approach is proposed which provides the maximum temporal resolution for a given rotational speed of the CT scanner. Interpolation is performed using polynomial splines. The algorithm can be adapted to slow signals, reducing the amount of data acquired and the computational cost. A theoretical analysis of the approximations made by the algorithm is provided. In simulation studies, the temporal interpolation approach is compared with three other dynamic reconstruction algorithms based on linear regression, linear interpolation, and generalized Parker weighting. The presented algorithm exhibits the highest temporal resolution for a given sampling interval. Hence, our approach needs less input data to achieve a certain quality in the reconstruction than the other algorithms discussed or, equivalently, less x-ray exposure and computational complexity. The proposed algorithm additionally allows the possibility of using slowly rotating scanners for perfusion imaging purposes

  4. Hybrid B-Spline Collocation Method for Solving the Generalized Burgers-Fisher and Burgers-Huxley Equations

    Directory of Open Access Journals (Sweden)

    Imtiaz Wasim

    2018-01-01

    Full Text Available In this study, we introduce a new numerical technique for solving the nonlinear generalized Burgers-Fisher and Burgers-Huxley equations using a hybrid B-spline collocation method. This technique is based on the usual finite difference scheme and the Crank-Nicolson method, which are used to discretize the time derivative and the spatial derivatives, respectively. Furthermore, the hybrid B-spline function is utilized as the interpolating function in the spatial dimension. The scheme is shown to be unconditionally stable using the von Neumann (Fourier) method. Several test problems are considered to check the accuracy of the proposed scheme. The numerical results are in good agreement with known exact solutions and with existing schemes in the literature.

  5. Selection of the optimal interpolation method for groundwater observations in lahore, pakistan

    International Nuclear Information System (INIS)

    Mahmood, K.; Ali, S.R.; Haider, A.; Tehseen, T.; Kanwal, S.

    2014-01-01

    This study was carried out to find an optimum method of interpolation for the depth values of groundwater in the Lahore metropolitan area, Pakistan. The methods of interpolation considered in the study were inverse distance weighting (IDW), spline, simple Kriging, ordinary Kriging and universal Kriging. Initial analysis suggested that the data were negatively skewed, with a skewness of -1.028. Normality was approximated by transforming the data using a Box-Cox transformation with a lambda value of 3.892, which reduced the skewness to -0.00079. The results indicate that simple Kriging is the optimum method of interpolation for the groundwater observations in the dataset used, with the lowest bias (0.00997), the highest correlation coefficient (0.9434), a mean absolute error of 1.95 m and a root mean square error of 3.19 m. The smooth and uniform contours, with a well-described central depression zone in the city, suggested by this study also support the optimized interpolation method. (author)
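
    For readers unfamiliar with the simplest of the compared methods, a minimal inverse distance weighting interpolator takes only a few lines of NumPy. The well locations and depths below are synthetic, not the Lahore data:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at each query point."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # closer observations weigh more
    return (w @ z_known) / w.sum(axis=1)

rng = np.random.default_rng(0)
wells = rng.uniform(0.0, 10.0, size=(30, 2))               # hypothetical well locations, km
depth = 20.0 + 0.5 * wells[:, 0] + rng.normal(0, 0.2, 30)  # synthetic depth to water, m

queries = np.array([[5.0, 5.0], [1.0, 9.0]])
est = idw(wells, depth, queries)
```

    Because the weights are positive and normalized, each IDW estimate is a convex combination of the observed depths, so it can never over- or undershoot the data range, unlike kriging or spline interpolators.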

  6. B-Spline Active Contour with Handling of Topology Changes for Fast Video Segmentation

    Directory of Open Access Journals (Sweden)

    Frederic Precioso

    2002-06-01

    Full Text Available This paper deals with video segmentation for MPEG-4 and MPEG-7 applications. Region-based active contours are a powerful technique for segmentation. However, most of these methods are implemented using level sets; although level-set methods provide accurate segmentation, they suffer from a large computational cost. We propose to use a regular B-spline parametric method to provide fast and accurate segmentation. Our B-spline interpolation is based on a fixed number of points, 2^j, depending on the level of detail desired. Through this spatial multiresolution approach, the computational cost of the segmentation is reduced. We also introduce a length penalty, which improves both smoothness and accuracy. Finally, we show some experiments on real video sequences.

  7. Research on Electronic Transformer Data Synchronization Based on Interpolation Methods and Their Error Analysis

    Directory of Open Access Journals (Sweden)

    Pang Fubin

    2015-09-01

    Full Text Available In this paper, the origin of the data synchronization problem is analyzed first, and then three common interpolation methods are introduced to solve it. Allowing for the most general situation, the paper divides the interpolation error into harmonic and transient components, and the error expression of each method is derived and analyzed. In addition, the interpolation errors of the linear, quadratic and cubic methods are computed at different sampling rates, harmonic orders and transient components, and the interpolation accuracy and computational cost of each method are compared. The research results provide theoretical guidance for selecting the interpolation method in data synchronization applications for electronic transformers.
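
    The ranking of interpolation errors by polynomial order is easy to reproduce on a synthetic harmonic signal. The sampling rate and harmonic content below are assumed for illustration, not taken from the paper:

```python
import numpy as np
from scipy.interpolate import interp1d

fs = 4000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 0.04, 1 / fs)                # samples of a 50 Hz signal + 5th harmonic
x = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 250 * t)

t_fine = np.arange(0, 0.039, 1 / (8 * fs))    # common, denser time base
x_true = np.sin(2 * np.pi * 50 * t_fine) + 0.1 * np.sin(2 * np.pi * 250 * t_fine)

# Maximum resampling error of each interpolation order against the true signal.
errors = {}
for kind in ("linear", "quadratic", "cubic"):
    f = interp1d(t, x, kind=kind)
    errors[kind] = float(np.max(np.abs(f(t_fine) - x_true)))
```

    For band-limited signals sampled well above the Nyquist rate, the error of the cubic method is orders of magnitude below the linear one, at the price of a larger computational cost per resampled point.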

  8. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    Energy Technology Data Exchange (ETDEWEB)

    M Ali, M. K., E-mail: majidkhankhan@ymail.com, E-mail: eutoco@gmail.com; Ruslan, M. H., E-mail: majidkhankhan@ymail.com, E-mail: eutoco@gmail.com [Solar Energy Research Institute (SERI), Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia); Muthuvalu, M. S., E-mail: sudaram-@yahoo.com, E-mail: jumat@ums.edu.my; Wong, J., E-mail: sudaram-@yahoo.com, E-mail: jumat@ums.edu.my [Unit Penyelidikan Rumpai Laut (UPRL), Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia); Sulaiman, J., E-mail: ysuhaimi@ums.edu.my, E-mail: hafidzruslan@eng.ukm.my; Yasir, S. Md., E-mail: ysuhaimi@ums.edu.my, E-mail: hafidzruslan@eng.ukm.my [Program Matematik dengan Ekonomi, Sekolah Sains dan Teknologi, Universiti Malaysia Sabah, 88400 Kota Kinabalu, Sabah (Malaysia)

    2014-06-19

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under Malaysian meteorological conditions. Drying of the sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m{sup 2} and a mass flow rate of about 0.5 kg/s. The plots of drying rate generally need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of the method is to approximate the data by a CS regression having continuous first and second derivatives; analytical differentiation of the spline regression then permits the instantaneous drying rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the fitting problem. The drying kinetics was fitted with six published exponential thin-layer drying models, using the coefficient of determination (R{sup 2}) and the root mean square error (RMSE) as goodness-of-fit criteria. The Two Term model was found to describe the drying behavior best. Moreover, the drying rate smoothed using the CS proves an effective estimator for the moisture-time curves, as well as for missing moisture content data of the seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.
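
    The smoothing-spline-plus-analytic-derivative idea can be sketched with SciPy's UnivariateSpline on a synthetic moisture curve. The exponential decay, noise level, and smoothing factor below are assumed, not the experimental data:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.linspace(0.0, 96.0, 25)                     # drying time, h (assumed schedule)
M_true = 93.4 * np.exp(-0.035 * t)                 # synthetic moisture content, %
M_noisy = M_true + np.random.default_rng(1).normal(0.0, 0.5, t.size)

# Cubic smoothing-spline regression: k=3 gives continuous first and second
# derivatives; s sets the smoothing/fidelity trade-off (~ n * noise_variance).
spline = UnivariateSpline(t, M_noisy, k=3, s=t.size * 0.5**2)

# Instantaneous drying rate from the analytic derivative of the fitted spline.
drying_rate = -spline.derivative()(t)
```

    Differentiating the smoothed spline rather than the raw data avoids the noise amplification of finite differences, which is why the drying-rate curve comes out usable at low rates.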

  9. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    International Nuclear Information System (INIS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-01-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under Malaysian meteorological conditions. Drying of the sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. The plots of drying rate generally need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of the method is to approximate the data by a CS regression having continuous first and second derivatives; analytical differentiation of the spline regression then permits the instantaneous drying rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the fitting problem. The drying kinetics was fitted with six published exponential thin-layer drying models, using the coefficient of determination (R2) and the root mean square error (RMSE) as goodness-of-fit criteria. The Two Term model was found to describe the drying behavior best. Moreover, the drying rate smoothed using the CS proves an effective estimator for the moisture-time curves, as well as for missing moisture content data of the seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  10. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    Science.gov (United States)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under Malaysian meteorological conditions. Drying of the sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. The plots of drying rate generally need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) is shown to be effective for moisture-time curves. The idea of the method is to approximate the data by a CS regression having continuous first and second derivatives; analytical differentiation of the spline regression then permits the instantaneous drying rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the fitting problem. The drying kinetics was fitted with six published exponential thin-layer drying models, using the coefficient of determination (R2) and the root mean square error (RMSE) as goodness-of-fit criteria. The Two Term model was found to describe the drying behavior best. Moreover, the drying rate smoothed using the CS proves an effective estimator for the moisture-time curves, as well as for missing moisture content data of the seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  11. UV- Radiation Absorption by Ozone in a Model Atmosphere using ...

    African Journals Online (AJOL)

    UV radiation absorption is studied through the variation of ozone transmittance with altitude in the atmosphere for radiation in the 9.6μm absorption band, using Goody's model atmosphere with the cubic spline interpolation technique to improve the quality of the curve. The data, comprising pressure and temperature at different ...

  12. Spline and spline wavelet methods with applications to signal and image processing

    CERN Document Server

    Averbuch, Amir Z; Zheludev, Valery A

    This volume provides universal methodologies, accompanied by Matlab software, to manipulate numerous signal and image processing applications. It is done with discrete and polynomial periodic splines. Various contributions of splines to signal and image processing are presented from a unified perspective. This presentation is based on the Zak transform and on the Spline Harmonic Analysis (SHA) methodology. SHA combines the approximation capabilities of splines with the computational efficiency of the Fast Fourier transform. SHA reduces the design of different spline types such as splines, spline wavelets (SW), wavelet frames (SWF) and wavelet packets (SWP), and their manipulation, to simple operations. Digital filters produced by the wavelet design process give rise to subdivision schemes, which enable fast explicit computation of spline values at dyadic and triadic rational points; this is used for upsampling signals and images. In addition to the design of a diverse library of splines, SW, SWP a...

  13. Optimized Quasi-Interpolators for Image Reconstruction.

    Science.gov (United States)

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.

  14. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    Science.gov (United States)

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
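
    A closely related and simpler polynomial correction, the widely used two-point polyBLEP, illustrates the principle of adding a short polynomial residual around each discontinuity. Note this is a stand-in sketch, not the paper's integrated third-order B-spline correction:

```python
import numpy as np

def poly_blep(t, dt):
    """Two-point polynomial band-limited step residual (polyBLEP).

    A simple quadratic correction applied around each discontinuity; the
    paper's integrated B-spline corrections follow the same principle with
    higher-order kernels."""
    if t < dt:                          # just after the phase wrap
        t /= dt
        return t + t - t * t - 1.0
    if t > 1.0 - dt:                    # just before the phase wrap
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def sawtooth(freq, sr, n):
    """Sawtooth oscillator with a polyBLEP correction at each phase wrap."""
    dt = freq / sr                      # phase increment per sample
    phase, out = 0.0, np.empty(n)
    for i in range(n):
        out[i] = 2.0 * phase - 1.0 - poly_blep(phase, dt)
        phase = (phase + dt) % 1.0
    return out

wave = sawtooth(440.0, 44100.0, 1024)
```

    Away from the discontinuities the correction is exactly zero, so the oscillator costs the same as a naive sawtooth for most samples while the residual rounds off each step and suppresses aliasing.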

  15. Gauss-Galerkin quadrature rules for quadratic and cubic spline spaces and their application to isogeometric analysis

    KAUST Repository

    Barton, Michael; Calo, Victor M.

    2016-01-01

    We introduce Gaussian quadrature rules for spline spaces that are frequently used in Galerkin discretizations to build mass and stiffness matrices. By definition, these spaces are of even degrees. The optimal quadrature rules we recently derived

  16. Statistical properties of turbulence: An overview

    Indian Academy of Sciences (India)

    the turbulent advection of passive scalars, turbulence in the one-dimensional Burgers equation, and fluid turbulence in the presence of polymer ... However, it is not easy to state what would constitute a solution of the turbulence ... flow with Lagrangian tracers and use a cubic spline interpolation method to calculate their ...

  17. Time Reversal Reconstruction Algorithm Based on PSO Optimized SVM Interpolation for Photoacoustic Imaging

    Directory of Open Access Journals (Sweden)

    Mingjian Sun

    2015-01-01

    Full Text Available Photoacoustic imaging is an innovative technique for imaging biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on a particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation method is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of the time reversal algorithms based on nearest neighbor, linear, and cubic convolution interpolation, and that it can provide higher imaging quality using significantly fewer measurement positions or scanning times.

  18. Department of Physics and Mathematical Sciences, Tai Solarin ...

    African Journals Online (AJOL)

    Variations in ozone transmittance with height in the atmosphere for radiation in the 9.6μm absorption band were studied using Goody's model atmosphere with the cubic spline interpolation technique to improve the quality of the curve. The data comprised pressure and temperature at different altitudes (0-22 km) for the months of ...

  19. Ozone transmittance in a model atmosphere at Ikeja, Lagos state ...

    African Journals Online (AJOL)

    Variation of ozone transmittance with height in the atmosphere for radiation in the 9.6μm absorption band was studied using Goody's model atmosphere, with the cubic spline interpolation technique to improve the quality of the curve. The data, comprising pressure and temperature at different altitudes (0-22 km) for the month of ...

  20. XRLINE, a program to evaluate the crystallite size of supported metal catalysts by single X-ray profile Fourier analysis

    International Nuclear Information System (INIS)

    Aldea, N.; Indrea, E.

    1990-01-01

    The computer program presented is based on the Fourier analysis of a single X-ray diffraction profile. An X-ray diffraction method is presented which is capable of determining the average particle size, microstrain and stacking fault probability, as well as the particle size distribution function, in crystalline materials. The main numerical methods used are: (i) smoothing and interpolation by 3rd-order piecewise polynomial functions or by cubic splines with the least squares method; (ii) numerical integration by successive five-point formulae and numerical differentiation by cubic splines with the least squares method; (iii) estimation of parameters by the weighted least squares method. Results for supported platinum catalysts used in the H/D isotopic exchange reaction are illustrated. (orig.)

  1. A fourth order spline collocation approach for a business cycle model

    Science.gov (United States)

    Sayfy, A.; Khoury, S.; Ibdah, H.

    2013-10-01

    A collocation approach based on fourth-order cubic B-splines is presented for the numerical solution of a Kaleckian business cycle model formulated by a nonlinear delay differential equation. The equation is approximated and the nonlinearity is handled by employing an iterative scheme arising from Newton's method. It is shown that the model exhibits a conditionally stable dynamical cycle. The fourth-order rate of convergence of the scheme is verified numerically for different special cases.

  2. Gauss-Galerkin quadrature rules for quadratic and cubic spline spaces and their application to isogeometric analysis

    KAUST Repository

    Barton, Michael

    2016-07-21

    We introduce Gaussian quadrature rules for spline spaces that are frequently used in Galerkin discretizations to build mass and stiffness matrices. By definition, these spaces are of even degrees. The optimal quadrature rules we recently derived (Bartoň and Calo, 2016) act on spaces of the smallest odd degrees and, therefore, are still slightly sub-optimal. In this work, we derive optimal rules directly for even-degree spaces and therefore further improve our recent result. We use optimal quadrature rules for spaces over two elements as elementary building blocks, and recursively use the homotopy continuation concept described in Bartoň and Calo (2016) to derive optimal rules for arbitrary admissible numbers of elements. We demonstrate the proposed methodology on relevant examples, where we derive optimal rules for various even-degree spline spaces. We also discuss the convergence of our rules to their asymptotic counterparts, which are the analogues of the midpoint rule of Hughes et al. (2010) and are exact and optimal for infinite domains.
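
    The optimality notion generalizes that of classical Gaussian quadrature, where an n-point rule is exact for all polynomials up to degree 2n-1; the single-element polynomial case is easy to verify numerically:

```python
import numpy as np

# 3-point Gauss-Legendre on [-1, 1]: exact for all polynomials up to degree 5.
nodes, weights = np.polynomial.legendre.leggauss(3)
integral = float(np.sum(weights * nodes**4))       # quadrature of x^4; exact value 2/5
```

    Spline Gauss rules extend this economy to piecewise-polynomial spaces: because a spline space over many elements has fewer degrees of freedom than independent polynomials on each element, fewer quadrature points than element-by-element Gauss quadrature suffice for exactness.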

  3. Diabat Interpolation for Polymorph Free-Energy Differences.

    Science.gov (United States)

    Kamat, Kartik; Peters, Baron

    2017-02-02

    Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method ( J. Comput. Phys. 1976, 22, 245 ) can be combined with energy gaps from lattice-switch Monte Carlo techniques ( Phys. Rev. E 2000, 61, 906 ) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.
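
    Under the parabolic (equal-curvature) diabat model the abstract mentions, the free-energy difference reduces to the familiar linear-response average of the energy gap over the two ensembles. A toy sketch with synthetic Gaussian gap samples (all numbers assumed, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical energy-gap samples dE = E_B - E_A, one unbiased run per polymorph.
gap_in_A = rng.normal(2.0, 0.8, 5000)   # gap sampled while simulating polymorph A
gap_in_B = rng.normal(1.0, 0.8, 5000)   # gap sampled while simulating polymorph B

# For parabolic diabats of equal curvature, linear response gives
# dF = (<dE>_A + <dE>_B) / 2.
dF = 0.5 * (gap_in_A.mean() + gap_in_B.mean())
```

    The appeal of the approach is exactly what the abstract states: two ordinary simulations and two ensemble averages replace a multistage free-energy perturbation.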

  4. The Asymptotic Behavior of Particle Size Distribution Undergoing Brownian Coagulation Based on the Spline-Based Method and TEMOM Model

    Directory of Open Access Journals (Sweden)

    Qing He

    2018-01-01

    Full Text Available In this paper, the particle size distribution is reconstructed from finite moments using a converted spline-based method, in which the number of linear equations to be solved is reduced from 4m × 4m to (m + 3) × (m + 3) for (m + 1) nodes by using cubic splines, compared with the original method. The results are first verified against a reference. Then, coupling with the Taylor-series expansion moment method, the evolution of the particle size distribution undergoing Brownian coagulation and its asymptotic behavior are investigated.

  5. Calibration method of microgrid polarimeters with image interpolation.

    Science.gov (United States)

    Chen, Zhenyue; Wang, Xia; Liang, Rongguang

    2015-02-10

    Microgrid polarimeters have large advantages over conventional polarimeters because of their snapshot nature and lack of moving parts. However, they also suffer from several error sources, such as fixed pattern noise (FPN), photo response nonuniformity (PRNU), pixel cross talk, and instantaneous field-of-view (IFOV) error. A characterization method is proposed to improve the measurement accuracy in the visible waveband. We first calibrate the camera with uniform illumination so that the response of the sensor is uniform over the entire field of view without IFOV error. Then a spline interpolation method is implemented to minimize the IFOV error. Experimental results show that the proposed method can effectively minimize the FPN and PRNU.

  6. Resistor mesh model of a spherical head: part 1: applications to scalp potential interpolation.

    Science.gov (United States)

    Chauveau, N; Morucci, J P; Franceries, X; Celsis, P; Rigaud, B

    2005-11-01

    A resistor mesh model (RMM) has been implemented to describe the electrical properties of the head and the configuration of the intracerebral current sources by simulation of forward and inverse problems in electroencephalogram/event related potential (EEG/ERP) studies. For this study, the RMM representing the three basic tissues of the human head (brain, skull and scalp) was superimposed on a spherical volume mimicking the head volume: it included 43 102 resistances and 14 123 nodes. The validation was performed with reference to the analytical model by consideration of a set of four dipoles close to the cortex. Using the RMM and the chosen dipoles, four distinct families of interpolation technique (nearest neighbour, polynomial, splines and lead fields) were tested and compared so that the scalp potentials could be recovered from the electrode potentials. The 3D spline interpolation and the inverse forward technique (IFT) gave the best results. The IFT is very easy to use when the lead-field matrix between scalp electrodes and cortex nodes has been calculated. By simple application of the Moore-Penrose pseudo inverse matrix to the electrode cap potentials, a set of current sources on the cortex is obtained. Then, the forward problem using these cortex sources renders all the scalp potentials.

  7. Fuzzy topological digital space and digital fuzzy spline of electroencephalography during epileptic seizures

    Science.gov (United States)

    Shah, Mazlina Muzafar; Wahab, Abdul Fatah

    2017-08-01

    Epilepsy occurs when there is a temporary electrical disturbance in a group of brain cells (neurons). The recording of the electrical signals of the human brain, collected from the scalp, is called electroencephalography (EEG). EEG data considered in digital format and in fuzzy form constitute fuzzy digital space data. The purpose of this research is to identify the areas (curves and surfaces) in fuzzy digital space affected by epileptic seizures in an epileptic patient's brain. The main focus of this research is to generalize fuzzy topological digital space, giving its definition, basic operations and properties using digital fuzzy sets and their operations. Using fuzzy digital space, the theory of digital fuzzy splines can be introduced to replace the grid data used previously and obtain better results. As a result, the flat of the EEG can be treated as a fuzzy topological digital space, and this type of data can be used to interpolate the digital fuzzy spline.

  8. Coupled B-snake grids and constrained thin-plate splines for analysis of 2-D tissue deformations from tagged MRI.

    Science.gov (United States)

    Amini, A A; Chen, Y; Curwen, R W; Mani, V; Sun, J

    1998-06-01

    Magnetic resonance imaging (MRI) is unique in its ability to noninvasively and selectively alter tissue magnetization and create tagged patterns within a deforming body such as the heart muscle. The resulting patterns define a time-varying curvilinear coordinate system on the tissue, which we track with coupled B-snake grids. B-spline bases provide local control of shape, compact representation, and parametric continuity. Efficient spline warps are proposed which warp an area in the plane such that two embedded snake grids obtained from two tagged frames are brought into registration, interpolating a dense displacement vector field. The reconstructed vector field adheres to the known displacement information at the intersections, forces corresponding snakes to be warped into one another, and for all other points in the plane, where no information is available, a C1 continuous vector field is interpolated. The implementation proposed in this paper improves on our previous variational-based implementation and generalizes warp methods to include biologically relevant contiguous open curves, in addition to standard landmark points. The methods are validated with a cardiac motion simulator, in addition to in-vivo tagging data sets.

  9. Spline techniques for magnetic fields

    International Nuclear Information System (INIS)

    Aspinall, J.G.

    1984-01-01

    This report is an overview of B-spline techniques, oriented toward magnetic field computation. These techniques form a powerful mathematical approximating method for many physics and engineering calculations. In section 1, the concept of a polynomial spline is introduced. Section 2 shows how a particular spline with well chosen properties, the B-spline, can be used to build any spline. In section 3, the description of how to solve a simple spline approximation problem is completed, and some practical examples of using splines are shown. All these sections deal exclusively in scalar functions of one variable for simplicity. Section 4 is partly digression. Techniques that are not B-spline techniques, but are closely related, are covered. These methods are not needed for what follows, until the last section on errors. Sections 5, 6, and 7 form a second group which work toward the final goal of using B-splines to approximate a magnetic field. Section 5 demonstrates how to approximate a scalar function of many variables. The necessary mathematics is completed in section 6, where the problems of approximating a vector function in general, and a magnetic field in particular, are examined. Finally some algorithms and data organization are shown in section 7. Section 8 deals with error analysis
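
    The B-spline building block described in section 2 of such overviews is usually evaluated with the Cox-de Boor recursion; a minimal, unoptimized sketch:

```python
import numpy as np

def bspline_basis(i, k, knots, x):
    """Value at x of the i-th B-spline of degree k (Cox-de Boor recursion)."""
    if k == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = ((x - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, knots, x))
    if knots[i + k + 1] > knots[i + 1]:
        right = ((knots[i + k + 1] - x) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, knots, x))
    return left + right

knots = np.arange(8.0)     # uniform knot vector 0..7
x = 3.5                    # a point in the interior span [3, 4)
vals = [bspline_basis(i, 3, knots, x) for i in range(4)]   # the 4 nonzero cubics
```

    On an interior span the nonzero cubic B-splines are non-negative and sum to one (a partition of unity), which is what makes them well-conditioned building blocks for arbitrary splines.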

  10. Spline-based automatic path generation of welding robot

    Institute of Scientific and Technical Information of China (English)

    Niu Xuejuan; Li Liangyu

    2007-01-01

    This paper presents a flexible method for the representation of a welded seam based on spline interpolation. In this method, the tool path of a welding robot can be generated automatically from a 3D CAD model. The technique has been implemented and demonstrated in the FANUC Arc Welding Robot Workstation. Based on the method, a software system was developed using the VBA of SolidWorks 2006. It offers an interface between SolidWorks and ROBOGUIDE, the off-line programming software for FANUC robots, combining the strong modeling function of the former with the simulation function of the latter. It also has the capability of communicating with the on-line robot. Experimental results have shown the high accuracy and strong reliability of the method, which will improve the intelligence and flexibility of the welding robot workstation.

  11. Analysis of moderately thin-walled beam cross-sections by cubic isoparametric elements

    DEFF Research Database (Denmark)

    Høgsberg, Jan Becker; Krenk, Steen

    2014-01-01

    In technical beam theory the six equilibrium states associated with homogeneous tension, bending, shear and torsion are treated as individual load cases. This enables the formulation of weak form equations governing the warping from shear and torsion. These weak form equations are solved numerically by introducing a cubic-linear two-dimensional isoparametric element. The cubic interpolation of this element accurately represents quadratic shear stress variations along cross-section walls, and thus moderately thin-walled cross-sections are effectively discretized by these elements. The ability...

  12. Comparison of spatial interpolation techniques to predict soil properties in the colombian piedmont eastern plains

    Directory of Open Access Journals (Sweden)

    Mauricio Castro Franco

    2017-07-01

    Full Text Available Context: Interpolating soil properties at field scale in the Colombian piedmont eastern plains is challenging due to the high and complex variability of some processes and the effects of soil, land use and management. While interpolation techniques are being adapted to include auxiliary information on these effects, soil data are often difficult to predict using conventional spatial interpolation techniques. Method: In this paper, we evaluated and compared six spatial interpolation techniques: Inverse Distance Weighting (IDW), Spline, Ordinary Kriging (KO), Universal Kriging (UK), Cokriging (Ckg), and Residual Maximum Likelihood-Empirical Best Linear Unbiased Predictor (REML-EBLUP), using conditioned Latin hypercube sampling as the sampling strategy. The ancillary information used in Ckg and REML-EBLUP consisted of terrain indexes calculated from a digital elevation model (DEM). The random forest algorithm was used to select the most important terrain index for each soil property. Error metrics were used to validate the interpolations against cross-validation. Results: The results support the underlying assumption that the conditioned Latin hypercube adequately captured the full distribution of the ancillary variables under the conditions of the Colombian piedmont eastern plains. They also suggest that Ckg and REML-EBLUP perform best in predicting most of the evaluated soil properties. Conclusions: Mixed interpolation techniques incorporating auxiliary soil information and terrain indexes provided a significant improvement in the prediction of soil properties compared with the other techniques.

  13. An improved local radial point interpolation method for transient heat conduction analysis

    Science.gov (United States)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation using the penalty function method according to the optimization theory is presented to deal with transient heat conduction problems. The smooth conditions of the shape functions and derivatives can be satisfied so that the distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of the transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented like the finite element method (FEM) as the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the availability and accuracy of the present approach comparing with the traditional thin plate spline (TPS) radial basis functions.
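
    The thin plate spline machinery referred to above (here in its exact, unsmoothed form, without the penalty term) can be sketched in pure NumPy: the kernel phi(r) = r^2 log r plus affine side conditions is the standard 2-D TPS formulation, and the scattered data below are synthetic.

```python
import numpy as np

def tps_fit(xy, z, reg=0.0):
    """Fit a 2-D thin plate spline: kernel r^2 log r plus an affine part."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    K = np.where(d > 0, d**2 * np.log(d + (d == 0)), 0.0)
    P = np.hstack([np.ones((n, 1)), xy])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K + reg * np.eye(n)   # reg > 0 would give a smoothing spline
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.concatenate([z, np.zeros(3)])
    coef = np.linalg.solve(A, b)
    return coef[:n], coef[n:]

def tps_eval(xy, w, a, xy_query):
    d = np.linalg.norm(xy_query[:, None, :] - xy[None, :, :], axis=2)
    K = np.where(d > 0, d**2 * np.log(d + (d == 0)), 0.0)
    return K @ w + a[0] + xy_query @ a[1:]

rng = np.random.default_rng(1)
xy = rng.uniform(0, 1, (30, 2))
z = np.cos(3 * xy[:, 0]) * xy[:, 1]
w, a = tps_fit(xy, z)
# With reg=0 the spline interpolates exactly at the nodes.
node_err = np.max(np.abs(tps_eval(xy, w, a, xy) - z))
```
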

  14. An improved local radial point interpolation method for transient heat conduction analysis

    International Nuclear Information System (INIS)

    Wang Feng; Lin Gao; Hu Zhi-Qiang; Zheng Bao-Jing

    2013-01-01

    The smoothing thin plate spline (STPS) interpolation using the penalty function method according to the optimization theory is presented to deal with transient heat conduction problems. The smooth conditions of the shape functions and derivatives can be satisfied so that the distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of the transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented like the finite element method (FEM) as the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the availability and accuracy of the present approach comparing with the traditional thin plate spline (TPS) radial basis functions.

  15. Local and Global Path Generation for Autonomous Vehicles Using Splines

    Directory of Open Access Journals (Sweden)

    Randerson Lemos

    2016-05-01

    Full Text Available Abstract Context: Before autonomous vehicles can become a reality in daily situations, outstanding issues regarding autonomous mobility techniques must be solved. Hence, relevant aspects of path planning for terrestrial vehicles are shown. Method: The path planning technique presented uses splines to generate the global route. For this goal, waypoints obtained from online map services are used. With the global route parametrized by arc length, candidate local paths are computed and the optimal one is selected by cost functions. Results: Different routes are used to show that the number and distribution of waypoints are highly correlated with a satisfactory arc-length parameterization of the global route, which is essential to the proper behavior of the path planning technique. Conclusions: The cubic spline approach to the path planning problem successfully generates the global and local paths. Nevertheless, the use of raw data from the online map services proved unfeasible due to inconsistencies in the data. Hence, a preprocessing stage for the raw data is proposed to guarantee the good behavior and robustness of the technique.
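
    A minimal sketch of the global-route step: a natural cubic spline through waypoints, parametrized by cumulative chord length (a common stand-in for arc length). The waypoints below are invented; the tridiagonal construction is the textbook natural spline, not the paper's exact pipeline.

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Return a callable natural cubic spline through (x, y), x strictly increasing."""
    n = len(x)
    h = np.diff(x)
    # Tridiagonal system for the second derivatives M, natural ends M[0] = M[-1] = 0.
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 6 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)

    def s(t):
        t = np.atleast_1d(np.asarray(t, float))
        i = np.clip(np.searchsorted(x, t) - 1, 0, n - 2)
        dx = t - x[i]
        b = (y[i + 1] - y[i]) / h[i] - h[i] * (2 * M[i] + M[i + 1]) / 6
        return y[i] + b * dx + M[i] / 2 * dx**2 + (M[i + 1] - M[i]) / (6 * h[i]) * dx**3
    return s

# Waypoints of a toy global route; parametrize by cumulative chord length.
wp = np.array([[0, 0], [1, 2], [3, 3], [6, 2], [8, 0]], float)
s_arc = np.concatenate([[0], np.cumsum(np.linalg.norm(np.diff(wp, axis=0), axis=1))])
fx = natural_cubic_spline(s_arc, wp[:, 0])
fy = natural_cubic_spline(s_arc, wp[:, 1])
route = np.stack([fx(np.linspace(0, s_arc[-1], 200)),
                  fy(np.linspace(0, s_arc[-1], 200))], axis=1)
```

    Candidate local paths would then be generated as offsets of `route` and ranked by cost functions.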

  16. APLIKASI SPLINE ESTIMATOR TERBOBOT

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2001-01-01

    Full Text Available We consider the nonparametric regression model Zj = X(tj) + ej, j = 1,2,…,n, where X(tj) is the regression curve. The random errors ej are independently normally distributed with zero mean and variance s2/bj, bj > 0. The estimate of X is obtained by minimizing a Weighted Least Square. The solution of this optimization is a Weighted Polynomial Spline. Further, we give an application of the weighted spline estimator in nonparametric regression. Abstract in Bahasa Indonesia (translated): Given the nonparametric regression model Zj = X(tj) + ej, j = 1,2,…,n, with regression curve X(tj) and random errors ej assumed normally distributed with zero mean and variance s2/bj, bj > 0. The estimate of the regression curve X that minimizes a Weighted Penalized Least Square is a Weighted Natural Polynomial Spline estimator. An application of the weighted spline estimator in nonparametric regression is then given. Keywords: Weighted spline, Nonparametric regression, Penalized Least Square.
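
    The weighted penalized criterion above has a simple discrete analogue, sometimes called a Whittaker smoother: minimize sum_j b_j (Z_j - f_j)^2 + lam * ||D2 f||^2 on a grid. The sketch below uses synthetic data and an arbitrary lam; it illustrates the estimator's structure, not the paper's exact spline solution.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
t = np.linspace(0, 1, n)
b = rng.uniform(0.5, 2.0, n)                 # weights: var(e_j) = s^2 / b_j
z = np.sin(2 * np.pi * t) + rng.normal(0, 1 / np.sqrt(b))

# Weighted penalized least squares: (W + lam * D2' D2) f = W z
D2 = np.diff(np.eye(n), n=2, axis=0)         # second-difference penalty matrix
lam = 2.0
W = np.diag(b)
f = np.linalg.solve(W + lam * D2.T @ D2, W @ z)
```

    Observations with larger b_j (higher precision) pull the fitted curve more strongly, exactly as in the weighted spline estimator.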

  17. Isogeometric analysis using T-splines

    KAUST Repository

    Bazilevs, Yuri

    2010-01-01

    We explore T-splines, a generalization of NURBS enabling local refinement, as a basis for isogeometric analysis. We review T-splines as a surface design methodology and then develop it for engineering analysis applications. We test T-splines on some elementary two-dimensional and three-dimensional fluid and structural analysis problems and attain good results in all cases. We summarize the current status of T-splines, their limitations, and future possibilities. © 2009 Elsevier B.V.

  18. Comparison of parametric, orthogonal, and spline functions to model individual lactation curves for milk yield in Canadian Holsteins

    Directory of Open Access Journals (Sweden)

    Corrado Dimauro

    2010-11-01

    Full Text Available Test day records for milk yield of 57,390 first-lactation Canadian Holsteins were analyzed with a linear model that included the fixed effects of herd-test date and days in milk (DIM) interval nested within age and calving season. Residuals from this model were analyzed as a new variable and fitted with a five-parameter model, fourth-order Legendre polynomials, and linear, quadratic and cubic spline models with three knots. The fit of the models was rather poor, with about 30-40% of the curves showing an adjusted R-square lower than 0.20 across all models. Results underline a great difficulty in modelling individual deviations around the mean curve for milk yield. However, the Ali and Schaeffer (five-parameter) model and the fourth-order Legendre polynomials were able to detect two basic shapes of individual deviations around the mean curve. Quadratic and, especially, cubic spline functions had better fitting performance but poor predictive ability due to their great flexibility, which results in abrupt changes of the estimated curve when data are missing. Parametric and orthogonal polynomials seem to be robust and affordable from this standpoint.
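
    One of the competing models, the fourth-order Legendre polynomial, together with the adjusted R-square criterion used above, can be sketched in a few lines. The deviation curve, noise level, and DIM grid below are synthetic stand-ins for the residual data described in the abstract.

```python
import numpy as np
from numpy.polynomial import legendre

# Toy "individual deviation" curve; days in milk (DIM) rescaled to [-1, 1].
rng = np.random.default_rng(3)
dim = np.arange(5, 306, 5).astype(float)
x = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1
y = 0.4 * x - 0.3 * x**2 + rng.normal(0, 0.2, x.size)

coef = legendre.legfit(x, y, deg=4)          # fourth-order Legendre fit
yhat = legendre.legval(x, coef)

# Adjusted R-square, the goodness-of-fit measure used in the comparison.
ss_res = np.sum((y - yhat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
n_obs, n_par = x.size, coef.size
adj_r2 = 1 - (ss_res / (n_obs - n_par)) / (ss_tot / (n_obs - 1))
```
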

  19. Free vibration of symmetric angle ply truncated conical shells under different boundary conditions using spline method

    Energy Technology Data Exchange (ETDEWEB)

    Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y. [Universiti Teknologi Malaysia, Johor Bahru (Malaysia); Pullepu, Babuji [S R M University, Chennai (India)

    2015-05-15

    Free vibration of symmetric angle-ply laminated truncated conical shells is analyzed to determine the effects of the frequency parameter and angular frequencies under different boundary conditions, ply angles, material properties and other parameters. The governing equations of motion for the truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines, resulting in a generalized eigenvalue problem. Parametric studies have been made and discussed.

  20. Free vibration of symmetric angle ply truncated conical shells under different boundary conditions using spline method

    International Nuclear Information System (INIS)

    Viswanathan, K. K.; Aziz, Z. A.; Javed, Saira; Yaacob, Y.; Pullepu, Babuji

    2015-01-01

    Free vibration of symmetric angle-ply laminated truncated conical shells is analyzed to determine the effects of the frequency parameter and angular frequencies under different boundary conditions, ply angles, material properties and other parameters. The governing equations of motion for the truncated conical shell are obtained in terms of displacement functions. The displacement functions are approximated by cubic and quintic splines, resulting in a generalized eigenvalue problem. Parametric studies have been made and discussed.

  1. Hilbertian kernels and spline functions

    CERN Document Server

    Atteia, M

    1992-01-01

    In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline function theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from the minimization of a quadratic functional but more generally for splines considered as piecewise functions. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  2. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    Science.gov (United States)

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze their architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular architecture of femur bone radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40), recorded under standard conditions, are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function multiquadric and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.

  3. NeuroMap: A spline-based interactive open-source software for spatiotemporal mapping of 2D and 3D MEA data

    Directory of Open Access Journals (Sweden)

    Oussama eAbdoun

    2011-01-01

    Full Text Available A major characteristic of neural networks is the complexity of their organization at various spatial scales, from microscopic local circuits to macroscopic brain-scale areas. Understanding how neural information is processed thus entails the ability to study them at multiple scales simultaneously. This is made possible using microelectrode array (MEA) technology. Indeed, high-density MEAs provide large-scale coverage (several mm²) of whole neural structures combined with microscopic resolution (about 50 µm) of unit activity. Yet, current options for spatiotemporal representation of MEA-collected data remain limited. Here we present NeuroMap, a new interactive Matlab-based software for spatiotemporal mapping of MEA data. NeuroMap uses thin plate spline interpolation, which provides several assets with respect to conventional mapping methods used currently. First, any MEA design can be considered, including 2D or 3D, regular or irregular, arrangements of electrodes. Second, spline interpolation allows the estimation of activity across the tissue with local extrema not necessarily at recording sites. Finally, this interpolation approach provides a straightforward analytical estimation of the spatial Laplacian for better current source localization. In this software, coregistration of 2D MEA data on the anatomy of the neural tissue is made possible by fine matching of anatomical data with electrode positions using rigid-deformation-based correction of anatomical pictures. Overall, NeuroMap provides substantial material for detailed spatiotemporal analysis of MEA data. The package is distributed under GNU General Public License (GPL) and available at http://sites.google.com/site/neuromapsoftware.

  4. NeuroMap: A Spline-Based Interactive Open-Source Software for Spatiotemporal Mapping of 2D and 3D MEA Data.

    Science.gov (United States)

    Abdoun, Oussama; Joucla, Sébastien; Mazzocco, Claire; Yvert, Blaise

    2011-01-01

    A major characteristic of neural networks is the complexity of their organization at various spatial scales, from microscopic local circuits to macroscopic brain-scale areas. Understanding how neural information is processed thus entails the ability to study them at multiple scales simultaneously. This is made possible using microelectrodes array (MEA) technology. Indeed, high-density MEAs provide large-scale coverage (several square millimeters) of whole neural structures combined with microscopic resolution (about 50 μm) of unit activity. Yet, current options for spatiotemporal representation of MEA-collected data remain limited. Here we present NeuroMap, a new interactive Matlab-based software for spatiotemporal mapping of MEA data. NeuroMap uses thin plate spline interpolation, which provides several assets with respect to conventional mapping methods used currently. First, any MEA design can be considered, including 2D or 3D, regular or irregular, arrangements of electrodes. Second, spline interpolation allows the estimation of activity across the tissue with local extrema not necessarily at recording sites. Finally, this interpolation approach provides a straightforward analytical estimation of the spatial Laplacian for better current sources localization. In this software, coregistration of 2D MEA data on the anatomy of the neural tissue is made possible by fine matching of anatomical data with electrode positions using rigid-deformation-based correction of anatomical pictures. Overall, NeuroMap provides substantial material for detailed spatiotemporal analysis of MEA data. The package is distributed under GNU General Public License and available at http://sites.google.com/site/neuromapsoftware.

  5. Smooth Phase Interpolated Keying

    Science.gov (United States)

    Borah, Deva K.

    2007-01-01

    that result in equal numbers of clockwise and counter-clockwise phase rotations for equally likely symbols. The purpose served by assigning phase values in this way is to prevent unnecessary generation of spectral lines and prevent net shifts of the carrier signal. In the phase-interpolation step, the smooth phase values are interpolated over a number, n, of consecutive symbols (including the present symbol) by means of an unconventional spline curve fit.

  6. Symmetric, discrete fractional splines and Gabor systems

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2006-01-01

    In this paper we consider fractional splines as windows for Gabor frames. We introduce two new types of symmetric, fractional splines in addition to one found by Unser and Blu. For the finite, discrete case we present two families of splines: one is created by sampling and periodizing the continuous splines, and one is a truly finite, discrete construction. We discuss the properties of these splines and their usefulness as windows for Gabor frames and Wilson bases.

  7. GIS-based spatial interpolation of air temperature in Xinjiang

    Institute of Scientific and Technical Information of China (English)

    陈鹏翔; 毛炜峰

    2012-01-01

    Using the 1971-2010 annual mean air temperature from 99 meteorological stations in Xinjiang as the data source, rasterization of regional temperature data was studied by combining multiple regression with spatial interpolation. A multiple regression model relating annual mean temperature to station longitude, latitude and elevation was established. The residuals were interpolated with three widely used spatial interpolation methods: Inverse Distance Weighting (IDW), ordinary kriging and spline. Cross-validation and comparison of the three methods based on MAE and RMSE show that, in the GIS interpolation scheme for annual mean temperature in Xinjiang, IDW is overall more accurate than the other two methods. (Translated from Chinese.) Direct spatial interpolation on platforms such as Surfer and GrADS has been widely used for rasterizing air temperature data, but the interpolation techniques involved (spline, IDW, Lagrange, Hermite interpolation, etc.) do not take into account the effect of topography on the air temperature distribution. In recent years, with expanding applications of GIS technology, regression models on geographic factors (elevation, longitude, latitude, etc.) combined with spatial interpolation have been used to grid regional climate variables with good results. In this paper, regression analysis combined with GIS spatial interpolation was used to rasterize the 1971-2010 annual mean air temperatures of the Xinjiang region; 99 meteorological stations (10 of them reserved for verification) were involved in the calculation. First, a multiple regression model of mean temperature was established with the temperatures measured by the weather stations (excluding the verification stations) as the output variable and the gridded longitude, latitude and altitude data of the stations as input variables, yielding the regression equation and a temperature residual for each weather station; secondly, calculate the air
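
    The two-step scheme described above (regression on geographic factors, then spatial interpolation of the residuals) can be sketched as follows. All station coordinates, elevations, temperatures, and the query point below are synthetic stand-ins, and the residual step uses a simple IDW, matching the method the study found most accurate.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 80
lon, lat = rng.uniform(75, 95, n), rng.uniform(35, 49, n)
elev = rng.uniform(0, 4000, n)
# Toy annual-mean temperatures: ~ -6.5 K/km lapse rate plus a latitude trend + noise.
temp = 30 - 0.0065 * elev - 0.7 * (lat - 35) + rng.normal(0, 0.3, n)

# Step 1: multiple regression on longitude, latitude and elevation.
X = np.column_stack([np.ones(n), lon, lat, elev])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ beta

# Step 2: IDW interpolation of the residual field at a query grid cell.
def idw(xy, v, q, p=2.0):
    d = np.linalg.norm(xy - q, axis=1)
    w = 1.0 / (d ** p + 1e-12)
    return np.sum(w * v) / np.sum(w)

q_lon, q_lat, q_elev = 85.0, 42.0, 1500.0
trend = np.array([1.0, q_lon, q_lat, q_elev]) @ beta
estimate = trend + idw(np.column_stack([lon, lat]), resid, np.array([q_lon, q_lat]))
```
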

  8. Validation of China-wide interpolated daily climate variables from 1960 to 2011

    Science.gov (United States)

    Yuan, Wenping; Xu, Bing; Chen, Zhuoqi; Xia, Jiangzhou; Xu, Wenfang; Chen, Yang; Wu, Xiaoxu; Fu, Yang

    2015-02-01

    Temporally and spatially continuous meteorological variables are increasingly in demand to support many different types of applications related to climate studies. Using measurements from 600 climate stations, a thin-plate spline method was applied to generate daily gridded climate datasets for mean air temperature, maximum temperature, minimum temperature, relative humidity, sunshine duration, wind speed, atmospheric pressure, and precipitation over China for the period 1961-2011. A comprehensive evaluation of interpolated climate was conducted at 150 independent validation sites. The results showed superior performance for most of the estimated variables. Except for wind speed, determination coefficients ( R 2) varied from 0.65 to 0.90, and interpolations showed high consistency with observations. Most of the estimated climate variables showed relatively consistent accuracy among all seasons according to the root mean square error, R 2, and relative predictive error. The interpolated data correctly predicted the occurrence of daily precipitation at validation sites with an accuracy of 83 %. Moreover, the interpolation data successfully explained the interannual variability trend for the eight meteorological variables at most validation sites. Consistent interannual variability trends were observed at 66-95 % of the sites for the eight meteorological variables. Accuracy in distinguishing extreme weather events differed substantially among the meteorological variables. The interpolated data identified extreme events for the three temperature variables, relative humidity, and sunshine duration with an accuracy ranging from 63 to 77 %. However, for wind speed, air pressure, and precipitation, the interpolation model correctly identified only 41, 48, and 58 % of extreme events, respectively. 
The validation indicates that the interpolations can be applied with high confidence for the three temperature variables, as well as relative humidity and sunshine duration based

  9. Splines employment for inverse problem of nonstationary thermal conduction

    International Nuclear Information System (INIS)

    Nikonov, S.P.; Spolitak, S.I.

    1985-01-01

    An analytical solution has been obtained for an inverse problem of nonstationary thermal conduction, which arises in the processing of nonstationary heat transfer data when rewetting is investigated in channels with uniform annular fuel element imitators. In solving the problem, both the boundary conditions and the power density within the imitator are regularized via cubic splines constructed using the Reinsch algorithm. The solution can be applied to the calculation of the temperature distribution in the imitator and of the heat flux in a two-dimensional approximation (r-z geometry), provided the rewetting front velocity is known, and in a one-dimensional r-approximation in cases with negligible axial transport or when data about the velocity of the temperature disturbance source along the channel are lacking.

  10. A mathematical approach for estimating light absorption by a crop from continuous radiation measurements and restricted absorption data

    International Nuclear Information System (INIS)

    Zanetti, P.; Delfine, S.; Alvino, A.

    1999-01-01

    A sunflower (Helianthus annuus L.) crop was grown under four different water regimes to obtain different canopy growth and light absorption capability. The incoming solar radiation was recorded by an agrometeorological field station, while the percentage absorbed by the crop was measured by a ceptometer at four times and on a quasi-daily basis over all the reproductive phases. Triangulation on these data points and cubic interpolation were used to model the radiation absorbed by the canopies over time. In order to validate this approach, the procedure was also applied to a small subset of the data. Numerical quadrature based on an adaptive recursive Simpson's rule was used to integrate the radiation absorbed by the canopies. The numerical quadrature was applied (i) to the whole data set, represented by a cubic two-dimensional spline interpolation function, and (ii) to the interpolated values obtained from the restricted data set. The differences between (i) and (ii) for the four water regimes varied from approximately 3.6 to 5.2%. These comparisons demonstrate the potential of a restricted-data interpolation model for investigating the complex phenomena of light interception by canopies with different plant structure. (author)
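
    The adaptive recursive Simpson's rule used for the integration step above is a classical algorithm and can be sketched in a few lines; the diurnal radiation curve below is an invented smooth test function, not the experimental data.

```python
import math

def adaptive_simpson(f, a, b, tol=1e-8):
    """Adaptive recursive Simpson quadrature with Richardson error control."""
    def simpson(lo, hi):
        mid = (lo + hi) / 2
        return (hi - lo) / 6 * (f(lo) + 4 * f(mid) + f(hi))

    def recurse(lo, hi, whole, tol):
        mid = (lo + hi) / 2
        left, right = simpson(lo, mid), simpson(mid, hi)
        if abs(left + right - whole) < 15 * tol:
            return left + right + (left + right - whole) / 15
        return recurse(lo, mid, left, tol / 2) + recurse(mid, hi, right, tol / 2)

    return recurse(a, b, simpson(a, b), tol)

# Toy diurnal absorbed-radiation curve, integrated from 06:00 to 18:00.
rad = lambda h: max(0.0, 800 * math.sin(math.pi * (h - 6) / 12))
total = adaptive_simpson(rad, 6, 18)   # exact value is 19200 / pi
```
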

  11. SEDRX: A computer program for the simulation Si(Li) and Ge(Hp) x-ray detectors efficiency

    International Nuclear Information System (INIS)

    Benamar, M.A.; Benouali, A.; Tchantchane, A.; Azbouche, A.; Tobbeche, S. (Centre de Developpement des Techniques Nucleaires, Algiers; Labo. des Techniques Nucleaires)

    1992-12-01

    The difficulties encountered in measuring x-ray detector efficiency motivated the development of a computer program to simulate this parameter. The program computes the efficiency of detectors as a function of energy. The computation is based on fitted absorption coefficients for the photoelectric, coherent and incoherent processes. These coefficients are given by the McMaster library or may be determined by interpolation based on cubic splines.
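
    The interpolation step can be illustrated as follows. Attenuation cross-sections are close to power laws between absorption edges, so they are conventionally interpolated on log-log axes; the sketch uses piecewise-linear log-log interpolation for brevity where the program described uses cubic splines, and the tabulated values and crystal geometry are order-of-magnitude placeholders, not McMaster data.

```python
import numpy as np

# Illustrative mass attenuation coefficients for Si (placeholder values).
E = np.array([5.0, 10.0, 20.0, 30.0, 60.0])        # photon energy, keV
mu = np.array([245.0, 33.9, 4.46, 1.44, 0.317])    # mu/rho, cm^2/g

def mu_at(e):
    # Near power-law behavior -> interpolate in log-log space.
    return np.exp(np.interp(np.log(e), np.log(E), np.log(mu)))

# Intrinsic efficiency of a 3 mm Si crystal (rho = 2.33 g/cm^3), windows ignored.
eff = 1.0 - np.exp(-mu_at(15.0) * 2.33 * 0.3)
```
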

  12. The Control Based on Internal Average Kinetic Energy in Complex Environment for Multi-robot System

    Science.gov (United States)

    Yang, Mao; Tian, Yantao; Yin, Xianghua

    In this paper, a reference trajectory is designed to minimize the energy consumed by a multi-robot system, adopting nonlinear programming and cubic spline interpolation. The control strategy is composed of two levels: the lower level is a simple PD control, and the upper level is based on the internal average kinetic energy of the multi-robot system in a complex environment with velocity damping. Simulation tests verify the effectiveness of this control strategy.

  13. Geometric and computer-aided spline hob modeling

    Science.gov (United States)

    Brailov, I. G.; Myasoedova, T. M.; Panchuk, K. L.; Krysova, I. V.; Rogoza, YU A.

    2018-03-01

    The paper considers acquiring a geometric model of a spline hob. The objective of the research is the development of a mathematical model of a spline hob for spline shaft machining. The structure of the spline hob is described taking into consideration the motion parameters of the machine tool system for cutting edge positioning and orientation. The computer-aided study is performed with the use of CAD on the basis of 3D modeling methods. Vector representation of the cutting edge geometry is adopted as the principal method for developing the spline hob mathematical model. The paper defines the correlations described by parametric vector functions representing helical cutting edges designed for spline shaft machining, with consideration for helical movement in two dimensions. An application for acquiring the 3D model of the spline hob is developed on the basis of AutoLISP for the AutoCAD environment. The application provides the opportunity to use the acquired model for simulation of the milling process. An example of evaluation, analytical representation and computer modeling of the proposed geometric model is reviewed. In this example, key spline hob parameters are calculated to ensure the capability of hobbing a spline shaft of standard design. The polygonal and solid spline hob 3D models are acquired by means of imitational computer modeling.

  14. An integral conservative gridding--algorithm using Hermitian curve interpolation.

    Science.gov (United States)

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-11-07

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to

  15. An integral conservative gridding-algorithm using Hermitian curve interpolation

    International Nuclear Information System (INIS)

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-01-01

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to
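
    The key idea above, a single parameter that trades interpolation fidelity against overshoot, can be sketched with a tension-scaled cubic Hermite interpolant. This is a simplified stand-in for the parametrized Hermitian curve of the paper (which also enforces integral conservation); the step-like data are synthetic.

```python
import numpy as np

def hermite_interp(x, y, xq, tension=1.0):
    """Piecewise cubic Hermite interpolation with a single overshoot parameter.
    End slopes are tension * finite differences: tension=1 is Catmull-Rom-like
    and may overshoot; tension=0 zeroes the slopes, so the curve stays within
    the local data range."""
    m = np.gradient(y, x) * tension
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * y[i] + h10 * h * m[i] + h01 * y[i + 1] + h11 * h * m[i + 1]

# Step-like data, where unconstrained cubics overshoot the data range [0, 1].
x = np.arange(8.0)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1.0])
xq = np.linspace(0, 7, 141)
loose = hermite_interp(x, y, xq, tension=1.0)   # dips below 0 and above 1
tight = hermite_interp(x, y, xq, tension=0.0)   # stays within [0, 1]
```
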

  16. Reconstruction calculation of pin power for ship reactor core

    International Nuclear Information System (INIS)

    Li Haofeng; Shang Xueli; Chen Wenzhen; Wang Qiao

    2010-01-01

    Aiming at the limitation that the pin power distribution for a ship reactor core was unavailable in existing software, calculation models and methods for the axial and radial pin power distributions are proposed. Reconstruction of pin power along the axial and radial directions was carried out by bicubic and bilinear interpolation and by cubic spline interpolation, respectively. The results were compared with those obtained by professional reactor physics software using fine-mesh finite differences. It is shown that the reconstruction calculation of pin power is simple, reliable and accurate, which provides an important theoretical basis for the safety analysis and operating administration of ship nuclear reactors. (authors)
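
    The bilinear building block of such a reconstruction can be sketched as below: nodal (coarse-mesh) powers are interpolated to an arbitrary pin location. The 4x4 nodal map is an invented separable example, not reactor data, and the bicubic/spline refinements of the paper are omitted.

```python
import numpy as np

def bilinear(xg, yg, F, x, y):
    """Bilinear interpolation of nodal values F[i, j] given at (xg[i], yg[j])."""
    i = np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2)
    j = np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2)
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])
    return ((1 - tx) * (1 - ty) * F[i, j] + tx * (1 - ty) * F[i + 1, j]
            + (1 - tx) * ty * F[i, j + 1] + tx * ty * F[i + 1, j + 1])

# Coarse nodal powers over a 4x4 assembly map (arbitrary units, separable toy data).
xg = yg = np.linspace(0.0, 3.0, 4)
F = np.outer([1.0, 1.2, 1.1, 0.9], [0.8, 1.0, 1.3, 1.1])
pin_power = bilinear(xg, yg, F, 1.5, 2.25)
```
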

  17. Comparison of elevation and remote sensing derived products as auxiliary data for climate surface interpolation

    Science.gov (United States)

    Alvarez, Otto; Guo, Qinghua; Klinger, Robert C.; Li, Wenkai; Doherty, Paul

    2013-01-01

    Climate models may be limited in their inferential use if they cannot be locally validated or do not account for spatial uncertainty. Much of the focus has gone into determining which interpolation method is best suited for creating gridded climate surfaces, where a covariate such as elevation (from a Digital Elevation Model, DEM) is often used to improve the interpolation accuracy. One key area that little research has addressed is determining which covariate best improves the accuracy of the interpolation. In this study, a comprehensive evaluation was carried out to determine which covariates were most suitable for interpolating climatic variables (e.g. precipitation, mean temperature, minimum temperature, and maximum temperature). We compiled data for each climate variable from 1950 to 1999 from approximately 500 weather stations across the Western United States (32° to 49° latitude and −124.7° to −112.9° longitude). In addition, we examined the uncertainty of the interpolated climate surface. Specifically, Thin Plate Spline (TPS) was used as the interpolation method, since it is one of the most popular interpolation techniques for generating climate surfaces. We considered several covariates, including DEM, slope, distance to coast (Euclidean distance), aspect, solar potential, radar, and two Normalized Difference Vegetation Index (NDVI) products derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). A tenfold cross-validation was applied to determine the uncertainty of the interpolation based on each covariate. In general, the leading covariate for precipitation was radar, while DEM was the leading covariate for maximum, mean, and minimum temperatures. A comparison to other products such as PRISM and WorldClim showed strong agreement across large geographic areas, but the climate surfaces generated in this study (ClimSurf) had greater variability at high elevation regions, such as in the Sierra

  18. Cardinal Basis Piecewise Hermite Interpolation on Fuzzy Data

    Directory of Open Access Journals (Sweden)

    H. Vosoughi

    2016-01-01

    Full Text Available A numerical method, along with an explicit construction, for the interpolation of fuzzy data through the extension principle by means of the widely used fuzzy-valued piecewise Hermite polynomial in the general case, based on cardinal basis functions that satisfy a vanishing property on successive intervals, is introduced here. We provide the numerical method in full detail, using linear-space notions to compute the presented interpolant. To illustrate the method with computational examples, we take recourse to three prime cases: linear, cubic, and quintic.
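The crisp (non-fuzzy) counterpart of piecewise Hermite interpolation can be sketched with SciPy; the nodes and data below are illustrative stand-ins, not taken from the paper:

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Nodes, function values, and derivative values (crisp analogue of the fuzzy data)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
dy = np.cos(x)

# Piecewise cubic Hermite interpolant: matches both y and dy at every node
h = CubicHermiteSpline(x, y, dy)

# The interpolant reproduces the data and its derivative at the nodes
print(np.allclose(h(x), y), np.allclose(h(x, nu=1), dy))  # True True
```

The cardinal-basis viewpoint of the abstract corresponds to writing this same interpolant as a sum of basis functions that are 1 (in value or derivative) at one node and 0 at all others.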

  19. Straight-sided Spline Optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2011-01-01

    and the subject of improving the design. The present paper concentrates on the optimization of splines and the predictions of stress concentrations, which are determined by finite element analysis (FEA). Using design modifications, that do not change the spline load carrying capacity, it is shown that large...

  20. P-Splines Using Derivative Information

    KAUST Repository

    Calderon, Christopher P.

    2010-01-01

    Time series associated with single-molecule experiments and/or simulations contain a wealth of multiscale information about complex biomolecular systems. We demonstrate how a collection of Penalized-splines (P-splines) can be useful in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between curves associated with different SDEs partially reflects noise induced by motion evolving on a slower time scale. P-splines assist in "semiparametrically" estimating nonlinear SDEs in situations where a time-dependent external force is applied to a single-molecule system. The P-splines introduced simultaneously use function and derivative scatterplot information to refine curve estimates. We refer to the approach as the PuDI (P-splines using Derivative Information) method. It is shown how generalized least squares ideas fit seamlessly into the PuDI method. Applications demonstrating how utilizing uncertainty information/approximations along with generalized least squares techniques improve PuDI fits are presented. Although the primary application here is in estimating nonlinear SDEs, the PuDI method is applicable to situations where both unbiased function and derivative estimates are available.
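A minimal sketch of the core PuDI idea, fitting one set of B-spline coefficients to function and derivative scatterplot data jointly, with a small P-spline roughness penalty; the knot vector, smoothing weight, and test function are assumptions, not the paper's setup:

```python
import numpy as np
from scipy.interpolate import BSpline

# Hypothetical data: noisy samples of both f(x) = sin(x) and f'(x) = cos(x)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2 * np.pi, 60)
y = np.sin(x) + 0.02 * rng.standard_normal(x.size)
dy = np.cos(x) + 0.02 * rng.standard_normal(x.size)

# Cubic B-spline basis on a clamped uniform knot vector
k = 3
t = np.r_[[0.0] * k, np.linspace(0.0, 2 * np.pi, 15), [2 * np.pi] * k]
n = len(t) - k - 1

# Design matrices: columns evaluate each basis function (and its derivative) at x
I = np.eye(n)
B = np.column_stack([BSpline(t, I[j], k)(x) for j in range(n)])
D = np.column_stack([BSpline(t, I[j], k).derivative()(x) for j in range(n)])

# P-spline roughness penalty: squared second differences of the coefficients
D2 = np.diff(I, n=2, axis=0)
lam = 1e-3

# Stack function and derivative information; solve the penalized normal equations
A = np.vstack([B, D])
b = np.concatenate([y, dy])
coef = np.linalg.solve(A.T @ A + lam * D2.T @ D2, A.T @ b)
fit = BSpline(t, coef, k)

print(np.max(np.abs(fit(x) - np.sin(x))))
```

Replacing the plain normal equations with a weighted solve (weights from uncertainty estimates) gives the generalized least squares variant the abstract mentions.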

  1. Joint surface modeling with thin-plate splines.

    Science.gov (United States)

    Boyd, S K; Ronsky, J L; Lichti, D D; Salkauskas, K; Chapman, M A; Salkauskas, D

    1999-10-01

    Mathematical joint surface models based on experimentally determined data points can be used to investigate joint characteristics such as curvature, congruency, cartilage thickness and joint contact areas, as well as to provide geometric information well suited for finite element analysis. Commonly, surface modeling methods are based on B-splines, which involve tensor products. These methods have had success; however, they are limited by the complex organizational aspect of working with surface patches and by the difficulty of modeling unordered, scattered experimental data points. An alternative method for mathematical joint surface modeling is presented based on the thin-plate spline (TPS). It has the advantage that it does not involve surface patches and can model scattered data points without experimental data preparation. An analytical surface was developed and modeled with the TPS to quantify its interpolating and smoothing characteristics. Some limitations of the TPS include discontinuity of curvature at exactly the experimental surface data points, and numerical problems in dealing with data sets in excess of 2000 points. However, suggestions for overcoming these limitations are presented. To test the TPS with real experimental data, the patellofemoral joint of a cat was measured with multistation digital photogrammetry and modeled using the TPS to determine cartilage thicknesses and surface curvature. The cartilage thickness distribution ranged from 100 to 550 microns on the patella, and from 100 to 300 microns on the femur. It was found that the TPS was an effective tool for modeling joint surfaces because no preparation of the experimental data points was necessary, and the resulting unique function representing the entire surface does not involve surface patches. A detailed algorithm is presented for implementation of the TPS.
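The patch-free, scattered-data character of the TPS can be illustrated with SciPy's RBFInterpolator, whose thin-plate-spline kernel yields one global surface function; the sampled surface below is a stand-in for digitized joint data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical scattered surface samples, e.g. digitized joint surface points
rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(200, 2))             # scattered (x, y) locations
z = np.sin(3 * pts[:, 0]) * np.cos(3 * pts[:, 1])  # measured heights

# Thin-plate spline surface: one global function, no surface patches needed
tps = RBFInterpolator(pts, z, kernel='thin_plate_spline')

# With smoothing=0 (the default) the TPS interpolates the data exactly
print(np.allclose(tps(pts), z))  # True

# smoothing > 0 trades exact interpolation for a smoother surface
tps_smooth = RBFInterpolator(pts, z, kernel='thin_plate_spline', smoothing=1e-3)
```

The `neighbors` parameter of `RBFInterpolator` is one practical answer to the >2000-point conditioning problem noted in the abstract: it switches to local evaluation instead of one dense global solve.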

  2. MRI non-uniformity correction through interleaved bias estimation and B-spline deformation with a template.

    Science.gov (United States)

    Fletcher, E; Carmichael, O; Decarli, C

    2012-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.

  4. From symplectic integrator to Poincare map: Spline expansion of a map generator in Cartesian coordinates

    International Nuclear Information System (INIS)

    Warnock, R.L.; Ellison, J.A.; Univ. of New Mexico, Albuquerque, NM

    1997-08-01

    Data from orbits of a symplectic integrator can be interpolated so as to construct an approximation to the generating function of a Poincare map. The time required to compute an orbit of the symplectic map induced by the generator can be much less than the time to follow the same orbit by symplectic integration. The construction has been carried out previously for full-turn maps of large particle accelerators, and a big saving in time (for instance a factor of 60) has been demonstrated. A shortcoming of the work to date arose from the use of canonical polar coordinates, which precluded map construction in small regions of phase space near coordinate singularities. This paper shows that Cartesian coordinates can also be used, thus avoiding singularities. The generator is represented in a basis of tensor product B-splines. Under weak conditions the spline expansion converges uniformly as the mesh is refined, approaching the exact generator of the Poincare map as defined by the symplectic integrator, in some parallelepiped of phase space centered at the origin.

  5. Multivariate interpolation

    Directory of Open Access Journals (Sweden)

    Pakhnutov I.A.

    2017-04-01

    Full Text Available The paper deals with iterative interpolation methods in the form of similar recursive procedures defined by a class of simple functions (the interpolation basis), not necessarily real-valued. These basis functions are of essentially arbitrary type, chosen at the discretion of the user. The studied interpolant construction shows the virtue of versatility: it may be used in a wide range of vector spaces endowed with a scalar product, with no dimension restrictions, in both Euclidean and Hilbert spaces. The choice of basis interpolation functions is as wide as possible, since it is subject only to nonessential restrictions. The interpolation method considered coincides in particular with traditional polynomial interpolation (mimicking the Lagrange method) in the real one-dimensional case, or with rational, exponential, etc. interpolation in other cases. As an iterative process the interpolation is, in fact, fairly flexible and allows one procedure to change the type of interpolation depending on the node number in a given set. Linear interpolation basis options (and perhaps some nonlinear ones) allow interpolation in noncommutative spaces, such as spaces of nondegenerate matrices; the interpolated data can also be relevant elements of vector spaces over an arbitrary numeric field. By way of illustration, the author gives examples of interpolation on the real plane, in a separable Hilbert space, and in the space of square matrices with vector-valued source data.
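In the real one-dimensional case the recursive construction reduces to Lagrange interpolation; Neville's algorithm is a standard recursion of exactly this flavor, shown here as an illustration rather than the paper's general vector-space scheme:

```python
import numpy as np

def neville(xs, ys, x):
    """Recursive polynomial interpolation at the point x.

    Builds the Lagrange interpolant through (xs[i], ys[i]) by repeatedly
    combining lower-order interpolants, mirroring the recursive procedures
    of the abstract in the real one-dimensional case.
    """
    p = list(ys)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - level):
            # Combine two overlapping lower-order interpolants
            p[i] = ((x - xs[i + level]) * p[i] + (xs[i] - x) * p[i + 1]) \
                   / (xs[i] - xs[i + level])
    return p[0]

# Degree-3 example: four nodes reproduce f(x) = x**3 exactly
xs = [0.0, 1.0, 2.0, 3.0]
ys = [xi**3 for xi in xs]
print(neville(xs, ys, 2.5))  # 15.625
```

Because each level only combines values produced at the previous level, the same recursion carries over to data living in any vector space with the required operations, which is the generalization the paper pursues.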

  6. Geostatistical interpolation model selection based on ArcGIS and spatio-temporal variability analysis of groundwater level in piedmont plains, northwest China.

    Science.gov (United States)

    Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong

    2016-01-01

    Based on geostatistical theory and the ArcGIS geostatistical module, data from 30 groundwater-level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used for interpolating the groundwater level between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R²) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Since the land use has changed, the groundwater level also shows a temporal variation: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the decrease of farmland area and the development of water-saving irrigation reduce the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.

  7. Optimization and comparison of three spatial interpolation methods for electromagnetic levels in the AM band within an urban area.

    Science.gov (United States)

    Rufo, Montaña; Antolín, Alicia; Paniagua, Jesús M; Jiménez, Antonio

    2018-04-01

    A comparative study was made of three methods of interpolation - inverse distance weighting (IDW), spline and ordinary kriging - after optimization of their characteristic parameters. These interpolation methods were used to represent the electric field levels for three emission frequencies (774 kHz, 900 kHz, and 1107 kHz) and for the electrical stimulation quotient, QE, characteristic of complex electromagnetic environments. Measurements were made with a spectrum analyser in a village in the vicinity of medium-wave radio broadcasting antennas. The accuracy of the models was quantified by comparing their predictions with the levels measured at control points not used to generate the models. The results showed that optimizing the characteristic parameters of each interpolation method allows any of them to be used. However, the best results in terms of the regression coefficient between each model's predictions and the actual control-point field measurements were obtained with the IDW method. Copyright © 2018 Elsevier Inc. All rights reserved.
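A minimal sketch of IDW with its characteristic parameter (the power) optimized by leave-one-out cross-validation; the synthetic field and the candidate powers are assumptions, not the survey data of the study:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0, eps=1e-12):
    """Inverse distance weighted prediction at xy_new from scattered observations."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # eps avoids division by zero at obs points
    return (w @ z_obs) / w.sum(axis=1)

# Hypothetical field measurements (e.g. E-field levels at survey points)
rng = np.random.default_rng(2)
xy = rng.uniform(0, 1, size=(80, 2))
z = np.exp(-5 * ((xy[:, 0] - 0.5) ** 2 + (xy[:, 1] - 0.5) ** 2))

# Optimize the characteristic parameter (the power) by leave-one-out CV
def loo_rmse(power):
    errs = [z[i] - idw(np.delete(xy, i, 0), np.delete(z, i), xy[i:i + 1], power)[0]
            for i in range(len(z))]
    return np.sqrt(np.mean(np.square(errs)))

best = min([1.0, 2.0, 3.0, 4.0], key=loo_rmse)
print("best IDW power:", best)
```

The same held-out-error criterion extends directly to the spline and kriging parameters, which is essentially the optimization the abstract describes.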

  8. On Characterization of Quadratic Splines

    DEFF Research Database (Denmark)

    Chen, B. T.; Madsen, Kaj; Zhang, Shuzhong

    2005-01-01

    that the representation can be refined in a neighborhood of a non-degenerate point and a set of non-degenerate minimizers. Based on these characterizations, many existing algorithms for specific convex quadratic splines are also finite convergent for a general convex quadratic spline. Finally, we study the relationship...... between the convexity of a quadratic spline function and the monotonicity of the corresponding LCP problem. It is shown that, although both conditions lead to easy solvability of the problem, they are different in general....

  9. Image pre-filtering for measurement error reduction in digital image correlation

    Science.gov (United States)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    error. The cubic B-spline interpolator can achieve comparable efficiency to the bicubic interpolator, while the quintic B-spline interpolator requires about 1.5 times the running time.

  10. Schwarz and multilevel methods for quadratic spline collocation

    Energy Technology Data Exchange (ETDEWEB)

    Christara, C.C. [Univ. of Toronto, Ontario (Canada); Smith, B. [Univ. of California, Los Angeles, CA (United States)

    1994-12-31

    Smooth spline collocation methods offer an alternative to Galerkin finite element methods, as well as to Hermite spline collocation methods, for the solution of linear elliptic Partial Differential Equations (PDEs). Recently, spline collocation methods with optimal order of convergence have been developed for certain degrees of splines. Convergence proofs for smooth spline collocation methods are generally more difficult than for Galerkin finite elements or Hermite spline collocation, and they require stronger assumptions and more restrictions. However, numerical tests indicate that spline collocation methods are applicable to a wider class of problems than the analysis requires, and are very competitive with finite element methods with respect to efficiency. The authors will discuss Schwarz and multilevel methods for the solution of elliptic PDEs using quadratic spline collocation, and compare these with domain decomposition methods using substructuring. Numerical tests on a variety of parallel machines will also be presented. In addition, preliminary convergence analysis using Schwarz and/or maximum principle techniques will be presented.

  11. Flip-avoiding interpolating surface registration for skull reconstruction.

    Science.gov (United States)

    Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye

    2018-03-30

    Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.

  12. B-spline Collocation with Domain Decomposition Method

    International Nuclear Information System (INIS)

    Hidayat, M I P; Parman, S; Ariwahjoedi, B

    2013-01-01

    A global B-spline collocation method has been previously developed and successfully implemented by the present authors for solving elliptic partial differential equations in arbitrary complex domains. However, the global B-spline approximation, which is simply reduced to a Bezier approximation of any degree p with C0 continuity, has led to the use of B-spline bases of high order in order to achieve high accuracy. The need for B-spline bases of high order in the global method would be more prominent in domains of large dimension. With the increased number of collocation points, it may also lead to ill-conditioning problems. In this study, the overlapping domain decomposition of the multiplicative Schwarz algorithm is combined with the global method. Our objective is two-fold: to improve the accuracy with the combination technique, and to investigate the influence of the combination technique on the B-spline basis orders required for a given accuracy. It is shown that the combination method produced higher accuracy with B-spline bases of much lower order than needed in the initial method. Hence, the approximation stability of the B-spline collocation method was also increased.

  13. Comparison of the THYC and FLICA-3M codes by the pseudo-cubic thin-plate method

    International Nuclear Information System (INIS)

    Banner, D.; Crecy, F. de.

    1993-06-01

    The pseudo-cubic spline method (PCSM) is a statistical tool developed by the CEA. It is designed to analyse experimental points, in particular thermal-hydraulic data. Predictors of the occurrence of critical heat flux (CHF) are obtained by using spline functions. In this paper, predictors have been computed from the same CHF databases by using two different flow analyses to derive local thermal-hydraulic variables at the CHF location: CEA's FLICA-3M represents rod bundles by interconnected subchannels, whereas EDF's THYC code uses a porous 3D approach. First, the PCSM is briefly presented, as well as the two codes studied here. Then, the comparison methodology is explained in order to show that advanced analysis of thermal-hydraulic codes can be achieved with the PCSM. (authors). 6 figs., 2 tabs., 5 refs

  14. Applications of the spline filter for areal filtration

    International Nuclear Information System (INIS)

    Tong, Mingsi; Zhang, Hao; Ott, Daniel; Chu, Wei; Song, John

    2015-01-01

    This paper proposes a general use isotropic areal spline filter. This new areal spline filter can achieve isotropy by approximating the transmission characteristic of the Gaussian filter. It can also eliminate the effect of void areas using a weighting factor, and resolve end-effect issues by applying new boundary conditions, which replace the first order finite difference in the traditional spline formulation. These improvements make the spline filter widely applicable to 3D surfaces and extend the applications of the spline filter in areal filtration. (technical note)

  15. Construction of local integro quintic splines

    Directory of Open Access Journals (Sweden)

    T. Zhanlav

    2016-06-01

    Full Text Available In this paper, we show that integro quintic splines can be constructed locally without solving any systems of equations. The new construction does not require any additional end conditions. By virtue of these advantages the proposed algorithm is easy to implement and effective. At the same time, the local integro quintic splines possess approximation properties as good as those of the integro quintic splines. In this paper, we have proved that our local integro quintic spline has superconvergence properties at the knots for the first and third derivatives. The orders of convergence at the knots are six (not five) for the first derivative and four (not three) for the third derivative.

  16. Accurate e⁻-He cross sections below 19 eV

    Energy Technology Data Exchange (ETDEWEB)

    Nesbet, R K [International Business Machines Corp., San Jose, CA (USA). Research Lab.

    1979-04-14

    Variational calculations of e⁻-He s- and p-wave phase shifts, together with the Born formula for higher partial waves, are used to give the scattering amplitude to within one per cent estimated accuracy for energies less than 19 eV. Coefficients are given of cubic spline fits to auxiliary functions that provide smooth interpolation of the estimated accurate phase shifts. The data given here make it possible to obtain the differential scattering cross section over the energy range considered from simple formulae.
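The kind of cubic spline fit described, tabulated auxiliary values interpolated smoothly in energy, can be sketched with SciPy; the tabulated values below are made up for illustration, not the paper's coefficients:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical table of a smooth auxiliary function versus energy (a stand-in
# for the tabulated phase-shift auxiliary functions of the abstract)
E = np.linspace(0.1, 19.0, 12)           # energies (eV)
delta = 0.3 / np.sqrt(E) * np.arctan(E)  # made-up smooth auxiliary values

cs = CubicSpline(E, delta)

# Smooth interpolation at any intermediate energy
print(cs(7.3))

# The spline reproduces the tabulated values at the knots
print(np.allclose(cs(E), delta))  # True
```

Publishing the spline coefficients, rather than a dense table, is what lets readers evaluate the cross section "from simple formulae" at any energy in range.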

  17. Multidimensional splines for modeling FET nonlinearities

    Energy Technology Data Exchange (ETDEWEB)

    Barby, J A

    1986-01-01

    Circuit simulators like SPICE and timing simulators like MOTIS are used extensively for critical path verification of integrated circuits. MOSFET model evaluation dominates the run time of these simulators. Changes in technology result in costly updates, since modifications require reprogramming of the functions and their derivatives. The computational cost of MOSFET models can be reduced by using multidimensional polynomial splines. Since simulators based on the Newton-Raphson algorithm require the function and its first derivative, quadratic splines are sufficient for this purpose. The cost of updating the MOSFET model due to technology changes is greatly reduced, since the splines are derived from a set of points. Crucial for the convergence speed of simulators is the fact that the MOSFET characteristic equations are monotonic; this must be maintained by any simulation model. The splines the author designed do maintain monotonicity.

  18. On the feasibility to integrate low-cost MEMS accelerometers and GNSS receivers

    Science.gov (United States)

    Benedetti, Elisa; Dermanis, Athanasios; Crespi, Mattia

    2017-06-01

    The aim of this research was to investigate the feasibility of merging the benefits offered by low-cost GNSS and MEMS accelerometer technology, in order to promote the diffusion of low-cost monitoring solutions. A merging approach was set up at the level of the combination of kinematic results (velocities and displacements) coming from the two kinds of sensors, whose observations were processed separately, following the so-called loose integration, which is much simpler and more flexible given the possibility of easily changing the combined sensors. First, the issues related to the differences in reference systems, time systems, and measurement rates and epochs between the two sensors were addressed. An approach was designed and tested to transform the outcomes from GPS and MEMS into unique reference and time systems and to interpolate the usually (much) denser MEMS observations to common (GPS) epochs. The proposed approach was limited to a time-independent (constant) orientation of the MEMS reference system with respect to the GPS one. Then, a data fusion approach based on the Discrete Fourier Transform and cubic spline interpolation was proposed for both velocities and displacements: the MEMS- and GPS-derived solutions are first separated by a rectangular filter in the spectral domain, and then back-transformed and combined through a cubic spline interpolation. Accuracies of around 5 mm for slow and fast displacements and better than 2 mm/s for velocities were assessed. The obtained solution paves the way to a powerful and appealing use of low-cost single-frequency GNSS receivers and MEMS accelerometers for structural and ground monitoring applications. Some additional remarks and prospects for future investigations complete the paper.
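The proposed fusion, cubic spline resampling of the denser MEMS series to common epochs followed by a rectangular filter in the spectral domain, can be sketched as follows; the signals, noise level, and 0.5 Hz cutoff are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical displacement series on common epochs after time alignment
fs = 10.0                              # common (GPS) sampling rate, Hz
t = np.arange(0.0, 20.0, 1 / fs)
true = np.sin(2 * np.pi * 0.1 * t) + 0.05 * np.sin(2 * np.pi * 2.0 * t)

# MEMS runs much denser; a cubic spline brings it to the common (GPS) epochs
t_dense = np.arange(0.0, 20.0, 1 / (10 * fs))
mems_dense = (np.sin(2 * np.pi * 0.1 * t_dense)
              + 0.05 * np.sin(2 * np.pi * 2.0 * t_dense))
mems = CubicSpline(t_dense, mems_dense)(t)

# GPS is accurate at low frequency but noisy
gps = true + 0.02 * np.random.default_rng(3).standard_normal(t.size)

# Rectangular filter in the spectral domain: GPS supplies the low band,
# MEMS the high band; the 0.5 Hz cutoff is an assumed design choice
f = np.fft.rfftfreq(t.size, 1 / fs)
low = f < 0.5
fused = np.fft.irfft(np.where(low, np.fft.rfft(gps), np.fft.rfft(mems)),
                     n=t.size)

print(np.max(np.abs(fused - true)))
```

The rectangular mask keeps each sensor only in the band where it performs well, which is the essence of the loose-integration fusion described above.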

  19. Hermitian Mindlin Plate Wavelet Finite Element Method for Load Identification

    Directory of Open Access Journals (Sweden)

    Xiaofeng Xue

    2016-01-01

    Full Text Available A new Hermitian Mindlin plate wavelet element is proposed. The two-dimensional Hermitian cubic spline interpolation wavelet is substituted into the finite element functions to construct the frequency response function (FRF). The method uses a system's FRF and response spectra to calculate load spectra and then derives the loads in the time domain via the inverse fast Fourier transform. By simulating different excitation cases, Hermitian cubic spline wavelets on the interval (HCSWI) finite elements are used for inverse load identification in the Mindlin plate. The singular value decomposition (SVD) method is adopted to solve the ill-posed inverse problem. Compared with ANSYS results, the HCSWI Mindlin plate element can accurately identify the applied load. Numerical results show that the algorithm of the HCSWI Mindlin plate element is effective. The accuracy of HCSWI can be verified by comparing the FRF of the HCSWI and ANSYS elements with the experimental data. The experiment proves that load identification with the HCSWI Mindlin plate is effective and precise, using the FRF and response spectra to calculate the loads.

  20. LOCALLY REFINED SPLINES REPRESENTATION FOR GEOSPATIAL BIG DATA

    Directory of Open Access Journals (Sweden)

    T. Dokken

    2015-08-01

    Full Text Available When viewed from a distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation for topography and bathymetry. The recent introduction of Locally Refined B-splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU fp7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing the bathymetry of the smooth sea and ocean floor. A drastic reduction is demonstrated in the bulk of the data representation compared to the size of the input point clouds. The representation is very well suited for exploiting the power of GPUs for visualization, as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks, as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection, as new sensor data can efficiently be compared to the compact LR B-spline representation.

  1. Adapting Better Interpolation Methods to Model Amphibious MT Data Along the Cascadian Subduction Zone.

    Science.gov (United States)

    Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.

    2016-12-01

    Magnetotellurics (MT) is an electromagnetic technique used to model the electrical conductivity structure of the Earth's interior. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from trench to mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting, in particular, seafloor stations due to the strong nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions for the interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, where the eight nearest electric field estimates are each given weights determined by the technique, a kind of weighted average. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity.
We are also adapting some of the techniques discussed in Shantsev et al (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian that are used to generate a new

  2. Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis

    Science.gov (United States)

    Kurtulus, Bedri; Flipo, Nicolas

    2012-01-01

    The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro-fuzzy inference system) for interpolating hydraulic head in a 40-km² agricultural watershed of the Seine basin (France). The inputs of ANFIS are the Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration one (55%), a training one (27%), and a test one (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, generalized bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among the 16. Each is then used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model, with four triangular MF, is performed on the interpolation grid, which shows that ANFIS remains stable to error propagation, with a higher sensitivity to soil elevation.

  3. Fundamental Frequency Estimation of the Speech Signal Compressed by MP3 Algorithm Using PCC Interpolation

    Directory of Open Access Journals (Sweden)

    MILIVOJEVIC, Z. N.

    2010-02-01

Full Text Available In this paper the fundamental frequency estimation results for MP3-modeled speech signals are analyzed. The fundamental frequency was estimated by the Picking-Peaks algorithm with Parametric Cubic Convolution (PCC) interpolation. The efficiency of PCC was tested for the Catmull-Rom, Greville, and Greville two-parametric kernels. Based on the MSE, the window that gives optimal results was chosen.
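The Catmull-Rom kernel named above is the classic cubic convolution kernel (Keys' kernel with parameter a = -1/2). A minimal sketch of 1-D PCC interpolation on uniformly spaced samples, assuming the evaluation point lies in the interior so that four neighbors exist:

```python
import math

def catmull_rom_kernel(s):
    """Catmull-Rom cubic convolution kernel (Keys' kernel, a = -1/2)."""
    s = abs(s)
    if s <= 1.0:
        return 1.5 * s**3 - 2.5 * s**2 + 1.0
    if s <= 2.0:
        return -0.5 * s**3 + 2.5 * s**2 - 4.0 * s + 2.0
    return 0.0

def pcc_interpolate(samples, x):
    """Interpolate uniformly spaced samples at fractional position x
    by convolving the four nearest samples with the kernel."""
    k = math.floor(x)
    return sum(samples[m] * catmull_rom_kernel(x - m)
               for m in range(k - 1, k + 3))
```

This choice of kernel reproduces quadratic polynomials exactly, which is the property that makes it attractive for refining peak positions between FFT bins.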

  4. Recursive B-spline approximation using the Kalman filter

    Directory of Open Access Journals (Sweden)

    Jens Jauch

    2017-02-01

Full Text Available This paper proposes a novel recursive B-spline approximation (RBA) algorithm which approximates an unbounded number of data points with a B-spline function and achieves lower computational effort than previous algorithms. Conventional recursive algorithms based on the Kalman filter (KF) restrict the approximation to a bounded and predefined interval. Conversely, RBA includes a novel shift operation that makes it possible to shift the estimated B-spline coefficients in the state vector of a KF. This allows the interval in which the B-spline function approximates data points to be adapted at run-time.
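The conventional KF-on-a-fixed-interval baseline that RBA improves on can be sketched as follows. This is a simplified illustration without the paper's shift operation; the uniform cubic basis, knot placement, and noise parameters are all assumptions.

```python
def bspline3(t):
    """Uniform cubic B-spline basis centered at 0 (support [-2, 2])."""
    t = abs(t)
    if t < 1.0:
        return 2.0 / 3.0 - t * t + 0.5 * t ** 3
    if t < 2.0:
        return (2.0 - t) ** 3 / 6.0
    return 0.0

class SplineKF:
    """Kalman-filter estimation of B-spline coefficients on a fixed
    knot interval (no shift operation, unlike RBA)."""

    def __init__(self, centers, p0=100.0, r=1e-3):
        n = len(centers)
        self.centers = list(centers)
        self.c = [0.0] * n                      # coefficient estimates
        self.P = [[p0 if i == j else 0.0 for j in range(n)]
                  for i in range(n)]            # estimate covariance
        self.r = r                              # measurement noise variance

    def update(self, x, y):
        """Scalar KF measurement update; H row = basis values at x."""
        n = len(self.c)
        h = [bspline3(x - cj) for cj in self.centers]
        Ph = [sum(self.P[i][j] * h[j] for j in range(n)) for i in range(n)]
        s = sum(h[i] * Ph[i] for i in range(n)) + self.r
        K = [v / s for v in Ph]                 # Kalman gain
        innov = y - sum(h[i] * self.c[i] for i in range(n))
        for i in range(n):
            self.c[i] += K[i] * innov
        for i in range(n):
            for j in range(n):
                self.P[i][j] -= K[i] * Ph[j]    # P stays symmetric

    def predict(self, x):
        """Evaluate the current B-spline estimate at x."""
        return sum(c * bspline3(x - m) for c, m in zip(self.c, self.centers))
```

RBA's contribution is precisely to relax the fixed `centers` list above: shifting coefficients out of the state vector lets the approximation interval follow the data stream.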

  5. A direct method for numerical solution of a class of nonlinear Volterra integro-differential equations and its application to the nonlinear fission and fusion reactor kinetics

    International Nuclear Information System (INIS)

    Nakahara, Yasuaki; Ise, Takeharu; Kobayashi, Kensuke; Itoh, Yasuyuki

    1975-12-01

A new method has been developed for the numerical solution of a class of nonlinear Volterra integro-differential equations with quadratic nonlinearity. After dividing the domain of the variable into subintervals, piecewise approximations are applied in the subintervals. The equation is first integrated over a subinterval to obtain the piecewise equation, to which six approximate treatments are applied: fully explicit, fully implicit, Crank-Nicolson, linear interpolation, quadratic spline, and cubic spline. The numerical solution at each time step is obtained directly as a positive root of the resulting algebraic quadratic equation. Point reactor kinetics with a ramp reactivity insertion, linear temperature feedback, and delayed neutrons can be described by a nonlinear Volterra integro-differential equation of this type. The algorithm is applied to the Argonne benchmark problem and to a model problem for a fast reactor without delayed neutrons. The fully implicit method has been found to be unconditionally stable in the sense that it always gives positive real roots. The cubic spline method is divergent, and the other four methods are intermediate. From estimates of the stability, convergence, accuracy, and CPU time, it is concluded that the Crank-Nicolson method is best, followed closely by the linear interpolation method. The possibility of applying the algorithm to fusion reactor kinetics, in the form of a nonlinear partial differential equation, is also discussed. (auth.)
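The "direct positive root" idea can be illustrated on a model equation with the same quadratic nonlinearity, a logistic-type ODE y' = a·y - b·y², standing in for the paper's full Volterra equation. With Crank-Nicolson, each step reduces to an algebraic quadratic in the new value, and the positive root is taken:

```python
import math

def crank_nicolson_quadratic(y0, a, b, dt, steps):
    """Integrate y' = a*y - b*y**2 by Crank-Nicolson.  Each step gives
    A*y1**2 + B*y1 + C = 0 for the new value y1; the positive root is
    taken directly, with no inner Newton iteration."""
    y = y0
    for _ in range(steps):
        A = 0.5 * dt * b
        B = 1.0 - 0.5 * dt * a
        C = -(y + 0.5 * dt * (a * y - b * y * y))
        # C < 0 for y > 0, so the discriminant exceeds B**2 and the
        # "+" root is guaranteed positive.
        y = (-B + math.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)
    return y
```

For y0 = 0.1 and a = b = 1 this tracks the exact logistic solution 1/(1 + 9·exp(-t)) to second-order accuracy in dt, consistent with the Crank-Nicolson method being ranked best above.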

  6. Optimization of straight-sided spline design

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2011-01-01

    and the subject of improving the design. The present paper concentrates on the optimization of splines and the predictions of stress concentrations, which are determined by finite element analysis (FEA). Using different design modifications, that do not change the spline load carrying capacity, it is shown...

  7. RGB color calibration for quantitative image analysis: the "3D thin-plate spline" warping approach.

    Science.gov (United States)

    Menesatti, Paolo; Angelini, Claudio; Pallottino, Federico; Antonucci, Francesca; Aguzzi, Jacopo; Costa, Corrado

    2012-01-01

In recent years the need to numerically define color by its coordinates in n-dimensional space has increased strongly. Colorimetric calibration is fundamental in food processing and other biological disciplines for quantitatively comparing sample colors during workflows involving many devices. Several software programs are available to perform standardized colorimetric procedures, but they are often too imprecise for scientific purposes. In this study, we applied the Thin-Plate Spline interpolation algorithm to calibrate colors in sRGB space (the corresponding Matlab code is reported in the Appendix). This was compared with two other approaches: the first based on a commercial calibration system (ProfileMaker) and the second on a Partial Least Squares analysis. Moreover, to explore device variability and resolution, two different cameras were adopted, and for each sensor three consecutive pictures were acquired under four different light conditions. According to our results, the Thin-Plate Spline approach achieved very high calibration efficiency, opening up in-field color quantification not only in food sciences but also in other biological disciplines. These results are of great importance for scientific color evaluation when lighting conditions are not controlled. Moreover, the approach allows the use of low-cost instruments while still returning scientifically sound quantitative data.

  8. Fine-granularity inference and estimations to network traffic for SDN.

    Directory of Open Access Journals (Sweden)

    Dingde Jiang

Full Text Available An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end traffic matrix is a challenging problem, and attaining the traffic matrix in high-speed networks for SDN is prohibitively difficult. This paper investigates how to estimate and recover the end-to-end network traffic matrix at fine time granularity from sampled traffic traces, which is a hard inverse problem. Different from previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, cubic spline interpolation is used to obtain smooth reconstruction values. To attain an accurate end-to-end traffic matrix at fine time granularity, we perform a weighted geometric average of the two interpolation results. The simulation results show that our approaches are feasible and effective.
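The final combination step, a weighted geometric average of the fractal and cubic-spline reconstructions, can be sketched in one line. The weight w is treated here as a free parameter; the paper derives it from the two methods' relative accuracy.

```python
def weighted_geometric_average(a, b, w):
    """Combine two positive traffic estimates a and b with a weight
    w in [0, 1]: a**w * b**(1 - w).  With w = 0.5 this is the plain
    geometric mean."""
    assert a > 0.0 and b > 0.0 and 0.0 <= w <= 1.0
    return (a ** w) * (b ** (1.0 - w))
```

Unlike an arithmetic mean, the geometric mean keeps the combined estimate positive and damps the influence of a single overshooting reconstruction, which matters for bursty traffic volumes.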

  9. Fine-granularity inference and estimations to network traffic for SDN.

    Science.gov (United States)

    Jiang, Dingde; Huo, Liuwei; Li, Ya

    2018-01-01

An end-to-end network traffic matrix is significantly helpful for network management and for Software Defined Networks (SDN). However, inferring and estimating the end-to-end traffic matrix is a challenging problem, and attaining the traffic matrix in high-speed networks for SDN is prohibitively difficult. This paper investigates how to estimate and recover the end-to-end network traffic matrix at fine time granularity from sampled traffic traces, which is a hard inverse problem. Different from previous methods, fractal interpolation is used to reconstruct the finer-granularity network traffic. Then, cubic spline interpolation is used to obtain smooth reconstruction values. To attain an accurate end-to-end traffic matrix at fine time granularity, we perform a weighted geometric average of the two interpolation results. The simulation results show that our approaches are feasible and effective.

  10. Spatial Interpolation of Daily Rainfall Data for Local Climate Impact Assessment over Greater Sydney Region

    Directory of Open Access Journals (Sweden)

    Xihua Yang

    2015-01-01

Full Text Available This paper presents spatial interpolation techniques to produce finer-scale daily rainfall data from regional climate modeling. Four common interpolation techniques (ANUDEM, Spline, IDW, and Kriging) were compared and assessed against station rainfall data and modeled rainfall. The performance was assessed by the mean absolute error (MAE), mean relative error (MRE), root mean squared error (RMSE), and the spatial and temporal distributions. The results indicate that the Inverse Distance Weighting (IDW) method is slightly better than the other three methods and is also easy to implement in a geographic information system (GIS). The IDW method was then used to produce forty-year (1990–2009 and 2040–2059) time series rainfall data at daily, monthly, and annual time scales at a ground resolution of 100 m for the Greater Sydney Region (GSR). The downscaled daily rainfall data have been further utilized to predict rainfall erosivity and soil erosion risk and their future changes in the GSR to support assessment and planning of climate change impact and adaptation at the local scale.
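The IDW method that performed best here can be sketched in a few lines; the power parameter and sample layout below are illustrative, not the study's configuration.

```python
def idw(points, values, x, y, power=2.0):
    """Inverse distance weighted estimate at (x, y) from scattered
    samples.  Returns the sample value exactly at a sample location."""
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return v                      # exact at a sample point
        w = d2 ** (-power / 2.0)          # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den
```

Because the weights are positive and sum to one after normalization, IDW estimates are always bounded by the minimum and maximum of the station values, one reason it behaves robustly for rainfall fields.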

  11. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    Energy Technology Data Exchange (ETDEWEB)

    Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)

    2016-01-15

We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, interpolation based on such data may not predict reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on fitting quality, with emphasis on ELF mesh optimization procedures, and argue that the optimal fitting should be based on a preliminary log–log scaling transform of the data, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piecewise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not in between two adjacent grid points. The proposed technique gives the most accurate results, and its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
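Steffen's monotonicity-preserving slope formula, applied to the (transformed) samples, can be sketched as below. The endpoint treatment here (one-sided secants) is a simplification of Steffen's original boundary formula, and the evaluation routine is a standard cubic Hermite.

```python
def steffen_slopes(x, y):
    """First derivatives at the knots by Steffen's (1990) formula,
    which guarantees a monotone interpolant on monotone data."""
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    s = [(y[i + 1] - y[i]) / h[i] for i in range(n - 1)]
    d = [0.0] * n
    d[0], d[-1] = s[0], s[-1]            # simplified endpoint slopes
    for i in range(1, n - 1):
        p = (s[i - 1] * h[i] + s[i] * h[i - 1]) / (h[i - 1] + h[i])
        if s[i - 1] * s[i] <= 0.0:
            d[i] = 0.0                   # local extremum at the knot
        else:
            sign = 1.0 if s[i] > 0.0 else -1.0
            d[i] = 2.0 * sign * min(abs(s[i - 1]), abs(s[i]), 0.5 * abs(p))
    return d

def steffen_eval(x, y, d, t):
    """Evaluate the piecewise cubic Hermite interpolant at t."""
    i = max(j for j in range(len(x) - 1) if x[j] <= t) if t > x[0] else 0
    h = x[i + 1] - x[i]
    u = (t - x[i]) / h
    h00 = 2 * u**3 - 3 * u**2 + 1
    h10 = u**3 - 2 * u**2 + u
    h01 = -2 * u**3 + 3 * u**2
    h11 = u**3 - u**2
    return h00 * y[i] + h10 * h * d[i] + h01 * y[i + 1] + h11 * h * d[i + 1]
```

For the log–log workflow described above, one would feed (ln E, ln ELF) pairs to these routines and exponentiate the result, which evens out the sample spacing before the monotone fit.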

  12. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    International Nuclear Information System (INIS)

    Maglevanny, I.I.; Smolar, V.A.

    2016-01-01

We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, interpolation based on such data may not predict reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on fitting quality, with emphasis on ELF mesh optimization procedures, and argue that the optimal fitting should be based on a preliminary log–log scaling transform of the data, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piecewise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not in between two adjacent grid points. The proposed technique gives the most accurate results, and its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.

  13. An improved transmutation method for quantitative determination of the components in multicomponent overlapping chromatograms.

    Science.gov (United States)

    Shao, Xueguang; Yu, Zhengliang; Ma, Chaoxiong

    2004-06-01

An improved method is proposed for the quantitative determination of multicomponent overlapping chromatograms based on a known transmutation method. To overcome the main limitation of the transmutation method, the oscillation generated in the transmutation process, two techniques were adopted: wavelet transform smoothing and cubic spline interpolation for reducing data points. A new criterion was also developed. Using the proposed algorithm, the oscillation can be suppressed effectively, and quantitative determination of the components in both simulated and experimental overlapping chromatograms is successfully obtained.

  14. An efficient approach to numerical study of the coupled-BBM system with B-spline collocation method

    Directory of Open Access Journals (Sweden)

Khalid Ali

    2016-11-01

Full Text Available In the present paper, a numerical method is proposed for the numerical solution of a coupled-BBM system with appropriate initial and boundary conditions, using a collocation method with cubic trigonometric B-splines on uniform mesh points. The method is shown to be unconditionally stable using the von Neumann technique. To test accuracy, the L2 and L∞ error norms are computed. Furthermore, the interaction of two and three solitary waves is used to discuss the behavior of the solitary waves after the interaction. These results show that the technique introduced here is easy to apply. The nonlinear term is linearized.

  15. A robust method of thin plate spline and its application to DEM construction

    Science.gov (United States)

    Chen, Chuanfa; Li, Yanyan

    2012-11-01

In order to avoid the ill-conditioning problem of thin plate spline (TPS) interpolation, the orthogonal least squares (OLS) method was introduced, and a modified OLS (MOLS) was developed. The MOLS version of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also easily compute the weights of the knots by back-substitution. For interpolating large numbers of sampling points, we developed a local TPS-M, in which some neighboring sampling points around the point being estimated are selected for computation. Numerical tests indicate that, irrespective of sampling noise level, the average performance of TPS-M compares favorably with that of smoothing TPS. For the same simulation accuracy, the computational time of TPS-M decreases as the number of sampling points increases. The smooth fitting results on noisy lidar-derived data indicate that TPS-M has an obvious smoothing effect, on par with smoothing TPS. The example of constructing a series of large-scale DEMs, located in Shandong province, China, was employed to comparatively analyze the estimation accuracies of the two versions of TPS and the classical interpolation methods, including inverse distance weighting (IDW), ordinary kriging (OK), and universal kriging with a second-order drift function (UK). Results show that, regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except for smoothing TPS at the finest sampling interval of 20 m and the two versions of kriging at a spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is considered a robust method for DEM construction.
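The global TPS formulation that TPS-M modifies can be sketched directly: solve the standard augmented linear system for the radial weights and the affine part. This naive dense solve is exactly the potentially ill-conditioned O(n³) computation the paper avoids; it is shown only to fix notation.

```python
import math

def tps_fit(pts, vals):
    """Fit a 2-D thin plate spline f(x,y) = sum_i w_i*phi(|p - p_i|)
    + a0 + a1*x + a2*y with phi(r) = r**2 * log(r), by solving the
    (n+3)x(n+3) system [[K, P], [P^T, 0]] [w; a] = [v; 0] with naive
    Gaussian elimination and partial pivoting."""
    n = len(pts)

    def phi(r):
        return 0.0 if r == 0.0 else r * r * math.log(r)

    m = n + 3
    A = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    for i, (xi, yi) in enumerate(pts):
        b[i] = vals[i]
        for j, (xj, yj) in enumerate(pts):
            A[i][j] = phi(math.hypot(xi - xj, yi - yj))
        A[i][n], A[i][n + 1], A[i][n + 2] = 1.0, xi, yi
        A[n][i], A[n + 1][i], A[n + 2][i] = 1.0, xi, yi
    # Forward elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for k in range(col + 1, m):
            f = A[k][col] / A[col][col]
            for c in range(col, m):
                A[k][c] -= f * A[col][c]
            b[k] -= f * b[col]
    # Back-substitution.
    sol = [0.0] * m
    for k in range(m - 1, -1, -1):
        sol[k] = (b[k] - sum(A[k][c] * sol[c] for c in range(k + 1, m))) / A[k][k]
    w, a = sol[:n], sol[n:]

    def f(x, y):
        return (a[0] + a[1] * x + a[2] * y +
                sum(wi * phi(math.hypot(x - px, y - py))
                    for wi, (px, py) in zip(w, pts)))
    return f
```

The fitted surface interpolates every data point exactly; MOLS replaces the full n-point system with a well-conditioned subset of knots.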

  16. ESTIMATION OF GENETIC PARAMETERS IN TROPICARNE CATTLE WITH RANDOM REGRESSION MODELS USING B-SPLINES

    Directory of Open Access Journals (Sweden)

    Joel Domínguez Viveros

    2015-04-01

Full Text Available The objectives were to estimate variance components and direct (h2) and maternal (m2) heritability for growth of Tropicarne cattle, based on a random regression model using B-splines to model the random effects. Information from 12,890 monthly weighings of 1787 calves, from birth to 24 months old, was analyzed. The pedigree included 2504 animals. The random effects model included genetic and permanent environmental effects (direct and maternal) of cubic order, and residuals. The fixed effects included contemporary groups (year–season of weighing), sex, and the covariate age of the cow (linear and quadratic). The B-splines were defined on four knots through the growth period analyzed. Analyses were performed with the software Wombat. The phenotypic and residual variances behaved similarly: from 7 to 12 months of age they showed a negative trend, from birth to 6 months and from 13 to 18 months a positive trend, and after 19 months they remained constant. The m2 estimates were low and near zero, averaging 0.06 over an interval of 0.04 to 0.11; the h2 estimates were also close to zero, averaging 0.10 over an interval of 0.03 to 0.23.

  17. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

© 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  18. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping; Huang, Jianhua Z.; Zhang, Nan

    2015-01-01

© 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  19. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    Science.gov (United States)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

One of the most important topics of interest to investors is stock price changes. Investors with long-term goals are sensitive to stock prices and their changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices in this study. The MARS model, a nonparametric method, is an adaptive regression method suited to problems with high dimensions and several variables; smoothing splines, on which the semi-parametric technique is based, are likewise a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) for predicting stock price with both approaches. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio, and risk) as influential variables for predicting stock price using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast, and P/E ratio) were selected as variables effective in forecasting stock prices.

  20. Measurement and tricubic interpolation of the magnetic field for the OLYMPUS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Bernauer, J.C. [Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA (United States); Diefenbach, J. [Hampton University, Hampton, VA (United States); Elbakian, G. [Alikhanyan National Science Laboratory (Yerevan Physics Institute), Yerevan (Armenia); Gavrilov, G. [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation); Goerrissen, N. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Hasell, D.K.; Henderson, B.S. [Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA (United States); Holler, Y. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Karyan, G. [Alikhanyan National Science Laboratory (Yerevan Physics Institute), Yerevan (Armenia); Ludwig, J. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Marukyan, H. [Alikhanyan National Science Laboratory (Yerevan Physics Institute), Yerevan (Armenia); Naryshkin, Y. [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation); O' Connor, C.; Russell, R.L.; Schmidt, A. [Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA (United States); Schneekloth, U. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Suvorov, K.; Veretennikov, D. [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation)

    2016-07-01

    The OLYMPUS experiment used a 0.3 T toroidal magnetic spectrometer to measure the momenta of outgoing charged particles. In order to accurately determine particle trajectories, knowledge of the magnetic field was needed throughout the spectrometer volume. For that purpose, the magnetic field was measured at over 36,000 positions using a three-dimensional Hall probe actuated by a system of translation tables. We used these field data to fit a numerical magnetic field model, which could be employed to calculate the magnetic field at any point in the spectrometer volume. Calculations with this model were computationally intensive; for analysis applications where speed was crucial, we pre-computed the magnetic field and its derivatives on an evenly spaced grid so that the field could be interpolated between grid points. We developed a spline-based interpolation scheme suitable for SIMD implementations, with a memory layout chosen to minimize space and optimize the cache behavior to quickly calculate field values. This scheme requires only one-eighth of the memory needed to store necessary coefficients compared with a previous scheme (Lekien and Marsden, 2005 [1]). This method was accurate for the vast majority of the spectrometer volume, though special fits and representations were needed to improve the accuracy close to the magnet coils and along the toroidal axis.

  1. Cortical surface registration using spherical thin-plate spline with sulcal lines and mean curvature as features.

    Science.gov (United States)

    Park, Hyunjin; Park, Jun-Sung; Seong, Joon-Kyung; Na, Duk L; Lee, Jong-Min

    2012-04-30

    Analysis of cortical patterns requires accurate cortical surface registration. Many researchers map the cortical surface onto a unit sphere and perform registration of two images defined on the unit sphere. Here we have developed a novel registration framework for the cortical surface based on spherical thin-plate splines. Small-scale composition of spherical thin-plate splines was used as the geometric interpolant to avoid folding in the geometric transform. Using an automatic algorithm based on anisotropic skeletons, we extracted seven sulcal lines, which we then incorporated as landmark information. Mean curvature was chosen as an additional feature for matching between spherical maps. We employed a two-term cost function to encourage matching of both sulcal lines and the mean curvature between the spherical maps. Application of our registration framework to fifty pairwise registrations of T1-weighted MRI scans resulted in improved registration accuracy, which was computed from sulcal lines. Our registration approach was tested as an additional procedure to improve an existing surface registration algorithm. Our registration framework maintained an accurate registration over the sulcal lines while significantly increasing the cross-correlation of mean curvature between the spherical maps being registered. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. BasinVis 1.0: A MATLAB®-based program for sedimentary basin subsidence analysis and visualization

    Science.gov (United States)

    Lee, Eun Young; Novotny, Johannes; Wagreich, Michael

    2016-06-01

    Stratigraphic and structural mapping is important to understand the internal structure of sedimentary basins. Subsidence analysis provides significant insights for basin evolution. We designed a new software package to process and visualize stratigraphic setting and subsidence evolution of sedimentary basins from well data. BasinVis 1.0 is implemented in MATLAB®, a multi-paradigm numerical computing environment, and employs two numerical methods: interpolation and subsidence analysis. Five different interpolation methods (linear, natural, cubic spline, Kriging, and thin-plate spline) are provided in this program for surface modeling. The subsidence analysis consists of decompaction and backstripping techniques. BasinVis 1.0 incorporates five main processing steps; (1) setup (study area and stratigraphic units), (2) loading well data, (3) stratigraphic setting visualization, (4) subsidence parameter input, and (5) subsidence analysis and visualization. For in-depth analysis, our software provides cross-section and dip-slip fault backstripping tools. The graphical user interface guides users through the workflow and provides tools to analyze and export the results. Interpolation and subsidence results are cached to minimize redundant computations and improve the interactivity of the program. All 2D and 3D visualizations are created by using MATLAB plotting functions, which enables users to fine-tune the results using the full range of available plot options in MATLAB. We demonstrate all functions in a case study of Miocene sediment in the central Vienna Basin.

  3. Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.

    Science.gov (United States)

    Petrinović, Davor; Brezović, Marko

    2011-04-01

    We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
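A piecewise-cubic phase-to-sinusoid converter can be sketched with a precomputed segment table. The sketch below uses plain time-domain Hermite coefficients rather than the paper's spectrally optimized spline, so its SFDR will be lower; the segment count and floating-point arithmetic (instead of a fixed-point table) are assumptions for illustration.

```python
import math

def build_psac_table(segments):
    """Precompute per-segment endpoint values and derivatives of
    sin(2*pi*t), t in [0, 1) -- a cubic-Hermite stand-in for a
    spline-based PSAC lookup table."""
    table = []
    for k in range(segments):
        t0, t1 = k / segments, (k + 1) / segments
        table.append((math.sin(2 * math.pi * t0),
                      2 * math.pi * math.cos(2 * math.pi * t0),
                      math.sin(2 * math.pi * t1),
                      2 * math.pi * math.cos(2 * math.pi * t1)))
    return table

def psac_eval(table, t):
    """Map a normalized phase t in [0, 1) to an amplitude by cubic
    Hermite interpolation within the segment containing t."""
    n = len(table)
    k = min(int(t * n), n - 1)
    u = t * n - k                        # local coordinate in [0, 1)
    h = 1.0 / n                          # segment width
    y0, d0, y1, d1 = table[k]
    h00 = 2 * u**3 - 3 * u**2 + 1
    h10 = u**3 - 2 * u**2 + u
    h01 = -2 * u**3 + 3 * u**2
    h11 = u**3 - u**2
    return h00 * y0 + h10 * h * d0 + h01 * y1 + h11 * h * d1
```

With 64 segments the worst-case Hermite error is on the order of h⁴·(2π)⁴/384 ≈ 2×10⁻⁷, illustrating why a small cubic table suffices for a high-resolution PSAC.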

  4. Generation of nuclear data banks through interpolation

    International Nuclear Information System (INIS)

    Castillo M, J.A.

    1999-01-01

Nuclear data bank generation is a process that requires a great amount of resources, both computing and human. Considering that at times it is necessary to create a great number of them, it is convenient to have a reliable tool that generates data banks with fewer resources, in the least possible time, and with a very good approximation. This work presents the results obtained during the development of the INTPOLBI code, used to generate nuclear data banks employing bicubic polynomial interpolation, taking the uranium and gadolinium percentages as independent variables. Two approaches were developed, both applying the finite element method with a single 16-node element to carry out the interpolation. In the first approach, the canonical basis was employed to obtain the interpolating polynomial and, later, the corresponding system of linear equations, which was solved by Gaussian elimination with partial pivoting. In the second, the Newton basis was used to obtain the system, resulting in a lower triangular matrix whose structure, after elementary operations, yields a block-diagonal matrix with special characteristics that is easier to work with. For validation, a comparison was made between the values obtained with the INTPOLBI and INTERTEG codes (the latter created at the Instituto de Investigaciones Electricas for the same purpose) and data banks created through the conventional process, that is, with the nuclear codes normally used. It is concluded that the nuclear data banks generated with the INTPOLBI code constitute a very good approximation that, although it does not wholly replace the conventional process, is helpful when a great number of data banks must be created. (Author)
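Bicubic interpolation over a single 16-node element can be sketched as a tensor product of cubic Lagrange polynomials over a 4×4 grid of nodal values. This is one plausible reading of the element described above; the actual basis construction inside INTPOLBI may differ.

```python
def lagrange_basis(nodes, i, t):
    """1-D cubic Lagrange basis polynomial l_i(t) over four nodes."""
    out = 1.0
    for j, nj in enumerate(nodes):
        if j != i:
            out *= (t - nj) / (nodes[i] - nj)
    return out

def bicubic_16node(xs, ys, f_vals, x, y):
    """Bicubic interpolation on one 16-node element: a tensor product
    of cubic Lagrange polynomials, f_vals[i][j] = f(xs[i], ys[j])."""
    total = 0.0
    for i in range(4):
        for j in range(4):
            total += (f_vals[i][j] *
                      lagrange_basis(xs, i, x) * lagrange_basis(ys, j, y))
    return total
```

By construction this reproduces any polynomial of degree at most three in each variable exactly, which is the accuracy property that makes a 16-node element attractive for smooth two-parameter data such as (U%, Gd%) tables.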

  5. A Combination of TsHARP and Thin Plate Spline Interpolation for Spatial Sharpening of Thermal Imagery

    Directory of Open Access Journals (Sweden)

    Xuehong Chen

    2014-03-01

    Full Text Available There have been many studies and much attention paid to spatial sharpening for thermal imagery. Among them, TsHARP, based on the good correlation between vegetation index and land surface temperature (LST, is regarded as a standard technique because of its operational simplicity and effectiveness. However, as LST is affected by other factors (e.g., soil moisture in the areas with low vegetation cover, these areas cannot be well sharpened by TsHARP. Thin plate spline (TPS is another popular downscaling technique for surface data. It has been shown to be accurate and robust for different datasets; however, it has not yet been attempted in thermal sharpening. This paper proposes to combine the TsHARP and TPS methods to enhance the advantages of each. The spatially explicit errors of these two methods were firstly estimated in theory, and then the results of TPS and TsHARP were combined with the estimation of their errors. The experiments performed across various landscapes and data showed that the proposed combined method performs more robustly and accurately than TsHARP.
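
    A minimal thin plate spline interpolator, the second ingredient of the combined method, can be sketched from scratch in Python (an illustration of the TPS technique only, not the paper's sharpening pipeline):

```python
import numpy as np

def tps_fit(points, values):
    """Fit a 2-D thin plate spline f(p) = sum_i w_i U(|p - p_i|) + a0 + a1 x + a2 y."""
    def U(r):
        # TPS radial kernel r^2 log r, with the r -> 0 limit U(0) = 0
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(r > 0, r**2 * np.log(r), 0.0)
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    K = U(d)
    P = np.column_stack([np.ones(n), points])        # affine part [1, x, y]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])  # standard TPS system
    b = np.concatenate([values, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    w, a = sol[:n], sol[n:]
    def f(p):
        r = np.linalg.norm(p - points, axis=-1)
        return w @ U(r) + a[0] + a[1]*p[0] + a[2]*p[1]
    return f

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(30, 2))
vals = np.sin(3*pts[:, 0]) + pts[:, 1]**2
f = tps_fit(pts, vals)
err = max(abs(f(p) - v) for p, v in zip(pts, vals))
print("max error at data points:", err)
```

    TPS interpolates the data exactly (up to round-off) while minimizing a bending-energy functional, which is what makes it attractive for smooth surface fields such as LST residuals.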

  6. Spatial interpolation

    NARCIS (Netherlands)

    Stein, A.

    1991-01-01

    The theory and practical application of techniques of statistical interpolation are studied in this thesis, and new developments in multivariate spatial interpolation and the design of sampling plans are discussed. Several applications to studies in soil science are

  7. Validating the Kinematic Wave Approach for Rapid Soil Erosion Assessment and Improved BMP Site Selection to Enhance Training Land Sustainability

    Science.gov (United States)

    2014-02-01

    installation based on a Euclidean distance allocation and assigned that installation’s threshold values. The second approach used a thin-plate spline ...installation critical nLS+ thresholds involved spatial interpolation. A thin-plate spline radial basis function (RBF) was selected as the...the interpolation of installation results using a thin-plate spline radial basis function technique. 6.5 OBJECTIVE #5: DEVELOP AND

  8. Impact of rain gauge quality control and interpolation on streamflow simulation: an application to the Warwick catchment, Australia

    Science.gov (United States)

    Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.

    2017-12-01

    Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, both the rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting an appropriate spatial interpolation approach. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. Unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, namely inverse distance weighting (IDW), nearest neighbours (NN), linear spline (LN), and ordinary kriging (OK), were implemented. The four methods were first assessed through a cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods produced good interpolated rainfall, while NN gave the worst result. In terms of the impact on hydrological prediction, the IDW method led to the streamflow predictions most consistent with the observations, according to the validation at five streamflow-gauged locations. The OK method
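
    Of the four interpolators compared, inverse distance weighting is the simplest to sketch (illustrative Python, not the study's code; station coordinates and values below are made up):

```python
import numpy as np

def idw(stations, rain, targets, power=2.0, eps=1e-12):
    """Inverse distance weighting: weights proportional to 1/d^power."""
    d = np.linalg.norm(targets[:, None, :] - stations[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, eps) ** power
    # A target that coincides with a station gets (numerically) all the weight
    return (w * rain).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rain = np.array([2.0, 4.0, 6.0, 8.0])
targets = np.array([[0.0, 0.0], [0.5, 0.5]])
print(idw(stations, rain, targets))   # station value, then the equal-weight average
```

    At the first target the estimate collapses to the collocated gauge value; at the centre, where all distances are equal, it reduces to the plain mean of the four gauges.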

  9. Feature displacement interpolation

    DEFF Research Database (Denmark)

    Nielsen, Mads; Andresen, Per Rønsholt

    1998-01-01

    Given a sparse set of feature matches, we want to compute an interpolated dense displacement map. The application may be stereo disparity computation, flow computation, or non-rigid medical registration. Estimation of missing image data may also be phrased in this framework. Since the features often are very sparse, the interpolation model becomes crucial. We show that a maximum likelihood estimation based on the covariance properties (Kriging) shows properties more expedient than methods such as Gaussian interpolation or Tikhonov regularization, also including scale-selection. The computational complexities are identical. We apply the maximum likelihood interpolation to growth analysis of the mandibular bone. Here, the features used are the crest-lines of the object surface.

  10. Interpolative mapping of mean precipitation in the Baltic countries by using landscape characteristics

    Directory of Open Access Journals (Sweden)

    Kalle Remm

    2011-08-01

    Full Text Available Maps of the long-term mean precipitation involving local landscape variables were generated for the Baltic countries, and the effectiveness of seven modelling methods was compared. The precipitation data were recorded in 245 meteorological stations in 1966–2005, and 51 location-related explanatory variables were used. The similarity-based reasoning in the Constud software system outperformed the other methods according to the validation fit, except for spring. Multivariate adaptive regression splines (MARS was another effective method on average. The inclusion of landscape variables, compared to inverse distance-weighted interpolation, highlights the effect of uplands, larger water bodies and forested areas. The long-term mean amount of precipitation, calculated as the station average, probably underestimates the real value for Estonia and overestimates it for Lithuania due to the uneven distribution of observation stations.

  11. Comparison of spatial interpolation methods for soil moisture and its application for monitoring drought.

    Science.gov (United States)

    Chen, Hui; Fan, Li; Wu, Wei; Liu, Hong-Bin

    2017-09-26

    Soil moisture data can reflect valuable information on soil properties, terrain features, and drought condition. The current study compared and assessed the performance of different interpolation methods for estimating soil moisture in an area with complex topography in southwest China. The approaches were inverse distance weighting, multifarious forms of kriging, regularized spline with tension, and thin plate spline. The 5-day soil moisture observed at 167 stations and daily temperature recorded at 33 stations during the period of 2010-2014 were used in the current work. Model performance was tested with the accuracy indicators of coefficient of determination (R2), mean absolute percentage error (MAPE), root mean square error (RMSE), relative root mean square error (RRMSE), and modeling efficiency (ME). The results indicated that inverse distance weighting had the best performance, with R2, MAPE, RMSE, RRMSE, and ME of 0.32, 14.37, 13.02%, 0.16, and 0.30, respectively. Based on the best method, a spatial database of soil moisture was developed and used to investigate drought condition over the study area. The results showed that the distribution of drought was characterized by evident regional differences. In addition, drought occurred mainly in August and September of the 5 years and was more prone to occur in the western and central parts than in the northeastern and southeastern areas.
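
    The accuracy indicators used in this comparison can be computed as below (a sketch; defining ME in the Nash–Sutcliffe form and R2 as the squared Pearson correlation is an assumption about the paper's exact conventions):

```python
import numpy as np

def scores(obs, pred):
    """Return R2, MAPE (%), RMSE, RRMSE and modeling efficiency (ME)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    resid = obs - pred
    rmse = np.sqrt(np.mean(resid**2))
    mape = 100.0 * np.mean(np.abs(resid / obs))   # observations must be nonzero
    rrmse = rmse / obs.mean()
    ss_tot = np.sum((obs - obs.mean())**2)
    me = 1.0 - np.sum(resid**2) / ss_tot          # Nash-Sutcliffe form
    r2 = np.corrcoef(obs, pred)[0, 1]**2          # squared Pearson correlation
    return {"R2": r2, "MAPE": mape, "RMSE": rmse, "RRMSE": rrmse, "ME": me}

obs = np.array([12.0, 15.0, 10.0, 18.0, 14.0])
print(scores(obs, obs))   # perfect prediction: zero errors, R2 = ME = 1
print(scores(obs, obs + np.array([1.0, -1.0, 0.5, -0.5, 0.0])))
```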

  12. Automatic Shape Control of Triangular B-Splines of Arbitrary Topology

    Institute of Scientific and Technical Information of China (English)

    Ying He; Xian-Feng Gu; Hong Qin

    2006-01-01

    Triangular B-splines are powerful and flexible in modeling a broader class of geometric objects defined over arbitrary, non-rectangular domains. Despite their great potential and advantages in theory, practical techniques and computational tools with triangular B-splines are less-developed. This is mainly because users have to handle a large number of irregularly distributed control points over arbitrary triangulation. In this paper, an automatic and efficient method is proposed to generate visually pleasing, high-quality triangular B-splines of arbitrary topology. The experimental results on several real datasets show that triangular B-splines are powerful and effective in both theory and practice.

  13. A novel efficient coupled polynomial field interpolation scheme for higher order piezoelectric extension mode beam finite elements

    International Nuclear Information System (INIS)

    Sulbhewar, Litesh N; Raveendranath, P

    2014-01-01

    An efficient piezoelectric smart beam finite element based on Reddy’s third-order displacement field and layerwise linear potential is presented here. The present formulation is based on the coupled polynomial field interpolation of variables, unlike conventional piezoelectric beam formulations that use independent polynomials. Governing equations derived using a variational formulation are used to establish the relationship between field variables. The resulting expressions are used to formulate coupled shape functions. Starting with an assumed cubic polynomial for transverse displacement (w) and a linear polynomial for electric potential (φ), coupled quadratic polynomials are derived for axial displacement (u) and section rotation (θ). The formulation allows accommodation of extension–bending, shear–bending and electromechanical couplings at the interpolation level itself, in a variationally consistent manner. The proposed interpolation scheme is shown to eliminate the locking effects exhibited by conventional independent polynomial field interpolations and to improve the convergence characteristics of HSDT-based piezoelectric beam elements. Also, the present coupled formulation uses only three mechanical degrees of freedom per node, one less than conventional formulations. Results from numerical test problems prove the accuracy and efficiency of the present formulation. (paper)

  14. Interpolation functions and the Lions-Peetre interpolation construction

    International Nuclear Information System (INIS)

    Ovchinnikov, V I

    2014-01-01

    The generalization of the Lions-Peetre interpolation method of means considered in the present survey is less general than the generalizations known since the 1970s. However, our level of generalization is sufficient to encompass spaces that are most natural from the point of view of applications, like the Lorentz spaces, Orlicz spaces, and their analogues. The spaces φ(X_0,X_1)_{p_0,p_1} considered here have three parameters: two positive numerical parameters p_0 and p_1 of equal standing, and a function parameter φ. For p_0 ≠ p_1 these spaces can be regarded as analogues of Orlicz spaces under the real interpolation method. Embedding criteria are established for the family of spaces φ(X_0,X_1)_{p_0,p_1}, together with optimal interpolation theorems that refine all the known interpolation theorems for operators acting on couples of weighted spaces L_p and that extend these theorems beyond scales of spaces. The main specific feature is that the function parameter φ can be an arbitrary natural functional parameter in the interpolation. Bibliography: 43 titles

  15. Contrast-guided image interpolation.

    Science.gov (United States)

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

    In this paper a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, the decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.

  16. Spatial interpolation of GPS PWV and meteorological variables over the west coast of Peninsular Malaysia during 2013 Klang Valley Flash Flood

    Science.gov (United States)

    Suparta, Wayan; Rahman, Rosnani

    2016-02-01

    Global Positioning System (GPS) receivers are widely installed throughout Peninsular Malaysia, but their implementation in weather hazard monitoring systems, such as for flash floods, is still not optimal. To increase the benefit for meteorological applications, the GPS system should be installed in collocation with meteorological sensors so that the precipitable water vapor (PWV) can be measured. The distribution of PWV is a key element of the Earth's climate for quantitative precipitation improvement as well as flash flood forecasts. The accuracy of this parameter depends to a large extent on the number of GPS receivers and meteorological sensors installed in the targeted area. Due to cost constraints, a spatial interpolation method is proposed to address these issues. In this paper, we investigated the spatial distribution of GPS PWV and meteorological variables (surface temperature, relative humidity, and rainfall) by using thin plate spline (tps) and ordinary kriging (Krig) interpolation techniques over the Klang Valley in Peninsular Malaysia (longitude: 99.5°-102.5°E and latitude: 2.0°-6.5°N). Three flash flood cases in September, October, and December 2013 were studied. The analysis was performed using mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2) to determine the accuracy and reliability of the interpolation techniques. Results at the different phases evaluated (pre, onset, and post) showed that the tps interpolation technique is more accurate, reliable, and highly correlated in estimating GPS PWV and relative humidity, whereas Krig is more reliable for predicting temperature and rainfall during pre-flash flood events. During the onset of flash flood events, both methods showed good interpolation in estimating all meteorological parameters with high accuracy and reliability. The findings suggest that the proposed spatial interpolation techniques are capable of handling limited data sources with high

  17. Digital time-interpolator

    International Nuclear Information System (INIS)

    Schuller, S.; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

    1990-01-01

    This report presents the design of a digital time meter that should be able to measure, by means of interpolation, times of 100 ns with an accuracy of 50 ps. In order to determine the best principle for interpolation, three methods were simulated on a computer with a Pascal code. On the basis of this, the best method was chosen and used in the design. In order to test the principal operation of the circuit, the part performing the interpolation was constructed and tested; the remainder of the circuit was simulated on a computer, so no data are available about the operation of the complete circuit in practice. The interpolation part, however, is the most critical part; the remainder of the circuit is more or less simple logic. This report also gives a description of the principle of interpolation and the design of the circuit. Finally, the measurement results from the prototype are presented. (author). 3 refs.; 37 figs.; 2 tabs

  18. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    Directory of Open Access Journals (Sweden)

    Van Than Dung

    Full Text Available B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there have been demands, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest possible computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving an ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and by deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth curves to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
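
    The final least-squares step, once knot locations are fixed, can be sketched with SciPy (illustrative only; the paper's two-step bisection and knot optimization are not reproduced here, and the knot placement below is simply uniform):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)                      # noiseless test curve

k = 3                                          # cubic B-spline
interior = np.linspace(0.0, 1.0, 10)[1:-1]     # 8 interior knots (given, not optimized)
t = np.r_[[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)]   # clamped knot vector

spl = make_lsq_spline(x, y, t, k=k)            # ordinary least-squares B-spline fit
err = np.abs(spl(x) - y).max()
print("max fit error:", err)
```

    A knot-optimization wrapper would move the entries of `interior` (and allow repeated knots for reduced continuity) to minimize this residual, which is the role of the non-linear least squares stage in the paper.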

  19. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    Science.gov (United States)

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
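
    The core sequence-estimation step is the classical Viterbi algorithm. A generic sketch follows, where states stand in for the candidate interpolation functions and the emission scores for the local pixel evidence; the toy two-state model and all names are illustrative, not the paper's:

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """Most probable state sequence; log_emit[t, s] scores state s at step t."""
    T, S = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans      # cand[i, j]: come from i, go to j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]               # backtrack from the best end state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy model: two "interpolation functions" (states) with sticky transitions
log_init = np.log(np.array([0.5, 0.5]))
log_trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_emit = np.log(np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]]))
print(viterbi(log_init, log_trans, log_emit))    # → [0, 0, 1, 1]
```

    The sticky transition matrix plays the role of the Markov prior over interpolation functions: it discourages switching directions at every pixel and so yields a smoother sequence of decisions than per-pixel hard choices.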

  20. Radial Basis Function Based Quadrature over Smooth Surfaces

    Science.gov (United States)

    2016-03-24

    Radial basis functions φ(r): piecewise smooth (conditionally positive definite): MN monomial |r|^(2m+1); TPS thin plate spline |r|^(2m) ln|r|; infinitely smooth ...smooth surfaces using polynomial interpolants, while [27] couples thin-plate spline interpolation (see table 1) with Green's integral formula [29

  1. Detrending of non-stationary noise data by spline techniques

    International Nuclear Information System (INIS)

    Behringer, K.

    1989-11-01

    An off-line method for detrending non-stationary noise data has been investigated. It uses a least squares spline approximation of the noise data with equally spaced breakpoints. Subtraction of the spline approximation from the noise signal at each data point gives a residual noise signal. The method acts as a high-pass filter with very sharp frequency cutoff. The cutoff frequency is determined by the breakpoint distance. The steepness of the cutoff is controlled by the spline order. (author) 12 figs., 1 tab., 5 refs
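
    The detrending scheme above (least-squares spline with equally spaced breakpoints; residual = signal minus spline) can be sketched with SciPy; the breakpoint spacing and signal parameters below are illustrative:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 2000)
trend = 0.5 * t + np.sin(0.3 * t)                 # slow non-stationary drift
noise = rng.normal(0.0, 0.2, t.size)              # the fluctuation we want to keep
y = trend + noise

# Equally spaced interior breakpoints; their spacing sets the high-pass cutoff
breakpoints = np.linspace(t[0], t[-1], 12)[1:-1]
spl = LSQUnivariateSpline(t, y, breakpoints, k=3) # least-squares cubic spline
residual = y - spl(t)                             # detrended noise signal

print("residual mean:", residual.mean())
print("residual std :", residual.std())
```

    Widening the breakpoint spacing lowers the cutoff frequency (the spline follows only slower trends), while a higher spline order sharpens the cutoff, matching the behaviour reported in the abstract.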

  2. PEMODELAN B-SPLINE DAN MARS PADA NILAI UJIAN MASUK TERHADAP IPK MAHASISWA JURUSAN DISAIN KOMUNIKASI VISUAL UK. PETRA SURABAYA

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2006-01-01

    Full Text Available Regression analysis is constructed for capturing the influences of independent variables on dependent ones. It can be done by looking at the relationship between those variables. This task of approximating the mean function can be done essentially in two ways. The quite often used parametric approach is to assume that the mean curve has some prespecified functional form. Alternatively, a nonparametric approach, i.e., without reference to a specific form, is used when there is no information about the form of the regression function (Haerdle, 1990). Therefore the nonparametric approach has more flexibility than the parametric one. The aim of this research is to find the best-fit model that captures the relationship between the admission test score and the GPA. This particular dataset was taken from the Department of Design Communication and Visual, Petra Christian University, Surabaya, for the year 1999. Both approaches were used here. In the parametric approach, we use simple linear, quadratic and cubic regression, and in the nonparametric one, we use B-Spline and Multivariate Adaptive Regression Splines (MARS). Overall, the best model was chosen based on the maximum coefficient of determination. However, for MARS, the best model was chosen based on the GCV, minimum MSE, and maximum coefficient of determination. Abstract in Bahasa Indonesia (translated): Regression analysis is used to examine the influence of independent variables on the dependent variable, by first looking at the pattern of the relationship between those variables. This can be done through two approaches. The most common and frequently used is the parametric approach, which assumes that the form of the model is specified in advance. When there is no information at all about the form of the regression function, the nonparametric approach is used (Haerdle, 1990). Because this approach does not depend on the assumption of a particular curve form, it provides greater flexibility.
The aim of this research

  3. "Plug-and-Play" potentials: Investigating quantum effects in (H2)2-Li+-benzene

    Science.gov (United States)

    D'Arcy, Jordan H.; Kolmann, Stephen J.; Jordan, Meredith J. T.

    2015-08-01

    Quantum and anharmonic effects are investigated in (H2)2-Li+-benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H2 molecule to the H2-Li+-benzene complex increases the ZPE of the system by 5.6 kJ mol-1 to 17.6 kJ mol-1. This ZPE is 42% of the total electronic binding energy of (H2)2-Li+-benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H2 to H2-Li+-benzene is 7.7 kJ mol-1, compared to 12.4 kJ mol-1 for the first H2 molecule. Anharmonicity is found to be even more important when a second (and subsequent) H2 molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H2 molecules are found at larger distance from the Li+ ion and are more confined in the θ coordinate than in H2-Li+-benzene. They also show that both H2 molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H2 molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H2)2-Li+-benzene PESs are developed. These use a modified Shepard interpolation for the Li+-benzene and H2-Li+-benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H2-H2 interaction. Because of the neglect of three-body H2, H2, Li+ terms, both fragment PESs lead to overbinding of the second H2 molecule by 1.5 kJ mol-1. Probability density histograms, however, indicate that the wavefunctions for the two H2 molecules are effectively identical on the "full" and fragment PESs. This suggests that the 1.5 kJ mol-1 error is

  4. "Plug-and-Play" potentials: Investigating quantum effects in (H2)2-Li(+)-benzene.

    Science.gov (United States)

    D'Arcy, Jordan H; Kolmann, Stephen J; Jordan, Meredith J T

    2015-08-21

    Quantum and anharmonic effects are investigated in (H2)2-Li(+)-benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H2 molecule to the H2-Li(+)-benzene complex increases the ZPE of the system by 5.6 kJ mol(-1) to 17.6 kJ mol(-1). This ZPE is 42% of the total electronic binding energy of (H2)2-Li(+)-benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H2 to H2-Li(+)-benzene is 7.7 kJ mol(-1), compared to 12.4 kJ mol(-1) for the first H2 molecule. Anharmonicity is found to be even more important when a second (and subsequent) H2 molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H2 molecules are found at larger distance from the Li(+) ion and are more confined in the θ coordinate than in H2-Li(+)-benzene. They also show that both H2 molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H2 molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H2)2-Li(+)-benzene PESs are developed. These use a modified Shepard interpolation for the Li(+)-benzene and H2-Li(+)-benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H2-H2 interaction. Because of the neglect of three-body H2, H2, Li(+) terms, both fragment PESs lead to overbinding of the second H2 molecule by 1.5 kJ mol(-1). Probability density histograms, however, indicate that the wavefunctions for the two H2 molecules are effectively identical on the "full" and fragment PESs. This suggests that

  5. Automatic lung lobe segmentation using particles, thin plate splines, and maximum a posteriori estimation.

    Science.gov (United States)

    Ross, James C; San José Estépar, Raúl; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K; Washko, George R

    2010-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases.

  6. About some properties of bivariate splines with shape parameters

    Science.gov (United States)

    Caliò, F.; Marchetti, E.

    2017-07-01

    The paper presents and proves geometrical properties of a particular bivariate spline function, built and algorithmically implemented in previous papers. These properties, typical of this family of splines, have an impact on the field of computer graphics, in particular on reverse engineering.

  7. Meshing Force of Misaligned Spline Coupling and the Influence on Rotor System

    Directory of Open Access Journals (Sweden)

    Guang Zhao

    2008-01-01

    Full Text Available The meshing force of a misaligned spline coupling is derived, the dynamic equation of a rotor-spline coupling system is established based on finite element analysis, and the influence of the meshing force on the rotor-spline coupling system is simulated by a numerical integration method. According to the theoretical analysis, the meshing force of a spline coupling is related to the coupling parameters, misalignment, transmitted torque, static misalignment, dynamic vibration displacement, and so on. The meshing force increases nonlinearly with increasing spline thickness and static misalignment, or with decreasing alignment meshing distance (AMD). The stiffness of the coupling is related to the dynamic vibration displacement, and the static misalignment is not a constant. The dynamic behaviour of the rotor-spline coupling system reveals the following: the 1X rotating speed is the main response frequency of the system when there is no misalignment, while the 2X rotating speed appears when misalignment is present. Moreover, when the misalignment increases, the vibration of the system becomes intricate; the shaft orbit departs from the origin, and the magnitudes of all frequencies increase. The research results can provide important criteria for both the optimization design of spline couplings and the troubleshooting of rotor systems.

  8. Adaptive B-spline volume representation of measured BRDF data for photorealistic rendering

    Directory of Open Access Journals (Sweden)

    Hyungjun Park

    2015-01-01

    Full Text Available Measured bidirectional reflectance distribution function (BRDF data have been used to represent complex interaction between lights and surface materials for photorealistic rendering. However, their massive size makes it hard to adopt them in practical rendering applications. In this paper, we propose an adaptive method for B-spline volume representation of measured BRDF data. It basically performs approximate B-spline volume lofting, which decomposes the problem into three sub-problems of multiple B-spline curve fitting along u-, v-, and w-parametric directions. Especially, it makes the efficient use of knots in the multiple B-spline curve fitting and thereby accomplishes adaptive knot placement along each parametric direction of a resulting B-spline volume. The proposed method is quite useful to realize efficient data reduction while smoothing out the noises and keeping the overall features of BRDF data well. By applying the B-spline volume models of real materials for rendering, we show that the B-spline volume models are effective in preserving the features of material appearance and are suitable for representing BRDF data.

  9. CMB anisotropies interpolation

    NARCIS (Netherlands)

    Zinger, S.; Delabrouille, Jacques; Roux, Michel; Maitre, Henri

    2010-01-01

    We consider the problem of the interpolation of irregularly spaced spatial data, applied to observation of Cosmic Microwave Background (CMB) anisotropies. The well-known interpolation methods and kriging are compared to the binning method which serves as a reference approach. We analyse kriging

  10. High-order accurate numerical algorithm for three-dimensional transport prediction

    Energy Technology Data Exchange (ETDEWEB)

    Pepper, D W [Savannah River Lab., Aiken, SC; Baker, A J

    1980-01-01

    The numerical solution of the three-dimensional pollutant transport equation is obtained with the method of fractional steps; advection is solved by the method of moments and diffusion by cubic splines. Topography and variable mesh spacing are accounted for with coordinate transformations. First estimate wind fields are obtained by interpolation to grid points surrounding specific data locations. Numerical results agree with results obtained from analytical Gaussian plume relations for ideal conditions. The numerical model is used to simulate the transport of tritium released from the Savannah River Plant on 2 May 1974. Predicted ground level air concentration 56 km from the release point is within 38% of the experimentally measured value.

  11. The Hill Chart Calculation for Pelton Runner Models using the HydroHillChart - Pelton Module Software

    Directory of Open Access Journals (Sweden)

    Adelina Bostan

    2015-07-01

    Full Text Available The industrial design of Pelton turbines is based on the hill chart characteristics obtained by measuring models. The primary measurement data used to obtain the hill chart can be processed graphically, by hand, or by using graphics or CAD programs; the HydroHillChart - Pelton module software is a specialized tool for building the hill chart using cubic spline interpolation functions. Thereby, based on measurements of several Pelton turbine models, a computerized library for the design of industrial Pelton turbines can be created. The paper presents the universal characteristics calculated using the HydroHillChart - Pelton module software for a series of Pelton runners.
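Hill-chart construction of this kind reduces to passing interpolating cubic splines through measured model points; a minimal sketch with SciPy, using illustrative (not measured) unit-speed/efficiency values:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical model measurements: unit speed n11 vs. efficiency (illustrative values).
n11 = np.array([30.0, 35.0, 40.0, 45.0, 50.0, 55.0])
eta = np.array([0.820, 0.870, 0.905, 0.910, 0.890, 0.850])

# Interpolating cubic spline through the measured points.
eff = CubicSpline(n11, eta)

# Scan a fine grid to locate the best-efficiency point, as a hill-chart tool would.
fine = np.linspace(30.0, 55.0, 1001)
n11_best = fine[np.argmax(eff(fine))]
```

The spline reproduces every measured point exactly, and the dense evaluation gives a smooth curve from which optimum operating points can be read off.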

  12. Numerical investigation on exterior conformal mappings with application to airfoils

    International Nuclear Information System (INIS)

    Mohamad Rashidi Md Razali; Hu Laey Nee

    2000-01-01

    A numerical method is described for computing a conformal map from an exterior region onto the exterior of the unit disk. The numerical method is based on a boundary integral equation similar to the Kerzman-Stein integral equation for interior mapping. Some examples show that numerical results of high accuracy can be obtained provided that the boundaries are smooth. This numerical method has been applied to the mapping of airfoils. However, because the parametric representation of an airfoil is not known, a cubic spline interpolation method has been used. Some numerical examples with satisfactory results have been obtained for symmetrical and cambered airfoils. (Author)
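A cubic spline parametrization of a closed boundary such as an airfoil can be built from chord length, as sketched below with SciPy's periodic boundary conditions; the sample points are a synthetic oval, not a real airfoil section:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Digitized points on a closed, airfoil-like oval (stand-in for real airfoil data).
t = np.linspace(0.0, 2.0 * np.pi, 17)
pts = np.column_stack([np.cos(t), 0.12 * np.sin(t)])
pts[-1] = pts[0]  # periodic splines require an exactly closed polygon

# Chord-length parametrization of the boundary.
s = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]

# One periodic cubic spline per coordinate gives a smooth closed curve.
sx = CubicSpline(s, pts[:, 0], bc_type="periodic")
sy = CubicSpline(s, pts[:, 1], bc_type="periodic")

# Dense, smooth boundary suitable for a boundary integral equation.
u = np.linspace(0.0, s[-1], 200)
boundary = np.column_stack([sx(u), sy(u)])
```

The periodic boundary condition makes the curve close with continuous first and second derivatives, which is what a boundary-integral conformal mapping method needs from a smooth boundary.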

  13. SPLPKG WFCMPR WFAPPX, Wilson-Fowler Spline Generator for Computer Aided Design And Manufacturing (CAD/CAM) Systems

    International Nuclear Information System (INIS)

    Fletcher, S.K.

    2002-01-01

    1 - Description of program or function: The three programs SPLPKG, WFCMPR, and WFAPPX provide the capability for interactively generating, comparing and approximating Wilson-Fowler splines. The Wilson-Fowler spline is widely used in Computer Aided Design and Manufacturing (CAD/CAM) systems. It is favored for many applications because it produces a smooth, low curvature fit to planar data points. Program SPLPKG generates a Wilson-Fowler spline passing through given nodes (with given end conditions) and also generates a piecewise linear approximation to that spline within a user-defined tolerance. The program may be used to generate a 'desired' spline against which to compare other splines generated by CAD/CAM systems. It may also be used to generate an acceptable approximation to a desired spline in the event that an acceptable spline cannot be generated by the receiving CAD/CAM system. SPLPKG writes an IGES file of points evaluated on the spline and/or a file containing the spline description. Program WFCMPR computes the maximum difference between two Wilson-Fowler splines and may be used to verify the spline recomputed by a receiving system. It compares two Wilson-Fowler splines with common nodes and reports the maximum distance between curves (measured perpendicular to segments) and the maximum difference of their tangents (or normals), both computed along the entire length of the splines. Program WFAPPX computes the maximum difference between a Wilson-Fowler spline and a piecewise linear curve. It may be used to accept or reject a proposed approximation to a desired Wilson-Fowler spline, even if the origin of the approximation is unknown. The maximum deviation between these two curves, and the parameter value on the spline where it occurs, are reported. 2 - Restrictions on the complexity of the problem - Maxima of: 1600 evaluation points (SPLPKG), 1000 evaluation points (WFAPPX), 1000 linear curve breakpoints (WFAPPX), 100 spline nodes

  14. Spatiotemporal Interpolation Methods for Solar Event Trajectories

    Science.gov (United States)

    Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

    2018-05-01

    This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.

  15. Exponential B-splines and the partition of unity property

    DEFF Research Database (Denmark)

    Christensen, Ole; Massopust, Peter

    2012-01-01

    We provide an explicit formula for a large class of exponential B-splines. Also, we characterize the cases where the integer-translates of an exponential B-spline form a partition of unity up to a multiplicative constant. As an application of this result we construct explicitly given pairs of dual...

  16. The research on NURBS adaptive interpolation technology

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Zhang, Sujia; Zhang, Feng

    2017-04-01

    In order to address the problems of NURBS adaptive interpolation technology, such as long interpolation times, complicated calculations, and step errors along the NURBS curve that are difficult to control, this paper proposes an adaptive interpolation algorithm for NURBS curves together with its simulation. The adaptive interpolator computes the successive interpolation points (xi, yi, zi). Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems, and that the algorithm is correct and consistent with the requirements of NURBS curve interpolation.

  17. Shape-based grey-level image interpolation

    International Nuclear Information System (INIS)

    Keh-Shih Chuang; Chun-Yuan Chen; Ching-Kai Yeh

    1999-01-01

    The three-dimensional (3D) object data obtained from a CT scanner usually have unequal sampling frequencies in the x-, y- and z-directions. Generally, the 3D data are first interpolated between slices to obtain isotropic resolution, reconstructed, then operated on using object extraction and display algorithms. The traditional grey-level interpolation introduces a layer of intermediate substance and is not suitable for objects that are very different from the opposite background. The shape-based interpolation method transfers a pixel location to a parameter related to the object shape and the interpolation is performed on that parameter. This process is able to achieve a better interpolation but its application is limited to binary images only. In this paper, we present an improved shape-based interpolation method for grey-level images. The new method uses a polygon to approximate the object shape and performs the interpolation using polygon vertices as references. The binary images representing the shape of the object were first generated via image segmentation on the source images. The target object binary image was then created using regular shape-based interpolation. The polygon enclosing the object for each slice can be generated from the shape of that slice. We determined the relative location in the source slices of each pixel inside the target polygon using the vertices of a polygon as the reference. The target slice grey-level was interpolated from the corresponding source image pixels. The image quality of this interpolation method is better and the mean squared difference is smaller than with traditional grey-level interpolation. (author)
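A common way to realize shape-based interpolation of binary slices is to interpolate signed distance transforms rather than grey levels; below is a minimal sketch of that idea (using discs as stand-in segmentations, and the classic distance-transform variant rather than the polygon-vertex method of this paper):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Positive inside the object, negative outside."""
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

# Two binary slices: small and large discs, stand-ins for segmented CT slices.
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(yy - 32.0, xx - 32.0)
slice_a = r <= 8.0
slice_b = r <= 16.0

# The slice halfway between them: average the signed distances, then threshold.
slice_mid = (0.5 * signed_distance(slice_a) + 0.5 * signed_distance(slice_b)) > 0.0
```

Averaging the distance fields interpolates the object *shape*, so the intermediate slice is a disc of intermediate radius rather than a grey-level blend of the two inputs.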

  18. Comparative Analysis for Robust Penalized Spline Smoothing Methods

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2014-01-01

    Full Text Available Smoothing noisy data is commonly encountered in the engineering domain, and robust penalized regression spline models are currently perceived to be the most promising methods for coping with this issue, due to their flexibility in capturing the nonlinear trends in the data and effectively alleviating the disturbance from outliers. Against such a background, this paper conducts a thorough comparative analysis of two popular robust smoothing techniques, the M-type estimator and S-estimation for penalized regression splines, both of which are re-elaborated starting from their origins, with their derivation process reformulated and the corresponding algorithms reorganized under a unified framework. The performance of these two estimators is evaluated in terms of fitting accuracy, robustness, and execution time upon the MATLAB platform. Detailed comparative experiments demonstrate that robust penalized spline smoothing methods are resistant to noise compared with the nonrobust penalized LS spline regression method. Furthermore, the M-estimator performs stably only for observations with moderate perturbation error, whereas the S-estimator behaves fairly well even for heavily contaminated observations, though at the cost of more execution time. These findings can serve as guidance for selecting an appropriate approach for smoothing noisy data.
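An M-type penalized spline estimator of the kind compared above can be sketched as iteratively reweighted least squares with Huber weights on a cubic P-spline basis; the knot count, penalty weight, and test signal below are illustrative choices, not those of the paper:

```python
import numpy as np
from scipy.interpolate import BSpline

def huber_pspline(x, y, n_seg=10, lam=0.1, c=1.345, iters=30):
    """M-type penalized spline: IRLS with Huber weights on a cubic P-spline."""
    k = 3
    inner = np.linspace(x.min() - 1e-9, x.max() + 1e-9, n_seg + 1)
    t = np.r_[[inner[0]] * k, inner, [inner[-1]] * k]   # open uniform knot vector
    B = BSpline.design_matrix(x, t, k).toarray()
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)        # second-order difference penalty
    P = lam * (D.T @ D)
    w = np.ones_like(y)
    for _ in range(iters):
        coef = np.linalg.solve(B.T @ (B * w[:, None]) + P, B.T @ (w * y))
        r = y - B @ coef
        s = np.median(np.abs(r)) / 0.6745 + 1e-12       # robust (MAD) scale estimate
        a = np.abs(r) / s
        w = np.where(a <= c, 1.0, c / a)                # Huber weights
    return B @ coef

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 120)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.1, x.size)
y[::15] += 3.0                                          # heavy outliers
fit = huber_pspline(x, y)
```

Because the Huber weights shrink the influence of large residuals at each iteration, the fitted curve tracks the underlying sinusoid instead of being dragged toward the contaminated points.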

  19. Bayesian Analysis for Penalized Spline Regression Using WinBUGS

    Directory of Open Access Journals (Sweden)

    Ciprian M. Crainiceanu

    2005-09-01

    Full Text Available Penalized splines can be viewed as BLUPs in a mixed model framework, which allows the use of mixed model software for smoothing. Thus, software originally developed for Bayesian analysis of mixed models can be used for penalized spline regression. Bayesian inference for nonparametric models enjoys the flexibility of nonparametric models and the exact inference provided by the Bayesian inferential machinery. This paper provides a simple, yet comprehensive, set of programs for the implementation of nonparametric Bayesian analysis in WinBUGS. Good mixing properties of the MCMC chains are obtained by using low-rank thin-plate splines, while simulation times per iteration are reduced employing WinBUGS specific computational tricks.

  20. P-Splines Using Derivative Information

    KAUST Repository

    Calderon, Christopher P.; Martinez, Josue G.; Carroll, Raymond J.; Sorensen, Danny C.

    2010-01-01

    in quantitatively summarizing such data. In this work, functions estimated using P-splines are associated with stochastic differential equations (SDEs). It is shown how quantities estimated in a single SDE summarize fast-scale phenomena, whereas variation between

  1. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; Keller, J.; Wallen, R.; Errichello, R.; Halse, C.; Lambert, S.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  2. Statistical analysis of sediment toxicity by additive monotone regression splines

    NARCIS (Netherlands)

    Boer, de W.J.; Besten, den P.J.; Braak, ter C.J.F.

    2002-01-01

    Modeling nonlinearity and thresholds in dose-effect relations is a major challenge, particularly in noisy data sets. Here we show the utility of nonlinear regression with additive monotone regression splines. These splines lead almost automatically to the estimation of thresholds. We applied this

  3. Curve fitting and modeling with splines using statistical variable selection techniques

    Science.gov (United States)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.

  4. Splines and their reciprocal-bases in volume-integral equations

    International Nuclear Information System (INIS)

    Sabbagh, H.A.

    1993-01-01

    The authors briefly outline the use of higher-order splines and their reciprocal-bases in discretizing the volume-integral equations of electromagnetics. The discretization is carried out by means of the method of moments, in which the expansion functions are the higher-order splines, and the testing functions are the corresponding reciprocal-basis functions. These functions satisfy an orthogonality condition with respect to the spline expansion functions. Thus, the method is not Galerkin, but the structure of the resulting equations is quite regular, nevertheless. The theory is applied to the volume-integral equations for the unknown current density, or unknown electric field, within a scattering body, and to the equations for eddy-current nondestructive evaluation. Numerical techniques for computing the matrix elements are also given

  5. Slow Motion and Zoom in HD Digital Videos Using Fractals

    Directory of Open Access Journals (Sweden)

    Maurizio Murroni

    2009-01-01

    Full Text Available Slow motion replay and spatial zooming are special effects used in digital video rendering. At present, most techniques for digital spatial zoom and slow motion are based on interpolation, both for enlarging the size of the original pictures and for generating additional intermediate frames. Interpolation is typically done either by linear or cubic spline functions or by motion estimation/compensation, each of which can be applied pixel by pixel or by partitioning frames into blocks. The purpose of this paper is to present an alternative technique combining fractal theory and wavelet decomposition to achieve spatial zoom and slow motion replay of HD digital color video sequences. Fast scene change detection, active scene detection, wavelet subband analysis, and color fractal coding based on the Earth Mover's Distance (EMD) measure are used to reduce the computational load and to improve visual quality. Experiments show that the proposed scheme achieves better results in terms of overall visual quality compared to state-of-the-art techniques.

  6. Implementation and analysis of trajectory schemes for Infomate: a serial link robot manipulator

    International Nuclear Information System (INIS)

    Rauf, A.; Ahmed, S.M.; Asif, M.; Ahmad, M.

    1997-01-01

    Trajectory planning schemes generally interpolate or approximate the desired path by a class of polynomial functions and generate a sequence of time-based control set points for moving the manipulator from an initial configuration to a final configuration. Trajectory generation schemes can be implemented in joint space or in Cartesian space. This paper describes joint-space and Cartesian-space trajectory schemes and their implementation for Infomate, a six-degrees-of-freedom serial link robot manipulator. LSPBs (linear segments with parabolic blends) and cubic splines are chosen as the interpolating functions of time for each type of scheme. The modules developed have been incorporated in an OLP system for Infomate. The trajectory planning schemes discussed in this paper incorporate the actuator velocity and acceleration constraints. A comparison with respect to computation and motion time is presented for the above-mentioned trajectory schemes. Algorithms have been developed that enable the end effector to follow a straight line; other paths such as circles and ellipses can be approximated by straight-line segments. (author)
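The LSPB (linear segment with parabolic blends) interpolating function mentioned above can be sketched as follows; the joint values, duration, and blend time are illustrative, not Infomate parameters:

```python
import numpy as np

def lspb(q0, q1, T, tb, t):
    """Linear segment with parabolic blends from q0 to q1 over duration T.

    tb is the blend time at each end (0 < tb <= T/2); t may be an array.
    """
    t = np.asarray(t, dtype=float)
    a = (q1 - q0) / (tb * (T - tb))          # blend acceleration
    q = np.empty_like(t)
    m = t < tb                               # initial parabolic blend
    q[m] = q0 + 0.5 * a * t[m] ** 2
    m = (t >= tb) & (t <= T - tb)            # constant-velocity linear segment
    q[m] = q0 + a * tb * (t[m] - tb / 2.0)
    m = t > T - tb                           # final parabolic blend (mirror image)
    q[m] = q1 - 0.5 * a * (T - t[m]) ** 2
    return q

t = np.linspace(0.0, 2.0, 201)
q = lspb(0.0, 1.0, T=2.0, tb=0.5, t=t)
```

The three pieces meet with continuous position and velocity, so acceleration is bounded by the blend value `a`, which is exactly why LSPBs are used to respect actuator limits.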

  7. A comparison of spatial interpolation methods for soil temperature over a complex topographical region

    Science.gov (United States)

    Wu, Wei; Tang, Xiao-Ping; Ma, Xue-Qing; Liu, Hong-Bin

    2016-08-01

    Soil temperature variability data provide valuable information on understanding land-surface ecosystem processes and climate change. This study developed and analyzed a spatial dataset of monthly mean soil temperature at a depth of 10 cm over a complex topographical region in southwestern China. The records were measured at 83 stations during the period of 1961-2000. Nine approaches were compared for interpolating soil temperature. The accuracy indicators were root mean square error (RMSE), modelling efficiency (ME), and coefficient of residual mass (CRM). The results indicated that thin plate spline with latitude, longitude, and elevation gave the best performance with RMSE varying between 0.425 and 0.592 °C, ME between 0.895 and 0.947, and CRM between -0.007 and 0.001. A spatial database was developed based on the best model. The dataset showed that larger seasonal changes of soil temperature were from autumn to winter over the region. The northern and eastern areas with hilly and low-middle mountains experienced larger seasonal changes.
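The three accuracy indicators used above can be computed directly from observed/predicted pairs; a small sketch with made-up numbers, using the standard definitions of RMSE, Nash-Sutcliffe-style modelling efficiency, and the coefficient of residual mass:

```python
import numpy as np

def rmse(obs, pred):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def modelling_efficiency(obs, pred):
    """Nash-Sutcliffe style ME: 1 is a perfect model, 0 is no better than the mean."""
    return float(1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2))

def coefficient_of_residual_mass(obs, pred):
    """CRM > 0 indicates net underestimation, CRM < 0 net overestimation."""
    return float((np.sum(obs) - np.sum(pred)) / np.sum(obs))

# Made-up observed vs. interpolated soil temperatures (degrees C).
obs = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
pred = np.array([10.5, 11.5, 14.0, 16.5, 17.5])
```

For this toy pair the predictions scatter symmetrically around the observations, so CRM is zero while RMSE and ME quantify the residual spread.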

  8. A New Method to Solve Numeric Solution of Nonlinear Dynamic System

    Directory of Open Access Journals (Sweden)

    Min Hu

    2016-01-01

    Full Text Available It is well known that the cubic spline function has the advantages of simple form, good convergence, good approximation, and second-order smoothness. A particular class of cubic spline functions is constructed, and an effective method for computing the numerical solution of nonlinear dynamic systems is proposed based on this cubic spline function. Compared with existing methods, this method not only has high approximation precision but also avoids the Runge phenomenon. An error analysis of several methods is given via two numerical examples, which shows that the proposed method is a much more feasible tool for engineering practice.

  9. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.

  10. Systems of Inhomogeneous Linear Equations

    Science.gov (United States)

    Scherer, Philipp O. J.

    Many problems in physics and especially computational physics involve systems of linear equations which arise e.g. from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives. They can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
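The specialized Gaussian elimination mentioned for tridiagonal systems is the Thomas algorithm; below is a minimal sketch applied to the 1-4-1 system that arises in natural cubic spline interpolation (the right-hand side is illustrative):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d in O(n).

    a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal (c[-1] unused).
    """
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                    # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# The "1-4-1" system of natural cubic spline interpolation (illustrative rhs).
n = 6
a = np.r_[0.0, np.ones(n - 1)]
b_diag = np.full(n, 4.0)
c = np.r_[np.ones(n - 1), 0.0]
rhs = np.arange(1.0, n + 1)
x = thomas(a, b_diag, c, rhs)
```

The diagonally dominant 1-4-1 matrix guarantees the algorithm is stable without pivoting, which is why spline systems are solved this way rather than by general Gaussian elimination.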

  11. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing material is an important functional material for electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing materials, this paper studied two different interpolation methods for the electromagnetic parameters, Lagrange interpolation and Hermite interpolation, based on the electromagnetic parameters of paraffin-based mixtures of spherical and flaky carbonyl iron. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated with the electromagnetic parameters obtained by interpolation is on the whole consistent with that obtained through experiment. - Highlights: • We use an interpolation algorithm to calculate EM parameters with limited samples. • Interpolation methods can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with RL obtained from experiment
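The two interpolation schemes compared in this record are both available in SciPy; a small sketch contrasting a global Lagrange polynomial with a cubic Hermite spline that exploits derivative information (the sampled curve is a smooth stand-in, not measured permittivity data):

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline, lagrange

# Sampled curve standing in for a measured electromagnetic-parameter trend.
x = np.linspace(0.0, 2.0, 5)
f = np.exp(-x)
df = -np.exp(-x)  # derivative information, used only by the Hermite scheme

lag = lagrange(x, f)                  # single global interpolating polynomial
her = CubicHermiteSpline(x, f, df)    # piecewise cubics matching values and slopes

xs = np.linspace(0.0, 2.0, 101)
err_lag = float(np.max(np.abs(lag(xs) - np.exp(-xs))))
err_her = float(np.max(np.abs(her(xs) - np.exp(-xs))))
```

On this smooth test curve the Hermite spline, which also matches slopes at the samples, comes out more accurate between the nodes than the Lagrange polynomial, consistent with the record's finding.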

  12. Image Interpolation with Contour Stencils

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    Image interpolation is the problem of increasing the resolution of an image. Linear methods must compromise between artifacts like jagged edges, blurring, and overshoot (halo) artifacts. More recent works consider nonlinear methods to improve interpolation of edges and textures. In this paper we apply contour stencils for estimating the image contours based on total variation along curves and then use this estimation to construct a fast edge-adaptive interpolation.

  13. Cubical local partial orders on cubically subdivided spaces - existence and construction

    DEFF Research Database (Denmark)

    Fajstrup, Lisbeth

    The geometric models of Higher Dimensional Automata and Dijkstra's PV-model are cubically subdivided topological spaces with a local partial order. If a cubicalization of a topological space is free of immersed cubic Möbius bands, then there are consistent choices of direction in all cubes, such ...... that the underlying geometry of an HDA may be quite complicated....

  14. Cubical local partial orders on cubically subdivided spaces - Existence and construction

    DEFF Research Database (Denmark)

    Fajstrup, Lisbeth

    2006-01-01

    The geometric models of higher dimensional automata (HDA) and Dijkstra's PV-model are cubically subdivided topological spaces with a local partial order. If a cubicalization of a topological space is free of immersed cubic Möbius bands, then there are consistent choices of direction in all cubes...... that the underlying geometry of an HDA may be quite complicated....

  15. Revisiting Veerman’s interpolation method

    DEFF Research Database (Denmark)

    Christiansen, Peter; Bay, Niels Oluf

    2016-01-01

    This article describes an investigation of Veerman's interpolation method and its applicability for determining sheet metal formability. The theoretical foundation is established and its mathematical assumptions are clarified. An exact Lagrangian interpolation scheme is also established for comparison. Bulge testing and tensile testing of aluminium sheets containing electro-chemically etched circle grids are performed to experimentally determine the forming limit of the sheet material. The forming limit is determined using (a) Veerman's interpolation method, (b) exact Lagrangian interpolation, and (c) FE simulations. A comparison of the determined forming limits yields insignificant differences in the limit strain obtained with Veerman's method or exact Lagrangian interpolation for the two sheet metal forming processes investigated. The agreement with the FE simulations is reasonable.

  16. A Unified Pricing of Variable Annuity Guarantees under the Optimal Stochastic Control Framework

    Directory of Open Access Journals (Sweden)

    Pavel V. Shevchenko

    2016-07-01

    Full Text Available In this paper, we review pricing of the variable annuity living and death guarantees offered to retail investors in many countries. Investors purchase these products to take advantage of market growth and protect savings. We present pricing of these products via an optimal stochastic control framework and review the existing numerical methods. We also discuss pricing under the complete/incomplete financial market models, stochastic mortality and optimal/sub-optimal policyholder behavior, and in the presence of taxes. For numerical valuation of these contracts in the case of a simple risky asset process, we develop a direct integration method based on Gauss-Hermite quadratures with a one-dimensional cubic spline for calculation of the expected contract value, and a bi-cubic spline interpolation for applying the jump conditions across the contract cashflow event times. This method is easier to implement and faster when compared to the partial differential equation methods if the transition density (or its moments) of the risky asset underlying the contract is known in closed form between the event times. We present accurate numerical results for pricing of a Guaranteed Minimum Accumulation Benefit (GMAB) guarantee available on the market that can serve as a numerical benchmark for practitioners and researchers developing pricing of variable annuity guarantees to assess the accuracy of their numerical implementation.
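The Gauss-Hermite quadrature step for an expected contract value can be sketched as below; the asset model is a plain lognormal (GBM) process and the parameters are illustrative, not taken from the paper:

```python
import numpy as np

# Gauss-Hermite rule: E[g(Z)] for Z ~ N(0, 1) is sum(w_i * g(sqrt(2) * x_i)) / sqrt(pi).
x, w = np.polynomial.hermite.hermgauss(32)

def expect_normal(g):
    """Expectation of g(Z) for standard normal Z via 32-point Gauss-Hermite."""
    return float(np.sum(w * g(np.sqrt(2.0) * x)) / np.sqrt(np.pi))

# Expected account value at time T under GBM (illustrative parameters).
s0, r, sigma, T = 100.0, 0.03, 0.2, 1.0
mean_sT = expect_normal(
    lambda z: s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
)
# By construction this equals the forward value s0 * exp(r * T).
```

In the paper's setting, `g` would be the spline-interpolated continuation value of the contract between cashflow event times rather than the raw payoff used here.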

  17. Interferometric interpolation of sparse marine data

    KAUST Repository

    Hanafy, Sherif M.

    2013-10-11

    We present the theory and numerical results for interferometrically interpolating 2D and 3D marine surface seismic profile data. For the interpolation of seismic data we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both recorded and model-based Green's functions can help in minimizing artefacts in a virtual shot gather. If the up- and downgoing separation is not possible, noticeable artefacts will be generated in the virtual shot gather. As a partial remedy we iteratively use a non-stationary 1D multi-channel matching filter with the interpolated data. Results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Comparing our results to those of f-k interpolation shows that the synthetic example gives comparable results while the field example shows better interpolation quality for the interferometric method. © 2013 European Association of Geoscientists & Engineers.

  18. Final report on Production Test No. 105-245-P -- Effectiveness of cadmium coated splines

    Energy Technology Data Exchange (ETDEWEB)

    Carson, A.B.

    1949-05-19

    This report discusses cadmium coated splines which have been developed to supplement the regular control rod systems under emergency shutdown conditions from higher power levels. The objective of this test was to determine the effectiveness of one such spline placed in a tube in the central zone of a pile, and of two splines in the same tube. In addition, the process control group of the P Division asked that probable spline requirements for safe operation at various power levels be estimated, and the details included in this report. The results of the test indicated a reactivity value of 10.5 ± 1.0 ih for a single spline, and 19.0 ± 1.0 ih for two splines in tube 1674-B under the loading conditions of 4-27-49, the date of the test. The temperature rise of the cooling water for this tube under these conditions was found to be 37.2°C for 275 MW operation.

  19. Contour interpolated radial basis functions with spline boundary correction for fast 3D reconstruction of the human articular cartilage from MR images

    Energy Technology Data Exchange (ETDEWEB)

    Javaid, Zarrar; Unsworth, Charles P., E-mail: c.unsworth@auckland.ac.nz [Department of Engineering Science, The University of Auckland, Auckland 1010 (New Zealand); Boocock, Mark G.; McNair, Peter J. [Health and Rehabilitation Research Center, Auckland University of Technology, Auckland 1142 (New Zealand)

    2016-03-15

    Purpose: The aim of this work is to demonstrate a new, user-friendly image processing technique that can provide a “near real-time” 3D reconstruction of the articular cartilage of the human knee from MR images. This would serve as a point-of-care 3D visualization tool which would benefit a consultant radiologist in the visualization of the human articular cartilage. Methods: The authors introduce a novel fusion of an adaptation of the contour method known as “contour interpolation (CI)” with radial basis functions (RBFs) which they describe as “CI-RBFs.” The authors also present a spline boundary correction which further enhances volume estimation of the method. A subject cohort consisting of 17 right nonpathological knees (ten female and seven male) is assessed to validate the quality of the proposed method. The authors demonstrate how the CI-RBF method dramatically reduces the number of data points required for fitting an implicit surface to the entire cartilage, thus significantly improving the speed of reconstruction over the comparable RBF reconstruction method of Carr. The authors compare the CI-RBF method volume estimation to a typical commercial package (3D DOCTOR), Carr’s RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Results: The authors demonstrate how the CI-RBF method significantly reduces the number of data points (p-value < 0.0001) required for fitting an implicit surface to the cartilage, by 48%, 31%, and 44% for the patellar, tibial, and femoral cartilages, respectively. Thus, the speed of reconstruction is significantly improved (p-value < 0.0001) by 39%, 40%, and 44% for the patellar, tibial, and femoral cartilages over the comparable RBF model of Carr, providing a near real-time reconstruction of 6.49, 8.88, and 9.43 min for the patellar, tibial, and femoral cartilages, respectively. In addition, it is demonstrated how the CI-RBF method matches the volume

  20. Contour interpolated radial basis functions with spline boundary correction for fast 3D reconstruction of the human articular cartilage from MR images

    International Nuclear Information System (INIS)

    Javaid, Zarrar; Unsworth, Charles P.; Boocock, Mark G.; McNair, Peter J.

    2016-01-01

    Purpose: The aim of this work is to demonstrate a new, user-friendly image processing technique that can provide a “near real-time” 3D reconstruction of the articular cartilage of the human knee from MR images. This would serve as a point-of-care 3D visualization tool which would benefit a consultant radiologist in the visualization of the human articular cartilage. Methods: The authors introduce a novel fusion of an adaptation of the contour method known as “contour interpolation (CI)” with radial basis functions (RBFs) which they describe as “CI-RBFs.” The authors also present a spline boundary correction which further enhances volume estimation of the method. A subject cohort consisting of 17 right nonpathological knees (ten female and seven male) is assessed to validate the quality of the proposed method. The authors demonstrate how the CI-RBF method dramatically reduces the number of data points required for fitting an implicit surface to the entire cartilage, thus significantly improving the speed of reconstruction over the comparable RBF reconstruction method of Carr. The authors compare the CI-RBF method volume estimation to a typical commercial package (3D DOCTOR), Carr’s RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Results: The authors demonstrate how the CI-RBF method significantly reduces the number of data points (p-value < 0.0001) required for fitting an implicit surface to the cartilage, by 48%, 31%, and 44% for the patellar, tibial, and femoral cartilages, respectively. Thus, the speed of reconstruction is significantly improved (p-value < 0.0001) by 39%, 40%, and 44% for the patellar, tibial, and femoral cartilages over the comparable RBF model of Carr, providing a near real-time reconstruction of 6.49, 8.88, and 9.43 min for the patellar, tibial, and femoral cartilages, respectively. In addition, it is demonstrated how the CI-RBF method matches the volume

  1. Edge-detect interpolation for direct digital periapical images

    International Nuclear Information System (INIS)

    Song, Nam Kyu; Koh, Kwang Joon

    1998-01-01

    The purpose of this study was to aid the use of direct digital periapical images by means of edge-detect interpolation. This study was performed by image processing of 20 digital periapical images: pixel replication, linear non-interpolation, linear interpolation, and edge-sensitive interpolation. The obtained results were as follows: 1. Pixel replication showed blocking artifact and serious image distortion. 2. Linear interpolation showed a smoothing effect on the edge. 3. Edge-sensitive interpolation overcame the smoothing effect on the edge and showed a better image.
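
    The trade-off the study describes — smoothing flat regions while preserving edges — can be illustrated with a toy 1D upscaler (a hypothetical illustration, not the interpolation used in the study; the threshold value is arbitrary):

```python
def upscale_row(row, edge_threshold=50):
    """Double the sampling of a row of pixel values: insert midpoints by
    linear interpolation, but replicate across strong edges instead of
    averaging them away."""
    out = []
    for i in range(len(row) - 1):
        a, b = row[i], row[i + 1]
        out.append(a)
        if abs(b - a) > edge_threshold:
            out.append(a)            # edge: keep the step sharp
        else:
            out.append((a + b) / 2)  # smooth region: linear interpolation
    out.append(row[-1])
    return out
```

    Plain linear interpolation would place the value 50 at the midpoint of a 0-to-100 step and blur the edge; the edge-sensitive branch keeps the step intact.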

  2. A systematic comparison of motion artifact correction techniques for functional near-infrared spectroscopy

    DEFF Research Database (Denmark)

    Cooper, Robert J; Selb, Juliette; Gagnon, Louis

    2012-01-01

    Principal component analysis, spline interpolation, wavelet analysis, and Kalman filtering approaches are compared to one another and to standard approaches using the accuracy of the recovered, simulated hemodynamic response function (HRF). Each of the four motion correction techniques we tested yields a significant reduction in the mean-squared error (MSE) and a significant increase in the contrast-to-noise ratio (CNR) of the recovered HRF when compared to no correction and compared to a process of rejecting motion-contaminated trials. Spline interpolation produces the largest average reduction in MSE (55%) while wavelet analysis produces the highest average increase in CNR (39%). On the basis of this analysis, we recommend the routine application of motion correction techniques (particularly spline interpolation or wavelet analysis) to minimize the impact of motion artifacts on functional NIRS data.

  3. B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy

    Directory of Open Access Journals (Sweden)

    Shilpa Dilipkumar

    2015-03-01

    Full Text Available An iterative image reconstruction technique employing a B-Spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transition, compact support and are the shortest polynomial splines. Incorporation of the B-spline potential function in the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) and maximum-a-posteriori (MAP) with quadratic potential function shows its superiority over the others. The B-Spline MAP technique can find applications in several imaging modalities of fluorescence microscopy like selective plane illumination microscopy, localization microscopy and STED.

  4. Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yi [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keller, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Errichello, Robert [GEARTECH, Houston, TX (United States); Halse, Chris [Romax Technology, Nottingham (United Kingdom)

    2013-12-01

    Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.

  5. Intensity Conserving Spectral Fitting

    Science.gov (United States)

    Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.

    2015-01-01

    The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure that corrects for this effect. It can be used with any fitting function, but we employ a cubic spline in a new analysis routine called Intensity Conserving Spline Interpolation (ICSI). As the name implies, it conserves the observed intensity within each wavelength bin, which ordinary fits do not. Given the rapid convergence, speed of computation, and ease of use, we suggest that ICSI be made a standard component of the processing pipeline for spectroscopic data.
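
    The effect ICSI corrects for can be seen at leading order: the average of a smooth profile I over a bin of width h differs from the bin-center value by approximately (h²/24)·I″. A schematic iterative correction based on that relation (an illustration of the intensity-conserving idea only, not the published ICSI routine, which uses a full cubic spline fit):

```python
import math

def deconvolve_bin_average(observed, h, iterations=25):
    """Iteratively recover bin-center intensities from bin-averaged samples.

    Uses the leading-order relation  bin_average(I) ~ I + (h^2/24) I''  with
    a finite-difference estimate of I''; endpoints are left unchanged.
    """
    corrected = list(observed)
    for _ in range(iterations):
        for i in range(1, len(corrected) - 1):
            d2 = (corrected[i - 1] - 2.0 * corrected[i]
                  + corrected[i + 1]) / h ** 2
            corrected[i] = observed[i] - (h ** 2 / 24.0) * d2
    return corrected

# Demonstration: a Gaussian line profile sampled as exact bin averages.
h = 1.0
centers = [i - 5.0 for i in range(11)]
profile = lambda x: math.exp(-0.5 * x * x)

def bin_average(c, n=400):
    # fine midpoint rule over one wavelength bin centered at c
    return sum(profile(c - h / 2 + (k + 0.5) * h / n) for k in range(n)) / n

observed = [bin_average(c) for c in centers]
recovered = deconvolve_bin_average(observed, h)
# the peak sample moves back toward the true peak value profile(0) = 1
```

    The bin-averaged peak sample understates the true peak; after the correction converges, the recovered values reproduce the observed bin averages while approximating the underlying profile at the bin centers.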

  6. Generalized interpolative quantum statistics

    International Nuclear Information System (INIS)

    Ramanathan, R.

    1992-01-01

    A generalized interpolative quantum statistics is presented by conjecturing a certain reordering of phase space due to the presence of possible exotic objects other than bosons and fermions. Such an interpolation achieved through a Bose-counting strategy predicts the existence of an infinite quantum Boltzmann-Gibbs statistics akin to the one discovered by Greenberg recently

  7. Acoustic Emission Signatures of Fatigue Damage in Idealized Bevel Gear Spline for Localized Sensing

    Directory of Open Access Journals (Sweden)

    Lu Zhang

    2017-06-01

    Full Text Available In many rotating machinery applications, such as helicopters, the splines of an externally-splined steel shaft that emerges from the gearbox engage with the reverse geometry of an internally splined driven shaft for the delivery of power. The splined section of the shaft is a critical and non-redundant element which is prone to cracking due to complex loading conditions. Thus, early detection of flaws is required to prevent catastrophic failures. The acoustic emission (AE) method is a direct way of detecting such active flaws, but its application to detect flaws in a splined shaft in a gearbox is difficult due to the interference of background noise and uncertainty about the effects of the wave propagation path on the received AE signature. Here, to model how AE may detect fault propagation in a hollow cylindrical splined shaft, the splined section is essentially unrolled into a metal plate of the same thickness as the cylinder wall. Spline ridges are cut into this plate, a through-notch is cut perpendicular to the spline to model fatigue crack initiation, and tensile cyclic loading is applied parallel to the spline to propagate the crack. In this paper, a new piezoelectric sensor array is introduced with the purpose of placing the sensors within the gearbox to minimize the wave propagation path. The fatigue crack growth of a notched and flattened gearbox spline component is monitored using the new piezoelectric sensor array and conventional sensors in a laboratory environment with the purpose of developing source models and testing the new sensor performance. The AE data is continuously collected together with strain gauges strategically positioned on the structure. A significant amount of continuous emission due to the plastic deformation accompanying the crack growth is observed. The frequency spectra of continuous emissions and burst emissions are compared to understand the differences between plastic deformation and sudden crack jumps. The

  8. Integration by cell algorithm for Slater integrals in a spline basis

    International Nuclear Information System (INIS)

    Qiu, Y.; Fischer, C.F.

    1999-01-01

    An algorithm for evaluating Slater integrals in a B-spline basis is introduced. Based on the piecewise property of the B-splines, the algorithm divides the two-dimensional (r1, r2) region into a number of rectangular cells according to the chosen grid and implements the two-dimensional integration over each individual cell using Gaussian quadrature. Over the off-diagonal cells, the integrands are separable so that each two-dimensional cell-integral is reduced to a product of two one-dimensional integrals. Furthermore, the scaling invariance of the B-splines in the logarithmic region of the chosen grid is fully exploited such that only some of the cell integrations need to be implemented. The values of given Slater integrals are obtained by assembling the cell integrals. This algorithm significantly improves the efficiency and accuracy of the traditional method that relies on the solution of differential equations and renders the B-spline method more effective when applied to multi-electron atomic systems.
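
    The off-diagonal-cell reduction described above can be sketched as follows: because the integrand factors as f(r1)·g(r2) on such cells, the 2D Gaussian quadrature is just a product of two 1D rules. A minimal illustration with a 2-point Gauss-Legendre rule (the actual B-spline integrands of the paper are not reproduced here):

```python
# 2-point Gauss-Legendre rule on [-1, 1]; exact for cubic polynomials.
GL_NODES = (-0.5773502691896258, 0.5773502691896258)

def gauss_1d(f, a, b):
    """2-point Gauss-Legendre quadrature of f over [a, b]."""
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * sum(f(mid + half * x) for x in GL_NODES)  # both weights = 1

def off_diagonal_cell(f, g, r1_cell, r2_cell):
    """Integral of f(r1) * g(r2) over a rectangular cell: separability
    reduces the 2D quadrature to a product of two 1D quadratures."""
    return gauss_1d(f, *r1_cell) * gauss_1d(g, *r2_cell)
```

    For example, integrating f(r1) = r1² and g(r2) = r2 over the cell [0, 1] × [0, 2] reduces to (1/3)·2 = 2/3, which the 2-point rule reproduces exactly since both factors are polynomials of degree ≤ 3.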

  9. Numerical Solutions for Convection-Diffusion Equation through Non-Polynomial Spline

    Directory of Open Access Journals (Sweden)

    Ravi Kanth A.S.V.

    2016-01-01

    Full Text Available In this paper, numerical solutions for the convection-diffusion equation via non-polynomial splines are studied. We propose an implicit method based on non-polynomial spline functions for solving the convection-diffusion equation. The method is proven to be unconditionally stable by using the Von Neumann technique. Numerical results are illustrated to demonstrate the efficiency and stability of the proposed method.

  10. Evaluation of various interpolants available in DICE

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reu, Phillip L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Crozier, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This report evaluates several interpolants implemented in the Digital Image Correlation Engine (DICe), an image correlation software package developed by Sandia. By interpolants we refer to the basis functions used to represent discrete pixel intensity data as a continuous signal. Interpolation is used to determine intensity values in an image at non-pixel locations. It is also used, in some cases, to evaluate the x and y gradients of the image intensities. Intensity gradients subsequently guide the optimization process. The goal of this report is to inform analysts as to the characteristics of each interpolant and provide guidance towards the best interpolant for a given dataset. This work also serves as an initial verification of each of the interpolants implemented.
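
    For reference, the simplest such interpolant is bilinear: the intensity at a non-pixel location is a weighted average of the four surrounding pixels. A generic sketch (not necessarily DICe's implementation; valid away from the last row and column):

```python
def bilinear(img, x, y):
    """Intensity at non-pixel location (x, y); img is a list of rows,
    x is the column coordinate and y the row coordinate."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    # the four surrounding pixel intensities
    i00, i10 = img[y0][x0], img[y0][x0 + 1]
    i01, i11 = img[y0 + 1][x0], img[y0 + 1][x0 + 1]
    top = i00 * (1 - dx) + i10 * dx      # interpolate along x, upper row
    bot = i01 * (1 - dx) + i11 * dx      # interpolate along x, lower row
    return top * (1 - dy) + bot * dy     # then along y
```

    Higher-order interpolants replace the linear weights with cubic or spline basis functions, which also yield smoother intensity gradients for the optimization step.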

  11. Multivariate Birkhoff interpolation

    CERN Document Server

    Lorentz, Rudolph A

    1992-01-01

    The subject of this book is Lagrange, Hermite and Birkhoff (lacunary Hermite) interpolation by multivariate algebraic polynomials. It unifies and extends a new algorithmic approach to this subject which was introduced and developed by G.G. Lorentz and the author. One particularly interesting feature of this algorithmic approach is that it obviates the necessity of finding a formula for the Vandermonde determinant of a multivariate interpolation in order to determine its regularity (which formulas are practically unknown anyways) by determining the regularity through simple geometric manipulations in the Euclidean space. Although interpolation is a classical problem, it is surprising how little is known about its basic properties in the multivariate case. The book therefore starts by exploring its fundamental properties and its limitations. The main part of the book is devoted to a complete and detailed elaboration of the new technique. A chapter with an extensive selection of finite elements follows as well a...

  12. Tomographic reconstruction with B-splines surfaces

    International Nuclear Information System (INIS)

    Oliveira, Eric F.; Dantas, Carlos C.; Melo, Silvio B.; Mota, Icaro V.; Lira, Mailson

    2011-01-01

    Algebraic reconstruction techniques when applied to a limited number of data usually suffer from noise caused by the process of correction or by inconsistencies in the data coming from the stochastic process of radioactive emission and oscillation equipment. The post-processing of the reconstructed image with the application of filters can be done to mitigate the presence of noise. In general these processes also attenuate the discontinuities present in edges that distinguish objects or artifacts, causing excessive blurring in the reconstructed image. This paper proposes a built-in noise reduction at the same time that it ensures an adequate smoothness level in the reconstructed surface, representing the unknowns as linear combinations of elements of a piecewise polynomial basis, i.e. a B-splines basis. For that, the algebraic technique ART is modified to accommodate the first, second and third degree bases, ensuring C0, C1 and C2 smoothness levels, respectively. For comparisons, three methodologies are applied: ART, ART post-processed with regular B-splines filters (ART*) and the proposed method with the built-in B-splines filter (BsART). Simulations with input data produced from common mathematical phantoms were conducted. For the phantoms used the BsART method consistently presented the smallest errors among the three methods. This study has shown the superiority of the change made to embed the filter in the ART when compared to the post-filtered ART. (author)

  13. Interpolation theory

    CERN Document Server

    Lunardi, Alessandra

    2018-01-01

    This book is the third edition of the 1999 lecture notes of the courses on interpolation theory that the author delivered at the Scuola Normale in 1998 and 1999. In the mathematical literature there are many good books on the subject, but none of them is very elementary, and in many cases the basic principles are hidden below great generality. In this book the principles of interpolation theory are illustrated aiming at simplification rather than at generality. The abstract theory is reduced as far as possible, and many examples and applications are given, especially to operator theory and to regularity in partial differential equations. Moreover the treatment is self-contained, the only prerequisite being the knowledge of basic functional analysis.

  14. Time-interpolator

    International Nuclear Information System (INIS)

    Blok, M. de; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

    1990-01-01

    This report describes a time-interpolator with which time differences can be measured using digital and analog techniques. It covers a maximum measuring time of 6.4 μs with a resolution of 100 ps. Use is made of Emitter Coupled Logic (ECL) and analog high-frequency techniques. The difficulty which accompanies the use of ECL logic is keeping the mutual connections as short as possible and properly terminating the outputs in order to avoid reflections. The digital part of the time-interpolator consists of a continuously running clock and logic which converts an input signal into a start and a stop signal. The analog part consists of a Time to Amplitude Converter (TAC) and an analog to digital converter. (author). 3 refs.; 30 figs

  15. Evaluation of interpolation methods for surface-based motion compensated tomographic reconstruction for cardiac angiographic C-arm data

    International Nuclear Information System (INIS)

    Müller, Kerstin; Schwemmer, Chris; Hornegger, Joachim; Zheng Yefeng; Wang Yang; Lauritsch, Günter; Rohkohl, Christopher; Maier, Andreas K.; Schultz, Carl; Fahrig, Rebecca

    2013-01-01

    Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. 
Results: The quantitative evaluation of all

  16. Measurement of Intervertebral Cervical Motion by Means of Dynamic X-Ray Image Processing and Data Interpolation

    Directory of Open Access Journals (Sweden)

    Paolo Bifulco

    2013-01-01

    Full Text Available Accurate measurement of intervertebral kinematics of the cervical spine can support the diagnosis of widespread diseases related to neck pain, such as chronic whiplash dysfunction, arthritis, and segmental degeneration. The natural inaccessibility of the spine, its complex anatomy, and the small range of motion only permit concise measurement in vivo. Low dose X-ray fluoroscopy allows time-continuous screening of the cervical spine during the patient’s spontaneous motion. To obtain accurate motion measurements, each vertebra was tracked by means of image processing along a sequence of radiographic images. To obtain a time-continuous representation of motion and to reduce noise in the experimental data, smoothing spline interpolation was used. Estimation of intervertebral motion for cervical segments was obtained by processing the patient’s fluoroscopic sequence; intervertebral angle and displacement and the instantaneous centre of rotation were computed. The RMS value of the fitting errors was about 0.2 degrees for rotation and 0.2 mm for displacements.
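
    A smoothing spline differs from an interpolating spline in that it trades fidelity to the noisy tracked positions against curvature of the fit. A discrete analogue of that trade-off is a Whittaker-style smoother penalizing squared second differences (an illustration of the principle, not the routine used in the study):

```python
def smooth(y, lam=10.0):
    """Minimize sum (y_i - f_i)^2 + lam * sum (f_{i-1} - 2 f_i + f_{i+1})^2,
    a discrete analogue of a smoothing spline.  Solves (I + lam D'D) f = y
    by naive Gaussian elimination (fine for short sequences)."""
    n = len(y)
    # build A = I + lam * D'D, where D is the second-difference operator
    A = [[float(i == j) for j in range(n)] for i in range(n)]
    for i in range(1, n - 1):
        row = [0.0] * n
        row[i - 1], row[i], row[i + 1] = 1.0, -2.0, 1.0
        for j in range(n):
            for k in range(n):
                A[j][k] += lam * row[j] * row[k]
    b = list(y)
    # Gaussian elimination with partial pivoting, then back substitution
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            fac = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= fac * A[col][k]
            b[r] -= fac * b[col]
    f = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][k] * f[k] for k in range(i + 1, n))
        f[i] = s / A[i][i]
    return f
```

    Data that are already smooth (e.g. exactly linear) pass through unchanged, while noisy samples are pulled toward a low-curvature trajectory, which is the behavior wanted for time-continuous vertebral motion.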

  17. Measurement of intervertebral cervical motion by means of dynamic x-ray image processing and data interpolation.

    Science.gov (United States)

    Bifulco, Paolo; Cesarelli, Mario; Romano, Maria; Fratini, Antonio; Sansone, Mario

    2013-01-01

    Accurate measurement of intervertebral kinematics of the cervical spine can support the diagnosis of widespread diseases related to neck pain, such as chronic whiplash dysfunction, arthritis, and segmental degeneration. The natural inaccessibility of the spine, its complex anatomy, and the small range of motion only permit concise measurement in vivo. Low dose X-ray fluoroscopy allows time-continuous screening of the cervical spine during the patient's spontaneous motion. To obtain accurate motion measurements, each vertebra was tracked by means of image processing along a sequence of radiographic images. To obtain a time-continuous representation of motion and to reduce noise in the experimental data, smoothing spline interpolation was used. Estimation of intervertebral motion for cervical segments was obtained by processing the patient's fluoroscopic sequence; intervertebral angle and displacement and the instantaneous centre of rotation were computed. The RMS value of the fitting errors was about 0.2 degrees for rotation and 0.2 mm for displacements.

  18. A disposition of interpolation techniques

    NARCIS (Netherlands)

    Knotters, M.; Heuvelink, G.B.M.

    2010-01-01

    A large collection of interpolation techniques is available for application in environmental research. To help environmental scientists in choosing an appropriate technique a disposition is made, based on 1) applicability in space, time and space-time, 2) quantification of accuracy of interpolated

  19. Adaptive estimation of multivariate functions using conditionally Gaussian tensor-product spline priors

    NARCIS (Netherlands)

    Jonge, de R.; Zanten, van J.H.

    2012-01-01

    We investigate posterior contraction rates for priors on multivariate functions that are constructed using tensor-product B-spline expansions. We prove that using a hierarchical prior with an appropriate prior distribution on the partition size and Gaussian prior weights on the B-spline

  20. Node insertion in Coalescence Fractal Interpolation Function

    International Nuclear Information System (INIS)

    Prasad, Srijanani Anurag

    2013-01-01

    The Iterated Function System (IFS) used in the construction of Coalescence Hidden-variable Fractal Interpolation Function (CHFIF) depends on the interpolation data. The insertion of a new point in a given set of interpolation data is called the problem of node insertion. In this paper, the effect of insertion of new point on the related IFS and the Coalescence Fractal Interpolation Function is studied. Smoothness and Fractal Dimension of a CHFIF obtained with a node are also discussed

  1. On convexity and Schoenberg's variation diminishing splines

    International Nuclear Information System (INIS)

    Feng, Yuyu; Kozak, J.

    1992-11-01

    In the paper we characterize a convex function by the monotonicity of a particular variation diminishing spline sequence. The result extends the property known for the Bernstein polynomial sequence. (author). 4 refs

  2. Germanium thermometers in the temperature range 0.100 °K to 4.2 °K

    International Nuclear Information System (INIS)

    Hsieh, S.Y.; Sanchez, D.H.

    1974-01-01

    The sensitivity characteristics of two germanium thermometers that proved to be convenient sensors in the temperature range from 0.100 °K to 4.2 °K are described. Their resistances change from about 8 × 10⁵ ohms at 0.100 °K to about 100 ohms at 4.2 °K. The calibration curves were fitted to natural spline functions of order 3 in the whole range of temperatures. These functions give less than half a millidegree of standard dispersion, against 15 millidegrees of standard dispersion when usual polynomial interpolations are used. What spline functions are is discussed, and the goodness of spline interpolation is compared with that of polynomial methods.
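
    A natural spline of order 3 is a piecewise cubic with continuous first and second derivatives and zero second derivative at the end knots; unlike a single high-degree polynomial, it does not oscillate between calibration points. A compact, self-contained fit-and-evaluate implementation (a generic sketch; the thermometer calibration data are not reproduced here):

```python
def natural_spline(xs, ys):
    """Fit a natural cubic spline (zero second derivative at both end
    knots) through (xs, ys); xs strictly increasing.  Returns a callable."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    # tridiagonal system for the knot second derivatives M (M[0]=M[n-1]=0)
    sub, diag, sup, rhs = [0.0] * n, [1.0] * n, [0.0] * n, [0.0] * n
    for i in range(1, n - 1):
        sub[i], diag[i], sup[i] = h[i - 1], 2.0 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i]
                        - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n):  # Thomas algorithm, forward sweep
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    M = [0.0] * n
    for i in range(n - 2, 0, -1):  # back substitution (end values stay zero)
        M[i] = (rhs[i] - sup[i] * M[i + 1]) / diag[i]

    def s(x):
        i = n - 2
        for j in range(n - 1):  # locate the knot interval containing x
            if x <= xs[j + 1]:
                i = j
                break
        a = (xs[i + 1] - x) / h[i]
        b = (x - xs[i]) / h[i]
        return (a * ys[i] + b * ys[i + 1]
                + ((a ** 3 - a) * M[i] + (b ** 3 - b) * M[i + 1])
                * h[i] ** 2 / 6.0)

    return s
```

    A calibration table of (temperature, resistance) pairs would be passed as (xs, ys); the returned callable then gives a smooth R(T) with no inter-knot oscillation.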

  3. IMPEDANCE METHOD OF MEASURING OF THE TITRATABLE ACIDITY OF YOGURT

    Directory of Open Access Journals (Sweden)

    Miroslav Vasilev

    2016-10-01

    Full Text Available In the present work, studies related to changes in the active impedance component of the dairy environment caused by the flow of lactic fermentation and the coagulation of casein in milk are analyzed. The aim of this work was to determine the relationship between the relative change of titratable acidity and the relative change of the active impedance component of the dairy environment during lactic fermentation, which causes coagulation of the casein in milk. The data were interpolated with a cubic spline, showing that as the fat content increases, the electrical resistance increases as well. All data collected during the tests will complement and be used in solving the optimization problem to determine the time of completion of coagulation in future work.

  4. Moving magnets in a micromagnetic finite-difference framework

    Science.gov (United States)

    Rissanen, Ilari; Laurson, Lasse

    2018-05-01

    We present a method and an implementation for smooth linear motion in a finite-difference-based micromagnetic simulation code, to be used in simulating magnetic friction and other phenomena involving moving microscale magnets. Our aim is to accurately simulate the magnetization dynamics and relative motion of magnets while retaining high computational speed. To this end, we combine techniques for fast scalar potential calculation and cubic B-spline interpolation, parallelizing them on a graphics processing unit (GPU). The implementation also includes the possibility of explicitly simulating eddy currents in the case of conducting magnets. We test our implementation by providing numerical examples of stick-slip motion of thin films pulled by a spring and the effect of eddy currents on the switching time of magnetic nanocubes.

  5. COMPARISONS BETWEEN DIFFERENT INTERPOLATION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    G. Garnero

    2014-01-01

    In the present study, different algorithms are analysed in order to identify an optimal interpolation methodology. The availability of the recent digital model produced by the Regione Piemonte with airborne LIDAR, together with test sections realized at higher resolutions and independent digital models of the same territory, allows a series of analyses to be set up, with consequent determination of the best interpolation methodologies. The analysis of the residuals on the test sites allows the descriptive statistics of the computed values to be calculated: all the algorithms furnished interesting results; most notably, for dense models, the IDW (Inverse Distance Weighting) algorithm gives the best results in this study case. Moreover, a comparative analysis was carried out by interpolating data at different input point densities, with the purpose of highlighting thresholds in input density that may influence the quality reduction of the final output in the interpolation phase.
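The IDW scheme that performed best in this study is simple to state: the value at a query point is the average of the sampled values, weighted by inverse distance raised to a power. The sketch below is a generic 2D implementation (the point set and power parameter are hypothetical, not taken from the study):

```python
import math

def idw(points, values, query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: estimate the value at `query` as the
    distance-weighted mean of the sampled values. Weights are 1/d**power."""
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d = math.hypot(query[0] - px, query[1] - py)
        if d < eps:          # query coincides with a sample: return it exactly
            return v
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den
```

IDW is exact at the data points and always stays within the range of the sampled values, which is one reason it behaves well on dense elevation models like those compared here.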

  6. Research progress and hotspot analysis of spatial interpolation

    Science.gov (United States)

    Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

    2018-02-01

    In this paper, the literature related to spatial interpolation published between 1982 and 2017 and included in the Web of Science core database is used as the data source, and a visualization analysis is carried out based on the co-country network, co-category network, co-citation network and keyword co-occurrence network. It is found that spatial interpolation has experienced three stages: slow development, steady development and rapid development. Cross effects exist among 11 clustering groups, which converge mainly on spatial interpolation theory research, the practical application and case studies of spatial interpolation, and research on the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research system framework; it is strongly interdisciplinary and is widely used in various fields.

  7. Scripted Bodies and Spline Driven Animation

    DEFF Research Database (Denmark)

    Erleben, Kenny; Henriksen, Knud

    2002-01-01

    In this paper we will take a close look at the details and technicalities in applying spline driven animation to scripted bodies in the context of dynamic simulation. The main contributions presented in this paper are methods for computing velocities and accelerations in the time domain...

  8. Distance-two interpolation for parallel algebraic multigrid

    International Nuclear Information System (INIS)

    Sterck, H de; Falgout, R D; Nolting, J W; Yang, U M

    2007-01-01

    In this paper we study the use of long distance interpolation methods with the low complexity coarsening algorithm PMIS. AMG performance and scalability are compared for classical as well as long distance interpolation methods on parallel computers. It is shown that the increased interpolation accuracy largely restores the scalability of AMG convergence factors for PMIS-coarsened grids, and that, in combination with complexity reducing methods such as interpolation truncation, one obtains a class of parallel AMG methods that enjoy excellent scalability properties on large parallel computers.

  9. Numerical Feynman integrals with physically inspired interpolation: Faster convergence and significant reduction of computational cost

    Directory of Open Access Journals (Sweden)

    Nikesh S. Dattani

    2012-03-01

    Full Text Available One of the most successful methods for calculating reduced density operator dynamics in open quantum systems, one that can give numerically exact results, uses Feynman integrals. However, when simulating the dynamics for a given amount of time, the number of time steps that can realistically be used with this method is always limited; therefore one often obtains an approximation of the reduced density operator at a sparse grid of points in time. Instead of relying only on ad hoc interpolation methods (such as splines) to estimate the system density operator in between these points, I propose a method that uses physical information to assist with this interpolation. This method is tested on a physically significant system, on which its use allows important qualitative features of the density operator dynamics to be captured with as little as two time steps in the Feynman integral. This method allows for an enormous reduction in the amount of memory and CPU time required for approximating density operator dynamics within a desired accuracy. Since this method does not change the way the Feynman integral itself is calculated, the value of the density operator approximation at the points in time used to discretize the Feynman integral will be the same whether or not this method is used, but its approximation in between these points in time is considerably improved by this method. A list of ways in which this proposed method can be further improved is presented in the last section of the article.

  10. “Plug-and-Play” potentials: Investigating quantum effects in (H{sub 2}){sub 2}–Li{sup +}–benzene

    Energy Technology Data Exchange (ETDEWEB)

    D’Arcy, Jordan H.; Kolmann, Stephen J.; Jordan, Meredith J. T. [School of Chemistry, The University of Sydney, Sydney (Australia)

    2015-08-21

    Quantum and anharmonic effects are investigated in (H{sub 2}){sub 2}–Li{sup +}–benzene, a model for hydrogen adsorption in metal-organic frameworks and carbon-based materials, using rigid-body diffusion Monte Carlo (RBDMC) simulations. The potential-energy surface (PES) is calculated as a modified Shepard interpolation of M05-2X/6-311+G(2df,p) electronic structure data. The RBDMC simulations yield zero-point energies (ZPE) and probability density histograms that describe the ground-state nuclear wavefunction. Binding a second H{sub 2} molecule to the H{sub 2}–Li{sup +}–benzene complex increases the ZPE of the system by 5.6 kJ mol{sup −1} to 17.6 kJ mol{sup −1}. This ZPE is 42% of the total electronic binding energy of (H{sub 2}){sub 2}–Li{sup +}–benzene and cannot be neglected. Our best estimate of the 0 K binding enthalpy of the second H{sub 2} to H{sub 2}–Li{sup +}–benzene is 7.7 kJ mol{sup −1}, compared to 12.4 kJ mol{sup −1} for the first H{sub 2} molecule. Anharmonicity is found to be even more important when a second (and subsequent) H{sub 2} molecule is adsorbed; use of harmonic ZPEs results in significant error in the 0 K binding enthalpy. Probability density histograms reveal that the two H{sub 2} molecules are found at larger distance from the Li{sup +} ion and are more confined in the θ coordinate than in H{sub 2}–Li{sup +}–benzene. They also show that both H{sub 2} molecules are delocalized in the azimuthal coordinate, ϕ. That is, adding a second H{sub 2} molecule is insufficient to localize the wavefunction in ϕ. Two fragment-based (H{sub 2}){sub 2}–Li{sup +}–benzene PESs are developed. These use a modified Shepard interpolation for the Li{sup +}–benzene and H{sub 2}–Li{sup +}–benzene fragments, and either modified Shepard interpolation or a cubic spline to model the H{sub 2}–H{sub 2} interaction. Because of the neglect of three-body H{sub 2}, H{sub 2}, Li{sup +} terms, both fragment PESs lead to overbinding

  11. Development of quadrilateral spline thin plate elements using the B-net method

    Science.gov (United States)

    Chen, Juan; Li, Chong-Jun

    2013-08-01

    The quadrilateral discrete Kirchhoff thin plate bending element DKQ is based on the isoparametric element Q8; however, the accuracy of the isoparametric quadrilateral elements drops significantly due to mesh distortions. In a previous work, we constructed an 8-node quadrilateral spline element L8 using the triangular area coordinates and the B-net method, which is insensitive to mesh distortions and possesses second-order completeness in the Cartesian coordinates. In this paper, a thin plate spline element is developed based on the spline element L8 and the refined technique. Numerical examples show that the present element indeed possesses higher accuracy than the DKQ element for distorted meshes.

  12. Interpolative Boolean Networks

    Directory of Open Access Journals (Sweden)

    Vladimir Dobrić

    2017-01-01

    Full Text Available Boolean networks are used for modeling and analysis of complex systems of interacting entities. Classical Boolean networks are binary and are relevant for modeling systems with complex switch-like causal interactions. More descriptive power can be provided by introducing gradation into this model. If this is accomplished by using conventional fuzzy logics, the generalized model cannot secure the Boolean frame; consequently, the validity of the model's dynamics is not secured. The aim of this paper is to present the Boolean-consistent generalization of Boolean networks: interpolative Boolean networks (IBNs). The generalization is based on interpolative Boolean algebra, the [0,1]-valued realization of Boolean algebra. The proposed model is adaptive with respect to the nature of the input variables and offers greater descriptive power than traditional models. For illustrative purposes, the IBN is compared to models based on existing real-valued approaches. Due to the complexity of most of the systems to be analyzed and the characteristics of interpolative Boolean algebra, software support is developed to provide graphical and numerical tools for complex system modeling and analysis.

  13. Linear Invariant Tensor Interpolation Applied to Cardiac Diffusion Tensor MRI

    Science.gov (United States)

    Gahm, Jin Kyu; Wisniewski, Nicholas; Kindlmann, Gordon; Kung, Geoffrey L.; Klug, William S.; Garfinkel, Alan; Ennis, Daniel B.

    2015-01-01

    Purpose Various methods exist for interpolating diffusion tensor fields, but none of them linearly interpolate tensor shape attributes. Linear interpolation is expected not to introduce spurious changes in tensor shape. Methods Herein we define a new linear invariant (LI) tensor interpolation method that linearly interpolates components of tensor shape (tensor invariants) and recapitulates the interpolated tensor from the linearly interpolated tensor invariants and the eigenvectors of a linearly interpolated tensor. The LI tensor interpolation method is compared to the Euclidean (EU), affine-invariant Riemannian (AI), log-Euclidean (LE) and geodesic-loxodrome (GL) interpolation methods using both a synthetic tensor field and three experimentally measured cardiac DT-MRI datasets. Results EU, AI, and LE introduce significant microstructural bias, which can be avoided through the use of GL or LI. Conclusion GL introduces the least microstructural bias, but LI tensor interpolation performs very similarly and at substantially reduced computational cost. PMID:23286085
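The core idea of the LI method, interpolating tensor shape (invariants) linearly while handling orientation separately, can be illustrated with a deliberately simplified sketch. The code below linearly interpolates eigenvalues and re-orthonormalizes a blended eigenvector frame; it is an illustrative simplification of the idea, not the paper's exact recapitulation scheme, and all names are assumptions.

```python
import numpy as np

def li_interpolate(T0, T1, t):
    """Sketch of linear-invariant-style tensor interpolation: interpolate the
    eigenvalues (tensor shape) linearly and recombine them with an
    orthonormalized, linearly interpolated eigenvector frame."""
    w0, V0 = np.linalg.eigh(T0)
    w1, V1 = np.linalg.eigh(T1)
    # Align eigenvector signs so the blended frame does not cancel.
    for k in range(V0.shape[1]):
        if np.dot(V0[:, k], V1[:, k]) < 0:
            V1[:, k] = -V1[:, k]
    w = (1 - t) * w0 + t * w1                    # linear in the eigenvalues
    V, _ = np.linalg.qr((1 - t) * V0 + t * V1)   # re-orthonormalize the frame
    return V @ np.diag(w) @ V.T
```

Because the eigenvalues are blended directly, shape attributes change linearly along the interpolation path, which is the property the EU, AI and LE methods lack.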

  14. Mechanized evaluation of neutron cross-sections

    International Nuclear Information System (INIS)

    Horsley, A.; Parker, J.B.

    1967-01-01

    The evaluation work to provide accurate and consistent neutron cross-section data for multigroup neutronics calculations is not fully exploiting the available theoretical and experimental results; this has been so particularly since the introduction of on-line data handling techniques enabled experimenters to turn out vast quantities of numbers. This situation can be radically improved only by mechanizing the evaluation processes. Systems such as the SCISRS tape will not only largely overcome the task of collecting data but will also provide speedy access to it; by using computers and graph-plotting machines to tabulate and display these data, the labour of evaluation can be very greatly reduced. With some types of cross-section there is hope that, by using modern curve-fitting techniques, the actual evaluation and statistical accounting of the data can be performed automatically. Some areas where automatic evaluation would seem likely to succeed are specified, and a discussion of the mathematical difficulties incurred, such as the elimination of anomalous data, is given. Particularly promising is the use of splines in the mechanized evaluation of data. Splines are the mathematical analogues of the draughtsman's spline used in drawing smooth curves. Their principal property is the excellent approximations they give to the derivatives of a function; in contrast to conventional polynomial fitting, this feature ensures good interpolation and, when required, stable extrapolation. Various methods of using splines in data graduation and the problem of marrying these methods to standard statistical procedures are examined. The results of work done at AWRE with cubic splines on the mechanized evaluation of neutron scattering total cross-section and angular distribution data are presented. (author)

  15. Comparison of the THYC and FLICA-3M codes by the pseudo-cubic thin-plate method; Comparaison par la méthode des plaques de prédicteurs de flux critique obtenus à l'aide des codes THYC et FLICA-3

    Energy Technology Data Exchange (ETDEWEB)

    Banner, D; Crecy, F de

    1993-06-01

    The pseudo-cubic spline method (PCSM) is a statistical tool developed by the CEA. It is designed to analyse experimental points, in particular thermal-hydraulic data. Predictors of the occurrence of critical heat flux (CHF) are obtained by using spline functions. In this paper, predictors have been computed from the same CHF databases by using two different flow analyses to derive local thermal-hydraulic variables at the CHF location. In fact, CEA's FLICA-3M represents rod bundles by interconnected subchannels whereas EDF's THYC code uses a porous 3D approach. In a first step, the PCSM is briefly presented as well as the two codes studied here. Then, the comparison methodology is explained in order to prove that advanced analysis of thermal-hydraulic codes can be achieved with the PCSM. (authors). 6 figs., 2 tabs., 5 refs.

  16. Error and symmetry analysis of Misner's algorithm for spherical harmonic decomposition on a cubic grid

    International Nuclear Information System (INIS)

    Fiske, David R

    2006-01-01

    Computing spherical harmonic decompositions is a ubiquitous technique that arises in a wide variety of disciplines and a large number of scientific codes. Because spherical harmonics are defined by integrals over spheres, however, one must perform some sort of interpolation in order to compute them when data are stored on a cubic lattice. Misner (2004 Class. Quantum Grav. 21 S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid, which has been found in real applications to be both efficient and robust to the presence of mesh refinement boundaries. At the same time, however, practical applications of the algorithm require knowledge of how the truncation errors of the algorithm depend on the various parameters in the algorithm. Based on analytic arguments and experience using the algorithm in real numerical simulations, I explore these dependences and provide a rule of thumb for choosing the parameters based on the truncation errors of the underlying data. I also demonstrate that symmetries in the spherical harmonics themselves allow for an even more efficient implementation of the algorithm than was suggested by Misner in his original paper.

  17. An Improved Rotary Interpolation Based on FPGA

    Directory of Open Access Journals (Sweden)

    Mingyu Gao

    2014-08-01

    Full Text Available This paper presents an improved rotary interpolation algorithm, which consists of a standard curve interpolation module and a rotary process module. Compared to conventional rotary interpolation algorithms, the proposed algorithm is simpler and more efficient. The proposed algorithm was realized on an FPGA in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe, using a rotary ellipse and a rotary parabola as examples. According to the theoretical analysis and practical process validation, the algorithm has the following advantages: firstly, fewer arithmetic items are conducive to the interpolation operation; secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, which shows that it is highly suited for real-time applications.

  18. Evaluation of interpolation methods for surface-based motion compensated tomographic reconstruction for cardiac angiographic C-arm data

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim [Pattern Recognition Lab, Department of Computer Science, Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander-Universitaet Erlangen-Nuernberg, Erlangen 91058 (Germany); Zheng Yefeng; Wang Yang [Imaging and Computer Vision, Siemens Corporate Research, Princeton, New Jersey 08540 (United States); Lauritsch, Guenter; Rohkohl, Christopher; Maier, Andreas K. [Siemens AG, Healthcare Sector, Forchheim 91301 (Germany); Schultz, Carl [Thoraxcenter, Erasmus MC, Rotterdam 3000 (Netherlands); Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States)

    2013-03-15

    Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. Results: The quantitative evaluation of
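One of the interpolants compared in this record, the thin-plate spline, densifies a sparse field by solving a small linear system for kernel weights plus an affine part. The sketch below interpolates one scalar component of a sparse motion vector field in 2D with the standard U(r) = r² log r kernel; it is a generic TPS sketch in plain numpy, not the evaluated implementation, and the control points are hypothetical.

```python
import numpy as np

def tps_interpolate(ctrl, vals, query):
    """Thin-plate spline interpolation in 2D: fit U(r) = r^2 log r kernel
    weights plus an affine part (1, x, y) to scattered control-point values,
    then evaluate at the query points."""
    ctrl = np.asarray(ctrl, float)
    vals = np.asarray(vals, float)
    n = len(ctrl)

    def U(r):
        with np.errstate(divide="ignore", invalid="ignore"):
            out = r * r * np.log(r)
        return np.where(r > 0, out, 0.0)   # U(0) = 0 by convention

    d = np.linalg.norm(ctrl[:, None, :] - ctrl[None, :, :], axis=-1)
    K = U(d)
    P = np.hstack([np.ones((n, 1)), ctrl])           # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.concatenate([vals, np.zeros(3)])
    coef = np.linalg.solve(A, rhs)
    w, a = coef[:n], coef[n:]

    q = np.asarray(query, float)
    dq = np.linalg.norm(q[:, None, :] - ctrl[None, :, :], axis=-1)
    return U(dq) @ w + a[0] + q @ a[1:]
```

In the paper's setting, each component of the dense MVF would be interpolated this way from the surface-model control points; the affine part guarantees that globally linear motion is reproduced exactly.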

  19. Thermal Simulation of the Fresh Food Compartment in a Domestic Refrigerator

    Directory of Open Access Journals (Sweden)

    Juan M. Belman-Flores

    2017-01-01

    Full Text Available In the field of domestic refrigeration, it is important to look for methods that can be used to simulate, and, thus, improve the thermal behavior of the fresh food compartment. In this sense, this study proposes some methods to model the thermal behavior of this compartment when the shelves’ positions are changed. Temperature measurements at specific locations in this compartment were obtained. Several shelf position combinations were performed to use three 2D interpolation methods in order to simulate the temperature mean and the temperature variance. The methods used were: Lagrange’s interpolation, cubic spline interpolation and bilinear interpolation. Two validation points were chosen to verify the proposed methods. By comparing the experimental results with the computer simulations, it was possible to conclude that the method of Lagrange’s interpolation provided values that were not close to the real measured values. On the other hand, it was observed that the method of bilinear interpolation offered the best results, estimating values which were very close to the actual experimental measurements. These interpolation methods were used to build color thermal graphs that can be used to find some of the most appropriate shelf position combinations in this type of refrigerator. By inspection of these thermal graphs, it can be seen that the lowest average temperature was obtained when one shelf was located at 24.5 cm while the second shelf was located at 29.5 cm measured from the top of the compartment. In the same way, it can be seen that the minimum temperature variance was obtained when only one shelf was inside the compartment and this shelf was located at 29.5 cm.
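Of the three schemes compared, bilinear interpolation (the one that performed best here) is the simplest to state: blend the four corner values of a rectangle linearly in each direction. A minimal sketch, with hypothetical corner temperatures rather than the study's measurements:

```python
def bilinear(x, y, x0, x1, y0, y1, f00, f10, f01, f11):
    """Bilinear interpolation on the rectangle [x0, x1] x [y0, y1] from the
    four corner values (f00 = f(x0, y0), f10 = f(x1, y0), etc.)."""
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    return (f00 * (1 - tx) * (1 - ty) + f10 * tx * (1 - ty)
            + f01 * (1 - tx) * ty + f11 * tx * ty)

# Hypothetical corner temperatures (degrees C) of one cell of the measurement
# grid; the center estimate is simply the mean of the four corners.
t_center = bilinear(0.5, 0.5, 0.0, 1.0, 0.0, 1.0, 4.0, 5.0, 6.0, 7.0)
```

Applied cell by cell over the measurement grid, this is what produces the color thermal maps described above.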

  20. Splines and variational methods

    CERN Document Server

    Prenter, P M

    2008-01-01

    One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimensions

  1. An isogeometric boundary element method for electromagnetic scattering with compatible B-spline discretizations

    Science.gov (United States)

    Simpson, R. N.; Liu, Z.; Vázquez, R.; Evans, J. A.

    2018-06-01

    We outline the construction of compatible B-splines on 3D surfaces that satisfy the continuity requirements for electromagnetic scattering analysis with the boundary element method (method of moments). Our approach makes use of Non-Uniform Rational B-splines to represent model geometry and compatible B-splines to approximate the surface current, and adopts the isogeometric concept in which the basis for analysis is taken directly from CAD (geometry) data. The approach allows for high-order approximations and crucially provides a direct link with CAD data structures that allows for efficient design workflows. After outlining the construction of div- and curl-conforming B-splines defined over 3D surfaces we describe their use with the electric and magnetic field integral equations using a Galerkin formulation. We use Bézier extraction to accelerate the computation of NURBS and B-spline terms and employ H-matrices to provide accelerated computations and memory reduction for the dense matrices that result from the boundary integral discretization. The method is verified using the well known Mie scattering problem posed over a perfectly electrically conducting sphere and the classic NASA almond problem. Finally, we demonstrate the ability of the approach to handle models with complex geometry directly from CAD without mesh generation.

  2. Convergence of trajectories in fractal interpolation of stochastic processes

    International Nuclear Information System (INIS)

    Małysz, Robert

    2006-01-01

    The notion of fractal interpolation functions (FIFs) can be applied to stochastic processes. Such a construction is especially useful for the class of α-self-similar processes with stationary increments and for the class of α-fractional Brownian motions. For these classes, convergence of the Minkowski dimension of the graphs in fractal interpolation to the Hausdorff dimension of the graph of the original process was studied in [Herburt I, Małysz R. On convergence of box dimensions of fractal interpolation stochastic processes. Demonstratio Math 2000;4:873-88.], [Małysz R. A generalization of fractal interpolation stochastic processes to higher dimension. Fractals 2001;9:415-28.], and [Herburt I. Box dimension of interpolations of self-similar processes with stationary increments. Probab Math Statist 2001;21:171-8.]. We prove that trajectories of fractal interpolation stochastic processes converge to the trajectory of the original process. We also show that convergence of the trajectories in fractal interpolation of stochastic processes is equivalent to the convergence of trajectories in linear interpolation.

  3. 3D flyable curves for an autonomous aircraft

    Science.gov (United States)

    Bestaoui, Yasmina

    2012-11-01

    The process of conducting a mission for an autonomous aircraft includes determining the set of waypoints (flight planning) and the path for the aircraft to fly (path planning). The autonomous aircraft is an under-actuated system, having fewer control inputs than degrees of freedom, and has two nonholonomic (non-integrable) kinematic constraints. Consequently, the set of feasible trajectories is restricted and the problem of trajectory generation becomes more complicated than a simple interpolation. Care must be taken in the selection of the basic primitives to respect the kinematic and dynamic limitations. The topic of this paper is trajectory generation using parametric curves. The problem can be formulated as follows: to lead the autonomous aircraft from an initial configuration qi to a final configuration qf in the absence of obstacles, find a trajectory q(t) for 0 ≤ t ≤ T. The trajectory can be broken down into a geometric path q(s), s being the curvilinear abscissa, and s = s(t) a temporal function. In 2D the curves fall into two categories:
    • Curves whose coordinates have closed-form expressions, for example B-splines, quintic polynomials or polar splines.
    • Curves whose curvature is a function of their arc length, for example clothoids, cubic spirals, quintic or intrinsic splines.
    Some 3D solutions will be presented in this paper and their effectiveness discussed with respect to the problem at hand.

  4. A new 3-D ray tracing method based on LTI using successive partitioning of cell interfaces and traveltime gradients

    Science.gov (United States)

    Zhang, Dong; Zhang, Ting-Ting; Zhang, Xiao-Lei; Yang, Yan; Hu, Ying; Qin, Qian-Qing

    2013-05-01

    We present a new method of three-dimensional (3-D) seismic ray tracing, based on an improvement to the linear traveltime interpolation (LTI) ray tracing algorithm. This new technique involves two separate steps. The first involves a forward calculation based on the LTI method and the dynamic successive partitioning scheme, which is applied to calculate traveltimes on cell boundaries and assumes a wavefront that expands from the source to all grid nodes in the computational domain. We locate several dynamic successive partition points on a cell's surface, the traveltimes of which can be calculated by linear interpolation between the vertices of the cell's boundary. The second is a backward step that uses Fermat's principle and the fact that the ray path is always perpendicular to the wavefront and follows the negative traveltime gradient. In this process, the first-arriving ray path can be traced from the receiver to the source along the negative traveltime gradient, which can be calculated by reconstructing the continuous traveltime field with cubic B-spline interpolation. This new 3-D ray tracing method is compared with the LTI method and the shortest path method (SPM) through a number of numerical experiments. These comparisons show obvious improvements to computed traveltimes and ray paths, both in precision and computational efficiency.
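The backward step of the algorithm, marching from the receiver along the negative traveltime gradient until the source is reached, can be illustrated with a toy example. In the paper the gradient comes from a cubic B-spline reconstruction of the gridded traveltime field; the sketch below instead differentiates an analytic traveltime function (homogeneous medium, unit velocity) by central differences to keep the example short. All names and parameter values are illustrative.

```python
import math

def trace_ray(traveltime, start, step=0.05, n_steps=500, h=1e-4):
    """Backward ray-tracing sketch: march from the receiver along the
    negative traveltime gradient (estimated by central differences)
    until a stationary point -- the source -- is reached."""
    x, y = start
    path = [(x, y)]
    for _ in range(n_steps):
        gx = (traveltime(x + h, y) - traveltime(x - h, y)) / (2 * h)
        gy = (traveltime(x, y + h) - traveltime(x, y - h)) / (2 * h)
        norm = math.hypot(gx, gy)
        if norm < 1e-9:                 # gradient vanishes at the source
            break
        x, y = x - step * gx / norm, y - step * gy / norm
        path.append((x, y))
    return path

# Homogeneous medium: traveltime from a source at the origin, unit velocity.
tt = lambda x, y: math.hypot(x, y)
path = trace_ray(tt, start=(3.0, 4.0))
```

In the homogeneous case the traced path is the straight ray from receiver to source; with a heterogeneous, spline-reconstructed field the same marching rule bends the path along the first-arrival ray.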

  5. Modeling terminal ballistics using blending-type spline surfaces

    Science.gov (United States)

    Pedersen, Aleksander; Bratlie, Jostein; Dalmo, Rune

    2014-12-01

    We explore using GERBS, a blending-type spline construction, to represent deformable thin plates and model terminal ballistics. Strategies to construct geometry for different scenarios of terminal ballistics are proposed.

  6. Shape-based interpolation of multidimensional grey-level images

    International Nuclear Information System (INIS)

    Grevera, G.J.; Udupa, J.K.

    1996-01-01

    Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance transformed data set is then interpolated using linear or higher-order interpolation and is then thresholded at a distance value of zero to produce the interpolated binary data set. In this paper, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n + 1)-dimensional [(n + 1)-D] space. The binary shape-based method is then applied to this image to create an (n + 1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation
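The binary pipeline described above, signed distance transform, interpolate the distance maps, threshold at zero, can be sketched directly. The brute-force distance transform below is only adequate for tiny grids and stands in for the efficient transforms used in practice; the disk example is illustrative, not from the paper.

```python
import math

def signed_distance(mask):
    """Brute-force signed distance map: positive inside the object, negative
    outside; magnitude is the distance to the nearest opposite-class pixel."""
    h, w = len(mask), len(mask[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            opposite = [math.hypot(i - a, j - b)
                        for a in range(h) for b in range(w)
                        if mask[a][b] != mask[i][j]]
            d = min(opposite) if opposite else float("inf")
            out[i][j] = d if mask[i][j] else -d
    return out

def shape_based_slice(mask0, mask1, t=0.5):
    """Shape-based interpolation of two binary slices: distance-transform
    both, blend the distance maps linearly, threshold at zero."""
    d0, d1 = signed_distance(mask0), signed_distance(mask1)
    h, w = len(mask0), len(mask0[0])
    return [[(1 - t) * d0[i][j] + t * d1[i][j] >= 0 for j in range(w)]
            for i in range(h)]

# Example: interpolate halfway between a small and a large disk.
disk = lambda r, n=15, c=7: [[math.hypot(i - c, j - c) <= r
                              for j in range(n)] for i in range(n)]
mid = shape_based_slice(disk(2), disk(6), 0.5)
```

The interpolated slice is a disk of intermediate radius, which is exactly the shape-influenced behavior that plain grey-level averaging of the two slices would not produce. The grey-level extension in this record lifts an n-D image to an (n+1)-D binary image and applies the same pipeline.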

  7. Discrete Orthogonal Transforms and Neural Networks for Image Interpolation

    Directory of Open Access Journals (Sweden)

    J. Polec

    1999-09-01

    Full Text Available In this contribution we present transform and neural network approaches to the interpolation of images. From the transform point of view, the principles from [1] are modified for 1st and 2nd order interpolation. We present several new interpolation discrete orthogonal transforms. From the neural network point of view, we present the interpolation possibilities of multilayer perceptrons. We use various configurations of neural networks for 1st and 2nd order interpolation. The results are compared by means of tables.

  8. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show for the cases of truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62 and 54 % increase in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
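The Gibbs step described above draws from a mixture of double-truncated normals. The basic building block, sampling a normal restricted to an interval by inverse-CDF sampling, can be sketched with the standard library alone. This is an illustrative building block under that one assumption, not the authors' sampler, and the parameter values are hypothetical.

```python
import random
from statistics import NormalDist

def sample_truncated_normal(mu, sigma, lo, hi, rng=random):
    """Draw one sample from Normal(mu, sigma) truncated to [lo, hi] by
    inverse-CDF sampling: map a uniform draw on [F(lo), F(hi)] back
    through the normal quantile function."""
    d = NormalDist(mu, sigma)
    u = rng.uniform(d.cdf(lo), d.cdf(hi))
    return d.inv_cdf(u)
```

A full Gibbs sweep would first pick a mixture component and then call a sampler like this with that component's mean, variance and truncation bounds.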

  9. Comparing interpolation schemes in dynamic receive ultrasound beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Andresen, Henrik; Nikolov, Svetoslav

    2005-01-01

    In medical ultrasound, interpolation schemes are often applied in receive focusing for reconstruction of image points. This paper investigates the performance of various interpolation schemes by means of ultrasound simulations of point scatterers in Field II. The investigation includes conventional...... B-mode imaging and synthetic aperture (SA) imaging using a 192-element, 7 MHz linear array transducer with λ pitch as simulation model. The evaluation consists primarily of calculations of the side lobe to main lobe ratio, SLMLR, and the noise power of the interpolation error. When using...... conventional B-mode imaging and linear interpolation, the difference in mean SLMLR is 6.2 dB. With polynomial interpolation the ratio is in the range 6.2 dB to 0.3 dB using 2nd to 5th order polynomials, and with FIR interpolation the ratio is in the range 5.8 dB to 0.1 dB depending on the filter design...
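The trade-off the abstract measures — linear versus higher-order interpolation of sampled RF data — can be reproduced in miniature without Field II. The sketch below compares reconstruction error only (not the paper's SLMLR metric), using an illustrative 1 MHz tone sampled at 8 MHz:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Coarsely sample a 1 MHz tone at 8 MHz (analogous to RF sampling in receive
# beamforming), then reconstruct on an 8x finer grid two ways.
fs = 8e6
t_coarse = np.arange(64) / fs
t_fine = np.arange(63 * 8) / (8 * fs)          # stays inside the coarse grid
sig = lambda t: np.sin(2 * np.pi * 1e6 * t)

lin = np.interp(t_fine, t_coarse, sig(t_coarse))       # linear interpolation
cub = CubicSpline(t_coarse, sig(t_coarse))(t_fine)     # cubic spline

err_lin = np.sqrt(np.mean((lin - sig(t_fine)) ** 2))   # RMS reconstruction error
err_cub = np.sqrt(np.mean((cub - sig(t_fine)) ** 2))
```

As in the paper's SLMLR results, the higher-order scheme reconstructs intermediate sample values far more accurately at the same sampling rate.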

  10. 5-D interpolation with wave-front attributes

    Science.gov (United States)

    Xie, Yujiang; Gajewski, Dirk

    2017-11-01

    Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes encode structural information of subsurface features, like the dip and strike of a reflector. The wave-front attributes work in a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved alongside the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example, with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work, and we call it wave-front-attribute-based 5-D interpolation (5-D WABI), as the two problems mentioned above are addressed. Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals that

  11. High-speed spectral domain optical coherence tomography using non-uniform fast Fourier transform

    Science.gov (United States)

    Chan, Kenny K. H.; Tang, Shuo

    2010-01-01

    The useful imaging range in spectral domain optical coherence tomography (SD-OCT) is often limited by the depth-dependent sensitivity fall-off. Processing SD-OCT data with the non-uniform fast Fourier transform (NFFT) can improve the sensitivity fall-off at maximum depth by greater than 5 dB, concurrently with a 30-fold decrease in processing time compared to the fast Fourier transform with the cubic spline interpolation method. NFFT can also improve the local signal-to-noise ratio (SNR) and reduce image artifacts introduced in post-processing. Combined with parallel processing, NFFT is shown to have the ability to process up to 90k A-lines per second. High-speed SD-OCT imaging is demonstrated at a camera-limited 100 frames per second on an ex-vivo squid eye. PMID:21258551
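The baseline that NFFT is compared against — cubic-spline resampling of the spectrum onto a uniform wavenumber grid followed by an ordinary FFT — can be sketched as follows. The single-reflector fringe and all parameter values are illustrative, not from the paper:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A spectrometer samples uniformly in wavelength, hence nonuniformly in
# wavenumber k = 2*pi/lambda.
wavelength = np.linspace(800e-9, 900e-9, 1024)
k = 2 * np.pi / wavelength            # nonuniform, decreasing
depth = 50e-6                         # toy reflector depth
fringe = np.cos(2 * depth * k)        # single-reflector interference fringe

# Baseline method: cubic-spline resampling onto a uniform k grid, then FFT.
# (CubicSpline needs increasing abscissae, hence the [::-1] reversal.)
k_uniform = np.linspace(k.min(), k.max(), k.size)
fringe_uniform = CubicSpline(k[::-1], fringe[::-1])(k_uniform)
a_line = np.abs(np.fft.rfft(fringe_uniform * np.hanning(k.size)))
```

The reflector shows up as a peak in `a_line` at the depth-proportional frequency; the NFFT approach of the paper replaces the resample-then-FFT pipeline with a single nonuniform transform.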

  12. {sup 3}He detector analysis of some special shielding materials

    Energy Technology Data Exchange (ETDEWEB)

    Avdic, S; Pesic, M [Boris Kidric Institute of Nuclear Sciences, Beograd (Yugoslavia)]; Marinkovic, P [ETF Belgrade Univ. (Yugoslavia)]

    1990-07-01

    The shielding properties of commercial materials of Reactor Experiments, Inc. (R/X) were analyzed at the facility which includes the bare heavy water experimental reactor RB with external neutron converter ENC. The fast neutron spectrum measurements in the energy range from 1 MeV to 10 MeV were performed using an ORTEC semiconductor neutron detector with He{sup 3} in a diode coincidence arrangement. The neutron spectra have been evaluated from the measured pulse-height distribution using the numerical code HE3 for computation of detector efficiency in a collimated neutron beam. The neutron dose rates behind the ENC, with and without a sample of R/X material, were determined using a cubic spline interpolation routine to calculate the corresponding flux-dose rate conversion factors. Both measurements and calculations demonstrate satisfactory shielding properties of the examined material in a fast neutron field. (author)

  13. Cubic metaplectic forms and theta functions

    CERN Document Server

    Proskurin, Nikolai

    1998-01-01

    The book is an introduction to the theory of cubic metaplectic forms on the 3-dimensional hyperbolic space and the author's research on cubic metaplectic forms on special linear and symplectic groups of rank 2. The topics include: Kubota and Bass-Milnor-Serre homomorphisms, cubic metaplectic Eisenstein series, cubic theta functions, and Whittaker functions. A special method is developed and applied to find Fourier coefficients of the Eisenstein series and cubic theta functions. The book is intended for readers with a beginning graduate-level background who are interested in further research in the theory of metaplectic forms and in possible applications.

  14. Interpolation of quasi-Banach spaces

    International Nuclear Information System (INIS)

    Tabacco Vignati, A.M.

    1986-01-01

    This dissertation presents a method of complex interpolation for families of quasi-Banach spaces. This method generalizes the theory for families of Banach spaces introduced by others. Intermediate spaces in several particular cases are characterized using different approaches. The situation when all the spaces have finite dimensions is studied first. The second chapter contains the definitions and main properties of the new interpolation spaces, and an example concerning the Schatten ideals associated with a separable Hilbert space. The case of L^p spaces follows from the maximal operator theory contained in Chapter III. Also introduced is a different method of interpolation for quasi-Banach lattices of functions, and conditions are given to guarantee that the two techniques yield the same result. Finally, the last chapter contains a different, and more direct, approach to the case of Hardy spaces.

  15. Image Interpolation Scheme based on SVM and Improved PSO

    Science.gov (United States)

    Jia, X. F.; Zhao, B. T.; Liu, X. X.; Song, H. P.

    2018-01-01

    In order to obtain visually pleasing images, a support vector machine (SVM) based interpolation scheme is proposed, in which an improved particle swarm optimization is applied to optimize the support vector machine parameters. Training samples are constructed from the pixels around the pixel to be interpolated. The support vector machine with optimal parameters is then trained using these samples. After training, we obtain the interpolation model, which can be employed to estimate the unknown pixel. Experimental results show that the interpolated images achieve improved PSNR compared with traditional interpolation methods, in agreement with their subjective quality.

  16. Estimation of Covariance Matrix on Bi-Response Longitudinal Data Analysis with Penalized Spline Regression

    Science.gov (United States)

    Islamiyati, A.; Fatmawati; Chamidah, N.

    2018-03-01

    In bi-response longitudinal data, correlation arises between measurements on the subjects of observation and between the responses. This induces autocorrelated errors, which can be handled with a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. A penalized spline involves knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with a covariance matrix gives a smaller error value compared to the error of the model without a covariance matrix.

  17. A smoothing spline that approximates Laplace transform functions only known on measurements on the real axis

    International Nuclear Information System (INIS)

    D’Amore, L; Campagna, R; Murli, A; Galletti, A; Marcellino, L

    2012-01-01

    The scientific and application-oriented interest in the Laplace transform and its inversion is attested by more than 1000 publications in the last century. Most of the inversion algorithms available in the literature assume that the Laplace transform function is available everywhere. Unfortunately, such an assumption is not fulfilled in the applications of the Laplace transform. Very often, one only has a finite set of data and one wants to recover an estimate of the inverse Laplace function from that. We propose a model for fitting the data. More precisely, given a finite set of measurements on the real axis, arising from an unknown Laplace transform function, we construct a dth degree generalized polynomial smoothing spline, where d = 2m − 1, such that internally to the data interval it is a dth degree polynomial complete smoothing spline minimizing a regularization functional, and outside the data interval, it mimics the Laplace transform asymptotic behavior, i.e. it is a rational or an exponential function (the end behavior model), and at the boundaries of the data set it joins with regularity up to order m − 1 with the end behavior model. We analyze in detail the generalized polynomial smoothing spline of degree d = 3. This choice was motivated by the (ill)conditioning of the numerical computation, which strongly depends on the degree of the complete spline. We prove existence and uniqueness of this spline. We derive the approximation error and give a priori and computable bounds of it on the whole real axis. In this way, the generalized polynomial smoothing spline may be used in any real inversion algorithm to compute an approximation of the inverse Laplace function. Experimental results concerning Laplace transform approximation, numerical inversion of the generalized polynomial smoothing spline and comparisons with the exponential smoothing spline conclude the work. (paper)
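The first ingredient of the construction — a cubic smoothing spline fitted to noisy real-axis samples of a Laplace transform — can be sketched with SciPy. The paper's distinguishing feature, the rational or exponential end-behavior model outside the data interval, is not reproduced here, and the transform F(s) = 1/(s+1) is just a toy example:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Noisy samples of F(s) = 1/(s+1), the Laplace transform of exp(-t),
# observed only on a finite set of points on the real axis.
sgrid = np.linspace(0.5, 10, 60)
rng = np.random.default_rng(2)
F_noisy = 1.0 / (sgrid + 1.0) + 1e-3 * rng.standard_normal(sgrid.size)

# Ordinary cubic smoothing spline (k=3, i.e. d = 3 as analyzed in the paper).
# The smoothing factor s is set near the expected sum of squared noise.
spline = UnivariateSpline(sgrid, F_noisy, k=3, s=sgrid.size * 1e-6)
F_hat = spline(sgrid)
```

Inside the data interval this behaves like the paper's complete smoothing spline; outside it, an `UnivariateSpline` extrapolates polynomially, which is exactly what the paper's end-behavior model is designed to avoid.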

  18. Multiresolution Motion Estimation for Low-Rate Video Frame Interpolation

    Directory of Open Access Journals (Sweden)

    Hezerul Abdul Karim

    2004-09-01

    Full Text Available Interpolation of video frames with the purpose of increasing the frame rate requires the estimation of motion in the image so as to interpolate pixels along the path of the objects. In this paper, the specific challenges of low-rate video frame interpolation are illustrated by choosing one well-performing algorithm for high-frame-rate interpolation (Castango 1996) and applying it to low frame rates. The degradation of performance is illustrated by comparing the original algorithm, the algorithm adapted to low frame rates, and simple averaging. To overcome the particular challenges of low-frame-rate interpolation, two algorithms based on multiresolution motion estimation are developed, compared on an objective and subjective basis, and shown to provide an elegant solution to the specific challenges of low-frame-rate video interpolation.

  19. Three-dimensional reconstruction of the human spine from bi-planar radiographs: using multiscale wavelet analysis and spline interpolators for semi-automation

    Science.gov (United States)

    Deschenes, Sylvain; Godbout, Benoit; Branchaud, Dominic; Mitton, David; Pomero, Vincent; Bleau, Andre; Skalli, Wafa; de Guise, Jacques A.

    2003-05-01

    We propose a new fast stereoradiographic 3D reconstruction method for the spine. User input is limited to a few points passing through the spine on two radiographs and two line segments representing the end plates of the limiting vertebrae. A 3D spline that hints at the positions of the vertebrae in space is then generated. We then use wavelet multi-scale analysis (WMSA) to automatically localize specific features in both lateral and frontal radiographs. The WMSA gives an elegant spectral investigation that leads to gradient generation and edge extraction. Analysis of the information contained at several scales leads to the detection of 1) two curves enclosing the vertebral bodies' walls and 2) the inter-vertebral spaces along the spine. From these data, we extract four points per vertebra per view, corresponding to the corners of the vertebral bodies. These points delimit a hexahedron in space in which we can match the vertebral body. This hexahedron is then passed through a 3D statistical database built using local and global information generated from a bank of normal and scoliotic spines. Finally, models of the vertebrae are positioned with respect to these landmarks, completing the 3D reconstruction.

  20. B-spline solution of a singularly perturbed boundary value problem arising in biology

    International Nuclear Information System (INIS)

    Lin Bin; Li Kaitai; Cheng Zhengxing

    2009-01-01

    We use B-spline functions to develop a numerical method for solving a singularly perturbed boundary value problem arising in biology. We use a B-spline collocation method, which leads to a tridiagonal linear system. The accuracy of the proposed method is demonstrated by test problems. The numerical results are found to be in good agreement with the exact solution.
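Collocation with cubic B-splines produces tridiagonal linear systems, which can be solved in O(n) with the Thomas algorithm. A generic sketch follows; the 1/4, 1, 1/4 stencil is the classic cubic B-spline collocation structure, and the right-hand side here is purely illustrative:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with the Thomas algorithm.

    a: sub-diagonal (n-1), b: diagonal (n), c: super-diagonal (n-1), d: rhs (n).
    Assumes the system is well-conditioned (e.g. diagonally dominant).
    """
    n = len(b)
    cp, dp = np.empty(n - 1), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Cubic B-splines evaluated at the knots give weights (1/6, 4/6, 1/6);
# normalizing by the diagonal yields the 1/4, 1, 1/4 structure below.
n = 50
x = thomas_solve(np.full(n - 1, 0.25), np.ones(n), np.full(n - 1, 0.25),
                 np.ones(n))
```

The linear cost of this solve is what makes spline collocation attractive compared with assembling and factoring a dense system.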

  1. Differential Interpolation Effects in Free Recall

    Science.gov (United States)

    Petrusic, William M.; Jamieson, Donald G.

    1978-01-01

    Attempts to determine whether a sufficiently demanding and difficult interpolated task (shadowing, i.e., repeating aloud) would decrease recall for earlier-presented items as well as for more recent items. Listening to music was included as a second interpolated task. Results support views that serial position effects reflect a single process.…

  2. SAR image formation with azimuth interpolation after azimuth transform

    Science.gov (United States)

    Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]

    2008-07-08

    Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.
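The ordering described above — transform first, interpolate after — can be mimicked in one dimension: take the azimuth FFT of each range line, then re-grid the transformed samples onto a uniform output grid with simple 1-D interpolation. The warp and all names below are illustrative; real SAR re-gridding is two-dimensional and uses more sophisticated interpolators:

```python
import numpy as np

# Toy "range x azimuth" data block.
data = np.random.default_rng(3).standard_normal((64, 64))
spectrum = np.fft.fft(data, axis=1)            # azimuth Fourier transform first

# A mildly nonuniform sample axis standing in for the polar-format geometry,
# re-gridded onto a uniform axis AFTER the transform, as the patent describes.
warped_axis = np.linspace(0, 63, 64) ** 1.02 / 63 ** 0.02
uniform_axis = np.arange(64.0)
regridded = np.array([np.interp(uniform_axis, warped_axis, row.real) +
                      1j * np.interp(uniform_axis, warped_axis, row.imag)
                      for row in spectrum])
```

Because the interpolation acts on already-transformed data, interpolation error is not itself passed through the Fourier transform, which is the source of the quality benefit claimed in the abstract.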

  3. An Investigation into Conversion from Non-Uniform Rational B-Spline Boundary Representation Geometry to Constructive Solid Geometry

    Science.gov (United States)

    2015-12-01

    ARL-SR-0347 ● DEC 2015 ● US Army Research Laboratory. An Investigation into Conversion from Non-Uniform Rational B-Spline Boundary Representation Geometry to Constructive Solid Geometry

  4. Survey: interpolation methods for whole slide image processing.

    Science.gov (United States)

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is best for resizing whole slide images, so that they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
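The survey's evaluation protocol — scale down, rescale back with the same method, compare with the original — can be sketched with SciPy's `ndimage.zoom`. The synthetic image and the two interpolation orders compared here are illustrative, not the survey's nine methods or its slide images:

```python
import numpy as np
from scipy import ndimage

# A smooth synthetic test image standing in for a slide tile.
yy, xx = np.mgrid[0:128, 0:128]
img = np.sin(xx / 7.0) * np.cos(yy / 11.0)

def roundtrip_error(img, order):
    """Downscale by 2, rescale back with the same spline order, return RMSE."""
    small = ndimage.zoom(img, 0.5, order=order)
    back = ndimage.zoom(small, 2.0, order=order)
    back = back[:img.shape[0], :img.shape[1]]   # guard against off-by-one sizes
    return np.sqrt(np.mean((back - img) ** 2))

err_nearest = roundtrip_error(img, 0)   # order 0 = nearest neighbour
err_cubic = roundtrip_error(img, 3)     # order 3 = cubic spline
```

On real slides the survey additionally measures run time and the effect on downstream quantification, which a round-trip RMSE alone does not capture.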

  5. B-Spline Approximations of the Gaussian, their Gabor Frame Properties, and Approximately Dual Frames

    DEFF Research Database (Denmark)

    Christensen, Ole; Kim, Hong Oh; Kim, Rae Young

    2017-01-01

    We prove that Gabor systems generated by certain scaled B-splines can be considered as perturbations of the Gabor systems generated by the Gaussian, with a deviation within an arbitrarily small tolerance whenever the order N of the B-spline is sufficiently large. As a consequence we show that for a...

  6. On developing B-spline registration algorithms for multi-core processors

    International Nuclear Information System (INIS)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-01-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  7. Cubic colloids : Synthesis, functionalization and applications

    NARCIS (Netherlands)

    Castillo, S.I.R.

    2015-01-01

    This thesis is a study on cubic colloids: micron-sized cubic particles with rounded corners (cubic superballs). Owing to their shape, particle packing for cubes is more efficient than for spheres and results in fascinating phase and packing behavior. For our cubes, the particle volume fraction when

  8. A FAST MORPHING-BASED INTERPOLATION FOR MEDICAL IMAGES: APPLICATION TO CONFORMAL RADIOTHERAPY

    Directory of Open Access Journals (Sweden)

    Hussein Atoui

    2011-05-01

    Full Text Available A method is presented for fast interpolation between medical images. The method is intended for both slice and projective interpolation. It allows offline interpolation between neighboring slices in tomographic data. Spatial correspondence between adjacent images is established using a block matching algorithm. Interpolation of image intensities is then carried out by morphing between the images. The morphing-based method is compared to standard linear interpolation, block-matching-based interpolation and registration-based interpolation in 3D tomographic data sets. Results show that the proposed method achieved performance similar to registration-based interpolation, and significantly outperforms both linear and block-matching-based interpolation. This method is applied in the context of conformal radiotherapy for online projective interpolation between Digitally Reconstructed Radiographs (DRRs).

  9. Permanently calibrated interpolating time counter

    International Nuclear Information System (INIS)

    Jachna, Z; Szplet, R; Kwiatkowski, P; Różyc, K

    2015-01-01

    We propose a new architecture of an integrated time interval counter that provides its permanent calibration in the background. Time interval measurement and the calibration procedure are based on the use of a two-stage interpolation method and parallel processing of measurement and calibration data. The parallel processing is achieved by a doubling of two-stage interpolators in measurement channels of the counter, and by an appropriate extension of control logic. Such modification allows the updating of transfer characteristics of interpolators without the need to break a theoretically infinite measurement session. We describe the principle of permanent calibration, its implementation and influence on the quality of the counter. The precision of the presented counter is kept at a constant level (below 20 ps) despite significant changes in the ambient temperature (from −10 to 60 °C), which can cause a sevenfold decrease in the precision of the counter with a traditional calibration procedure. (paper)

  10. Transfinite C2 interpolant over triangles

    International Nuclear Information System (INIS)

    Alfeld, P.; Barnhill, R.E.

    1984-01-01

    A transfinite C2 interpolant on a general triangle is created. The required data are essentially C2, no compatibility conditions arise, and the precision set includes all polynomials of degree less than or equal to eight. The symbol manipulation language REDUCE is used to derive the scheme. The scheme is discretized to two different finite-dimensional C2 interpolants in an appendix

  11. Fractional Delayer Utilizing Hermite Interpolation with Caratheodory Representation

    Directory of Open Access Journals (Sweden)

    Qiang DU

    2018-04-01

    Full Text Available Fractional delay is indispensable for many sorts of circuits and signal processing applications. A fractional delay filter (FDF) utilizing Hermite interpolation with an analog differentiator is a straightforward way to delay discrete signals. This method has a low time-domain error, but a more complicated sampling module than the Shannon sampling scheme. A simplified scheme, which is based on Shannon sampling and utilizes Hermite interpolation with a digital differentiator, leads to a much higher time-domain error when the signal frequency approaches the Nyquist rate. In this letter, we propose a novel fractional delayer utilizing Hermite interpolation with a Caratheodory representation. The samples of the differential signal are obtained by the Caratheodory representation from the samples of the original signal only. Thus, only one sampler is needed and the sampling module is simple. Simulation results for four types of signals demonstrate that the proposed method has significantly higher interpolation accuracy than Hermite interpolation with a digital differentiator.
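The simplified scheme that the letter improves upon — cubic Hermite interpolation with derivatives estimated by a digital differentiator — can be sketched as follows. The Caratheodory-based derivative recovery itself is not reproduced here; `np.gradient` (central differences) plays the role of the digital differentiator:

```python
import numpy as np

def hermite_frac_delay(x, mu):
    """Delay x by a fractional mu (0 <= mu < 1) samples via cubic Hermite
    interpolation, with derivatives from a central-difference digital
    differentiator. Returns len(x) - 1 samples approximating x(n + mu)."""
    d = np.gradient(x)                        # digital differentiator
    x0, x1 = x[:-1], x[1:]
    d0, d1 = d[:-1], d[1:]
    # Standard cubic Hermite basis functions evaluated at mu
    h00 = 2 * mu**3 - 3 * mu**2 + 1
    h10 = mu**3 - 2 * mu**2 + mu
    h01 = -2 * mu**3 + 3 * mu**2
    h11 = mu**3 - mu**2
    return h00 * x0 + h10 * d0 + h01 * x1 + h11 * d1

n = np.arange(256)
x = np.sin(2 * np.pi * 0.05 * n)              # well below Nyquist
y = hermite_frac_delay(x, 0.3)
```

At this low normalized frequency the error is small; as the letter notes, the differentiator's error grows toward the Nyquist rate, which is what motivates the Caratheodory representation.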

  12. Implementation of exterior complex scaling in B-splines to solve atomic and molecular collision problems

    International Nuclear Information System (INIS)

    McCurdy, C William; Martín, Fernando

    2004-01-01

    B-spline methods are now well established as widely applicable tools for the evaluation of atomic and molecular continuum states. The mathematical technique of exterior complex scaling has been shown, in a variety of other implementations, to be a powerful method with which to solve atomic and molecular scattering problems, because it allows the correct imposition of continuum boundary conditions without their explicit analytic application. In this paper, an implementation of exterior complex scaling in B-splines is described that can bring the well-developed technology of B-splines to bear on new problems, including multiple ionization and breakup problems, in a straightforward way. The approach is demonstrated for examples involving the continuum motion of nuclei in diatomic molecules as well as electronic continua. For problems involving electrons, a method based on Poisson's equation is presented for computing two-electron integrals over B-splines under exterior complex scaling

  13. Thin-plate spline analysis of mandibular growth.

    Science.gov (United States)

    Franchi, L; Baccetti, T; McNamara, J A

    2001-04-01

    The analysis of mandibular growth changes around the pubertal spurt in humans has several important implications for the diagnosis and orthopedic correction of skeletal disharmonies. The purpose of this study was to evaluate mandibular shape and size growth changes around the pubertal spurt in a longitudinal sample of subjects with normal occlusion by means of an appropriate morphometric technique (thin-plate spline analysis). Ten mandibular landmarks were identified on lateral cephalograms of 29 subjects at 6 different developmental phases. The 6 phases corresponded to 6 different maturational stages in cervical vertebrae during accelerative and decelerative phases of the pubertal growth curve of the mandible. Differences in shape between average mandibular configurations at the 6 developmental stages were visualized by means of thin-plate spline analysis and subjected to permutation test. Centroid size was used as the measure of the geometric size of each mandibular specimen. Differences in size at the 6 developmental phases were tested statistically. The results of graphical analysis indicated a statistically significant change in mandibular shape only for the growth interval from stage 3 to stage 4 in cervical vertebral maturation. Significant increases in centroid size were found at all developmental phases, with evidence of a prepubertal minimum and of a pubertal maximum. The existence of a pubertal peak in human mandibular growth, therefore, is confirmed by thin-plate spline analysis. Significant morphological changes in the mandible during the growth interval from stage 3 to stage 4 in cervical vertebral maturation may be described as an upward-forward direction of condylar growth determining an overall "shrinkage" of the mandibular configuration along the measurement of total mandibular length. This biological mechanism is particularly efficient in compensating for major increments in mandibular size at the adolescent spurt.
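Thin-plate spline analysis rests on the 2-D interpolant with kernel U(r) = r² log r plus an affine part; fitting it to landmark data reduces to a single linear solve. A minimal sketch with toy landmarks (not cephalometric data) follows:

```python
import numpy as np

def tps_fit(points, values):
    """Fit a 2-D thin-plate spline f with f(points[i]) = values[i].

    Solves the standard TPS system: kernel U(r) = r^2 log r, an affine part,
    and side conditions making the kernel weights orthogonal to affine terms.
    """
    n = len(points)
    r2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    with np.errstate(divide='ignore', invalid='ignore'):
        K = np.where(r2 > 0, 0.5 * r2 * np.log(r2), 0.0)   # = r^2 log r
    P = np.column_stack([np.ones(n), points])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    coef = np.linalg.solve(A, np.concatenate([values, np.zeros(3)]))
    w, a = coef[:n], coef[n:]

    def f(q):
        d2 = np.sum((q[None, :] - points) ** 2, axis=-1)
        with np.errstate(divide='ignore', invalid='ignore'):
            u = np.where(d2 > 0, 0.5 * d2 * np.log(d2), 0.0)
        return u @ w + a[0] + a[1] * q[0] + a[2] * q[1]
    return f

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
vals = np.array([0.0, 1.0, 1.0, 2.0, 0.7])
f = tps_fit(pts, vals)
```

In morphometrics, two such splines (one per coordinate) warp one landmark configuration onto another, and the bending energy of the warp quantifies the shape change visualized in studies like the one above.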

  14. Prostate multimodality image registration based on B-splines and quadrature local energy.

    Science.gov (United States)

    Mitra, Jhimli; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Ghose, Soumya; Vilanova, Joan C; Meriaudeau, Fabrice

    2012-05-01

    Needle biopsy of the prostate is guided by Transrectal Ultrasound (TRUS) imaging. The TRUS images do not provide proper spatial localization of malignant tissues due to the poor sensitivity of TRUS to visualize early malignancy. Magnetic Resonance Imaging (MRI) has been shown to be sensitive for the detection of early stage malignancy, and therefore, a novel 2D deformable registration method that overlays pre-biopsy MRI onto TRUS images has been proposed. The registration method involves B-spline deformations with Normalized Mutual Information (NMI) as the similarity measure, computed from the texture images obtained from the amplitude responses of the directional quadrature filter pairs. Registration accuracy of the proposed method is evaluated by computing the Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD) values for prostate mid-gland slices of 20 patients, and the Target Registration Error (TRE) for the 18 patients in whom homologous structures are visible in both the TRUS and transformed MR images. The proposed method and B-splines using NMI computed from intensities provide average TRE values of 2.64 ± 1.37 and 4.43 ± 2.77 mm respectively. Our method shows statistically significant improvement in TRE when compared with B-splines using NMI computed from intensities (Student's t test, p = 0.02). The proposed method shows a 1.18 times improvement over thin-plate spline registration, which has an average TRE of 3.11 ± 2.18 mm. The mean DSC and the mean 95% HD values obtained with the proposed method of B-splines with NMI computed from texture are 0.943 ± 0.039 and 4.75 ± 2.40 mm respectively. The texture energy computed from the quadrature filter pairs provides better registration accuracy for multimodal images than raw intensities. Low TRE values of the proposed registration method add to the feasibility of its being used during TRUS-guided biopsy.

  15. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    OpenAIRE

    He, Shanshan; Ou, Daojiang; Yan, Changya; Lee, Chen-Han

    2015-01-01

    Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortages such as numerical...

  16. Spline-based image-to-volume registration for three-dimensional electron microscopy

    International Nuclear Information System (INIS)

    Jonic, S.; Sorzano, C.O.S.; Thevenaz, P.; El-Bez, C.; De Carlo, S.; Unser, M.

    2005-01-01

    This paper presents an algorithm based on a continuous framework for a posteriori angular and translational assignment in three-dimensional electron microscopy (3DEM) of single particles. Our algorithm can be used advantageously to refine the assignment of standard quantized-parameter methods by registering the images to a reference 3D particle model. We achieve the registration by employing a gradient-based iterative minimization of a least-squares measure of dissimilarity between an image and a projection of the volume in the Fourier transform (FT) domain. We compute the FT of the projection using the central-slice theorem (CST). To compute the gradient accurately, we take advantage of a cubic B-spline model of the data in the frequency domain. To improve the robustness of the algorithm, we weight the cost function in the FT domain and apply a 'mixed' strategy for the assignment based on the minimum value of the cost function at registration for several different initializations. We validate our algorithm in a fully controlled simulation environment. We show that the mixed strategy improves the assignment accuracy; on our data, the quality of the angular and translational assignment was better than 2 voxels (i.e., 6.54 Å). We also test the performance of our algorithm on real EM data. We conclude that our algorithm outperforms a standard projection-matching refinement in terms of both consistency of 3D reconstructions and speed

  17. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.

    2014-12-01

    In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully resolved fine-scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's method and fully implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
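The core empirical-interpolation step can be illustrated in isolation. The following Python/NumPy sketch shows the general DEIM-style procedure, not the paper's GMsFEM implementation; the exponential snapshot family, basis size, and test parameter are invented for the example. It builds a reduced basis from snapshots, selects interpolation indices greedily, and reconstructs a new nonlinear function from samples at only those indices:

```python
import numpy as np

# Snapshots of a hypothetical parameterized nonlinearity f(x; mu) = exp(-mu*x)
x = np.linspace(0.0, 1.0, 50)
snapshots = np.column_stack([np.exp(-mu * x) for mu in np.linspace(1.0, 5.0, 10)])

# Reduced basis: leading left singular vectors of the snapshot matrix
U = np.linalg.svd(snapshots, full_matrices=False)[0][:, :3]

# Greedy (DEIM-style) selection of interpolation indices
idx = [int(np.argmax(np.abs(U[:, 0])))]
for j in range(1, U.shape[1]):
    c = np.linalg.solve(U[np.ix_(idx, list(range(j)))], U[idx, j])
    residual = U[:, j] - U[:, :j] @ c   # part of the next mode not yet captured
    idx.append(int(np.argmax(np.abs(residual))))

# Approximate an unseen parameter value from only len(idx) sampled entries
f = np.exp(-2.7 * x)
coeffs = np.linalg.solve(U[idx, :], f[idx])
f_approx = U @ coeffs
max_err = float(np.max(np.abs(f_approx - f)))
```

The point of the construction is the last step: the full vector `f` is recovered to small error from three sampled values, which is what makes the cost proportional to the reduced problem size.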

  18. Fast image interpolation via random forests.

    Science.gov (United States)

    Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui

    2015-10-01

    This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method achieves high accuracy at low computational cost. The underlying idea is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to its high-resolution counterpart. The FIRF framework consists of two stages: Stage 1 removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 result. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method realizes computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while requiring only 0.3% of its computation time.

  19. An efficient interpolation filter VLSI architecture for HEVC standard

    Science.gov (United States)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard, High-Efficiency Video Coding (HEVC), is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40 % of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7 % of processing time on average with acceptable coding-quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which saves about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture supports real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.

  20. Scalable Intersample Interpolation Architecture for High-channel-count Beamformers

    DEFF Research Database (Denmark)

    Tomov, Borislav Gueorguiev; Nikolov, Svetoslav I; Jensen, Jørgen Arendt

    2011-01-01

    Modern ultrasound scanners utilize digital beamformers that operate on sampled and quantized echo signals. Timing precision is of the essence for achieving good focusing. The direct way to achieve it is through the use of high sampling rates, but that is not economical, so interpolation between echo samples is used. This paper presents a beamformer architecture that combines a band-pass filter-based interpolation algorithm with the dynamic delay-and-sum focusing of a digital beamformer. The reduction in the number of multiplications relative to a linear per-channel interpolation architecture and a band-pass per-channel interpolation architecture is 58 % and 75 %, respectively, for a 256-channel beamformer using 4-tap filters. The approach allows building high-channel-count beamformers while maintaining high image quality due to the use of sophisticated intersample interpolation.

  1. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

    Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: first is a statistical model adopted to mine underlying information, and second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm can achieve encouraging performance in terms of image visualization and quantitative measures.
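The regression step underlying such methods can be sketched compactly. The following Python/NumPy function is a generic zero-mean Gaussian-process posterior mean with a squared-exponential kernel; the kernel choice and hyperparameters are illustrative assumptions, and the paper's energy-computation component is not modeled:

```python
import numpy as np

def gp_interpolate(xs, ys, x_star, length=1.0, noise=1e-8):
    """Gaussian-process posterior-mean prediction at x_star.

    Zero-mean GP with a squared-exponential kernel; kernel and
    hyperparameters are illustrative assumptions, not the paper's model.
    """
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)

    def kern(a, b):
        # pairwise squared-exponential kernel matrix
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

    K = kern(xs, xs) + noise * np.eye(len(xs))   # regularized Gram matrix
    alpha = np.linalg.solve(K, ys)               # weights for the posterior mean
    return (kern(np.array([x_star]), xs) @ alpha).item()
```

In an image-interpolation setting, `xs`/`ys` would be neighboring pixel positions and intensities and `x_star` the position of the pixel to be predicted.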

  2. Spline methods for conservation equations

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.

    1991-01-01

    We consider the numerical solution of physical theories, in particular hydrodynamics, which can be formulated as systems of conservation laws. To this end we briefly describe the Basis Spline and collocation methods, paying particular attention to representation theory, which provides discrete analogues of the continuum conservation and dispersion relations, and hence a rigorous understanding of errors and instabilities. On this foundation we propose an algorithm for hydrodynamic problems in which most linear and nonlinear instabilities are brought under control. Numerical examples are presented from one-dimensional relativistic hydrodynamics. 9 refs., 10 figs.

  3. PySpline: A Modern, Cross-Platform Program for the Processing of Raw Averaged XAS Edge and EXAFS Data

    International Nuclear Information System (INIS)

    Tenderholt, Adam; Hedman, Britt; Hodgson, Keith O.

    2007-01-01

    PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and the Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k³-weighted EXAFS data.

  4. Integration and interpolation of sampled waveforms

    International Nuclear Information System (INIS)

    Stearns, S.D.

    1978-01-01

    Methods for integrating, interpolating, and improving the signal-to-noise ratio of digitized waveforms are discussed with regard to seismic data from underground tests. The frequency-domain integration method and the digital interpolation method of Schafer and Rabiner are described and demonstrated using test data. The use of bandpass filtering for noise reduction is also demonstrated. With these methods, a backlog of seismic test data has been successfully processed
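The frequency-domain integration idea can be sketched for a periodic, zero-mean waveform: transform, divide each Fourier coefficient by jω, and transform back. The following Python/NumPy sketch is illustrative only; the windowing and noise handling applied to the actual seismic records are not reproduced:

```python
import numpy as np

# Integrate a sampled cosine in the frequency domain; the exact integral
# (up to a constant) is sin(t).
n = 256
dt = 2.0 * np.pi / n
t = np.arange(n) * dt
x = np.cos(t)                                   # integrand

X = np.fft.fft(x)
omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)   # angular frequency per bin
omega[0] = 1.0                                  # placeholder: avoid divide-by-zero
Y = X / (1j * omega)                            # spectral integration
Y[0] = 0.0                                      # fix the constant: zero-mean result
y = np.real(np.fft.ifft(Y))
max_err = float(np.max(np.abs(y - np.sin(t))))
```

For band-limited periodic signals this recovers the integral to near machine precision, which is why the approach suits sampled waveforms better than naive running sums.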

  5. Initial design of a stall-controlled wind turbine rotor

    Energy Technology Data Exchange (ETDEWEB)

    Nygaard, T.A. [Inst. for Energiteknikk, Kjeller (Norway)

    1997-08-01

    A model intended for initial design of stall-controlled wind turbine rotors is described. The user specifies relative radial position of an arbitrary number of airfoil sections, referring to a data file containing lift-and drag curves. The data file is in the same format as used in the commercial blade-element code BLADES-/2/, where lift- and drag coefficients are interpolated from tables as function of Reynolds number, relative thickness and angle of attack. The user can set constraints on a selection of the following: Maximum power; Maximum thrust in operation; Maximum root bending moment in operation; Extreme root bending moment, parked rotor; Tip speed; Upper and lower bounds on optimisation variables. The optimisation variables can be selected from: Blade radius; Rotational speed; Chord and twist at an arbitrary number of radial positions. The user can choose a linear chord distribution and a hyperbola-like twist distribution to ensure smooth planform and twist, or cubic spline interpolation for one or both. The aerodynamic model is based on classical strip theory with Prandtl tip loss correction, supplemented by empirical data for high induction factors. (EG)

  6. Extension of the tables of PDD, TMR and equivalent squares of the BJR 25

    International Nuclear Information System (INIS)

    Balmaceda, Daniel; Rojas, Jorge E.

    2004-01-01

    Treatment times for cancer patients treated on the two Cobalt-60 units at the Servicio de Radioterapia del Hospital México are checked manually. Theraplan Plus (version 3.5) is the treatment planning system used for dosimetry; after it produces its results, the medical physicist or one of the dosimetrists performs a manual check of the treatment times. This manual check uses the tables of PDD, TAR and equivalent squares of BJR Supplement 25, prepared by the British Institute of Radiology and the Institution of Physics and Engineering in Medicine and Biology. The PDD table of BJR 25 (Cobalt-60, 80 cm SSD) was extended in 2000 by one of the present authors using Lagrange polynomial interpolation, for use by the staff of the Servicio de Radioterapia del Hospital México. In the present work another interpolation scheme (cubic splines) is applied to the PDD table, the TAR tables, and the equivalent squares of rectangular fields of BJR 25. The methodology followed is presented, together with the final versions of the tables.
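The cubic-spline step can be sketched generically. The following pure-Python natural cubic spline is an illustrative implementation: the clinical BJR 25 table values are not reproduced, and the boundary-condition choices may differ from those used in the report. It interpolates any tabulated (x, y) data, such as a PDD-versus-depth table:

```python
import bisect

def natural_cubic_spline(xs, ys):
    """Return an evaluator for the natural cubic spline through (xs, ys)."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for knot second derivatives M (natural BCs: M0 = Mn = 0)
    a = [0.0] * (n + 1)
    b = [1.0] * (n + 1)
    c = [0.0] * (n + 1)
    d = [0.0] * (n + 1)
    for i in range(1, n):
        a[i], b[i], c[i] = h[i - 1], 2.0 * (h[i - 1] + h[i]), h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n + 1):        # Thomas algorithm: forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * (n + 1)
    M[n] = d[n] / b[n]
    for i in range(n - 1, -1, -1):   # back substitution
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def evaluate(x):
        i = min(max(bisect.bisect_right(xs, x) - 1, 0), n - 1)
        t0, t1 = xs[i + 1] - x, x - xs[i]
        return (M[i] * t0 ** 3 + M[i + 1] * t1 ** 3) / (6.0 * h[i]) \
            + (ys[i] - M[i] * h[i] ** 2 / 6.0) * t0 / h[i] \
            + (ys[i + 1] - M[i + 1] * h[i] ** 2 / 6.0) * t1 / h[i]

    return evaluate
```

Intermediate depths or field sizes would then be read off by calling the returned evaluator between the tabulated knots.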

  7. Stress in closed thin-walled tubes of single box subjected by shear forces and application to airfoils

    Directory of Open Access Journals (Sweden)

    Zebbiche Toufik

    2014-09-01

    Full Text Available This work develops a numerical computation program to determine the distribution of the shear stress in closed thin-walled tubes of asymmetric single-box section with constant thickness, with applications to airfoils, and thereby to determine the position and value of the maximum stress. In the literature, exact analytical solutions exist only for some sections of simple geometry, such as the circular section. Hence our interest is focused on the search for approximate numerical solutions for the more complex sections used in aeronautics. In a second stage, the position of the shear center is determined so that the section does not undergo torsion. The analytic function of the boundary of the airfoil is obtained by cubic spline interpolation, since the boundary is given in the form of tabulated points.

  8. Iolite

    DEFF Research Database (Denmark)

    Paton, Chad; Hellstrom, John; Paul, Bence

    2011-01-01

    Iolite is a non-commercial software package developed to aid in the processing of inorganic mass spectrometric data, with a strong emphasis on visualisation versus time of acquisition. The goal of the software is to provide a powerful framework for data processing and interpretation, while giving users the ability to implement their own data reduction protocols. It is intended to be highly interactive, providing the user with a complete overview of the data at all stages of processing, and allowing the freedom to change parameters and reprocess data at any point. Emphasis is placed on the interpolation of parameters that vary with time by a variety of user-selectable methods, including smoothed cubic splines. Data are processed on a timeslice-by-timeslice basis, allowing outlier rejection and calculation of statistics to be employed directly on calculated results. The program presents a variety...

  9. Bayer Demosaicking with Polynomial Interpolation.

    Science.gov (United States)

    Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil

    2016-08-30

    Demosaicking is a digital image process used to reconstruct full-color digital images from the incomplete color samples output by an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g. mobile phones, tablets, etc.). In this paper, we introduce a new polynomial interpolation-based demosaicking (PID) algorithm. Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted-sum strategy. Our new predictors are generated on the basis of polynomial interpolation and can be used as a sound alternative to other predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB E, and FSIM) and visual performance.

  10. Real-time interpolation for true 3-dimensional ultrasound image volumes.

    Science.gov (United States)

    Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D

    2011-02-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm(3) voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
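For reference, the trilinear scheme compared above can be written in a few lines. This Python sketch assumes voxel-unit coordinates and nested-list storage for illustration; it is not the authors' implementation:

```python
def trilinear(vol, x, y, z):
    """Trilinearly interpolate a scalar volume vol[i][j][k] at (x, y, z).

    Coordinates are in voxel units and assumed to lie inside the volume.
    """
    nx, ny, nz = len(vol), len(vol[0]), len(vol[0][0])
    # lower corner of the enclosing cell, clamped so the +1 neighbor exists
    x0 = min(int(x), nx - 2)
    y0 = min(int(y), ny - 2)
    z0 = min(int(z), nz - 2)
    xd, yd, zd = x - x0, y - y0, z - z0
    v = lambda i, j, k: vol[i][j][k]
    # interpolate along x, then y, then z
    c00 = v(x0, y0, z0) * (1 - xd) + v(x0 + 1, y0, z0) * xd
    c10 = v(x0, y0 + 1, z0) * (1 - xd) + v(x0 + 1, y0 + 1, z0) * xd
    c01 = v(x0, y0, z0 + 1) * (1 - xd) + v(x0 + 1, y0, z0 + 1) * xd
    c11 = v(x0, y0 + 1, z0 + 1) * (1 - xd) + v(x0 + 1, y0 + 1, z0 + 1) * xd
    c0 = c00 * (1 - yd) + c10 * yd
    c1 = c01 * (1 - yd) + c11 * yd
    return c0 * (1 - zd) + c1 * zd
```

Because the weights are products of per-axis linear weights, the scheme reproduces any linear field exactly, which is the accuracy advantage over nearest-neighbor lookup at nearly the same cost.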

  11. Systems and methods for interpolation-based dynamic programming

    KAUST Repository

    Rockwood, Alyn

    2013-01-03

    Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

  13. Spline smoothing of histograms by linear programming

    Science.gov (United States)

    Bennett, J. O.

    1972-01-01

    An algorithm is presented for obtaining a function that approximates the frequency distribution of a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean-space approximations to the graph of the histogram, using central B-splines as basis elements, are obtained by linear programming. The approximating function has area one and is nonnegative.

  14. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    Science.gov (United States)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.

  15. Three-dimensional data interpolation for environmental purpose: lead in contaminated soils in southern Brazil.

    Science.gov (United States)

    Piedade, Tales Campos; Melo, Vander Freitas; Souza, Luiz Cláudio Paula; Dieckow, Jeferson

    2014-09-01

    Monitoring of the heavy metal contamination plume in soils can be helpful in establishing strategies to minimize its hazardous impacts on the environment. The objective of this study was to apply a new visualization approach, based on three-dimensional (3D) images, to pseudo-total (extracted with concentrated acids) and exchangeable (extracted with 0.5 mol L(-1) Ca(NO3)2) lead (Pb) concentrations in soils of a mining and metallurgy area, in order to determine the spatial distribution of this pollutant and to estimate the most contaminated soil volumes. Three-dimensional images were obtained after interpolation of Pb concentrations from 171 soil samples (57 points × 3 depths) with a 3D version of regularized spline with tension. The three-dimensional visualization showed great potential for use in environmental studies and made it possible to determine the spatial 3D distribution of the Pb contamination plume in the area and to establish relationships with soil characteristics, landscape, and pollution sources. The most contaminated soil volumes (10,001 to 52,000 mg Pb kg(-1)) occurred near the metallurgy factory. The main contamination sources were attributed to atmospheric emissions of particulate Pb through chimneys. The large soil volume estimated to require removal to industrial landfills or co-processing evidenced the difficulties related to this practice as a remediation strategy.

  16. Interpolation for a subclass of H∞

    Indian Academy of Sciences (India)

    |g(zm)| ≤ c |zm − zm*|, ∀m ∈ N. Thus it is natural to pose the following interpolation problem for H∞: DEFINITION 4. We say that (zn) is an interpolating sequence in the weak sense for H∞ if, given any sequence of complex numbers (λn) verifying |λn| ≤ c ψ(zn, zn*) |zn − zn*|, ∀n ∈ N, (4) there exists a product fg ∈ H∞ ...

  17. Spline function fit for multi-sets of correlative data

    International Nuclear Information System (INIS)

    Liu Tingjin; Zhou Hongmo

    1992-01-01

    A spline fit method for multiple sets of correlated data is developed. The properties of correlated-data fitting are investigated. The 23Na(n,2n) cross-section data are fitted in the cases with and without correlation.

  18. Surface interpolation with radial basis functions for medical imaging

    International Nuclear Information System (INIS)

    Carr, J.C.; Beatson, R.K.; Fright, W.R.

    1997-01-01

    Radial basis functions are presented as a practical solution to the problem of interpolating incomplete surfaces derived from three-dimensional (3-D) medical graphics. The specific application considered is the design of cranial implants for the repair of defects, usually holes, in the skull. Radial basis functions impose few restrictions on the geometry of the interpolation centers and are suited to problems where interpolation centers do not form a regular grid. However, their high computational requirements have previously limited their use to problems where the number of interpolation centers is small (<300). Recently developed fast evaluation techniques have overcome these limitations and made radial basis interpolation a practical approach for larger data sets. In this paper radial basis functions are fitted to depth-maps of the skull's surface, obtained from X-ray computed tomography (CT) data using ray-tracing techniques. They are used to smoothly interpolate the surface of the skull across defect regions. The resulting mathematical description of the skull's surface can be evaluated at any desired resolution to be rendered on a graphics workstation or to generate instructions for operating a computer numerically controlled (CNC) mill.
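The basic fitting step can be sketched at toy scale. The following pure-Python example uses a Gaussian radial basis for simplicity; the paper's cranial application uses other radial bases, polynomial terms, and fast evaluation techniques for much larger point sets:

```python
import math

def rbf_surface(points, heights, eps=0.7):
    """Fit s(p) = sum_j w_j * phi(|p - c_j|) through scattered 2-D samples.

    Gaussian basis phi; solves the (symmetric positive definite) collocation
    system by Gaussian elimination with partial pivoting.
    """
    phi = lambda r: math.exp(-(eps * r) ** 2)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    n = len(points)
    A = [[phi(dist(points[i], points[j])) for j in range(n)] for i in range(n)]
    w = list(heights)
    for col in range(n):                          # forward elimination
        p = max(range(col, n), key=lambda r_: abs(A[r_][col]))
        A[col], A[p], w[col], w[p] = A[p], A[col], w[p], w[col]
        for r_ in range(col + 1, n):
            m = A[r_][col] / A[col][col]
            for c_ in range(col, n):
                A[r_][c_] -= m * A[col][c_]
            w[r_] -= m * w[col]
    for r_ in range(n - 1, -1, -1):               # back substitution
        w[r_] = (w[r_] - sum(A[r_][c_] * w[c_]
                             for c_ in range(r_ + 1, n))) / A[r_][r_]
    return lambda p: sum(w[j] * phi(dist(p, points[j])) for j in range(n))
```

The returned function passes exactly through every sample and can be evaluated anywhere, which is the property exploited to fill defect regions in the depth-maps.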

  19. A fractional spline collocation-Galerkin method for the time-fractional diffusion equation

    Directory of Open Access Journals (Sweden)

    Pezza L.

    2018-03-01

    Full Text Available The aim of this paper is to numerically solve a diffusion differential problem having a time derivative of fractional order. To this end we propose a collocation-Galerkin method that uses fractional splines as approximating functions. The main advantage is that the derivatives of integer and fractional order of the fractional splines can be expressed in a closed form that involves just the generalized finite difference operator. This allows us to construct an accurate and efficient numerical method. Several numerical tests showing the effectiveness of the proposed method are presented.

  20. Modeling Fetal Weight for Gestational Age: A Comparison of a Flexible Multi-level Spline-based Model with Other Approaches

    Science.gov (United States)

    Villandré, Luc; Hutcheon, Jennifer A; Perez Trejo, Maria Esther; Abenhaim, Haim; Jacobsen, Geir; Platt, Robert W

    2011-01-01

    We present a model for longitudinal measures of fetal weight as a function of gestational age. We use a linear mixed model, with a Box-Cox transformation of fetal weight values, and restricted cubic splines, in order to flexibly but parsimoniously model median fetal weight. We systematically compare our model to other proposed approaches. All proposed methods are shown to yield similar median estimates, as evidenced by overlapping pointwise confidence bands, except after 40 completed weeks, where our method seems to produce estimates more consistent with observed data. Sex-based stratification affects the estimates of the random effects variance-covariance structure, without significantly changing sex-specific fitted median values. We illustrate the benefits of including sex-gestational age interaction terms in the model over stratification. The comparison leads to the conclusion that the selection of a model for fetal weight for gestational age can be based on the specific goals and configuration of a given study without affecting the precision or value of median estimates for most gestational ages of interest. PMID:21931571

  1. Piecewise linear regression splines with hyperbolic covariates

    International Nuclear Information System (INIS)

    Cologne, John B.; Sposto, Richard

    1992-09-01

    Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller and of Watts and Bacon to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
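The smoothing idea can be made concrete with one common hyperbolic stand-in for the hinge covariate (an illustrative parameterisation, not necessarily the exact form used by the authors):

```python
import math

def hinge(x, knot):
    """Broken-stick covariate max(0, x - knot): the slope change is abrupt."""
    return max(0.0, x - knot)

def hyperbolic_hinge(x, knot, gamma):
    """Smooth hyperbolic approximation to the hinge.

    Approaches max(0, x - knot) as gamma -> 0; gamma controls the curvature
    of the transition at the knot (the value at the knot itself is gamma).
    """
    u = x - knot
    return 0.5 * (u + math.sqrt(u * u + 4.0 * gamma * gamma))
```

Replacing each hinge term in a piecewise linear spline with `hyperbolic_hinge` yields a curve with the same asymptotic slopes but a smooth join, and `gamma` becomes an estimable curvature parameter in a nonlinear regression.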

  2. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    Science.gov (United States)

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-01

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% are possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.

  3. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.

  4. Linear and Quadratic Interpolators Using Truncated-Matrix Multipliers and Squarers

    Directory of Open Access Journals (Sweden)

    E. George Walters III

    2015-11-01

    Full Text Available This paper presents a technique for designing linear and quadratic interpolators for function approximation using truncated multipliers and squarers. Initial coefficient values are found using a Chebyshev-series approximation and then adjusted through exhaustive simulation to minimize the maximum absolute error of the interpolator output. This technique is suitable for any function and any precision up to 24 bits (IEEE single precision). Designs for linear and quadratic interpolators that implement the 1/x, 1/√x, log2(1+2^x), log2(x), and 2^x functions are presented and analyzed as examples. Results show that a proposed 24-bit interpolator computing 1/x with a design specification of ±1 unit in the last place (ulp) of the product uses 16.4% less area and 15.3% less power than a comparable standard interpolator with the same error specification. Sixteen-bit linear interpolators for other functions are shown to use up to 17.3% less area and 12.1% less power, and 16-bit quadratic interpolators are shown to use up to 25.8% less area and 24.7% less power.
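The table-lookup structure of such interpolators can be sketched in software. This floating-point Python sketch uses simple endpoint-based segment coefficients; the paper instead derives coefficients from a Chebyshev-series fit and evaluates them with truncated multipliers, neither of which is reproduced here:

```python
def make_linear_interpolator(f, lo, hi, n_segments):
    """Build a (intercept, slope) table for piecewise-linear
    approximation of f on [lo, hi]."""
    width = (hi - lo) / n_segments
    table = []
    for i in range(n_segments):
        a = lo + i * width
        b = a + width
        slope = (f(b) - f(a)) / width      # simple endpoint (chord) fit
        table.append((f(a), slope))

    def approx(x):
        i = min(int((x - lo) / width), n_segments - 1)   # segment lookup
        a = lo + i * width
        intercept, slope = table[i]
        return intercept + slope * (x - a)

    return approx

# 64-segment reciprocal table on [1, 2), as in a mantissa-range 1/x unit
recip = make_linear_interpolator(lambda x: 1.0 / x, 1.0, 2.0, 64)
```

In hardware, the leading bits of the input would index the table and the remaining bits would feed the multiply; with 64 segments the chord fit already bounds the error near 6e-5, and a Chebyshev fit roughly halves the maximum error per segment.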

  5. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    Science.gov (United States)

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology of recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolation tensor and can preserve tensor anisotropy, but it does not revise the size of tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. Firstly, we decomposed diffusion tensors, with the direction of tensors represented by quaternions. Then we revised the size and direction of the tensor respectively according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and the determinant of tensors, but also preserve the tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
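    The Log-Euclidean method used as a baseline in this record can be sketched directly: tensors are interpolated in the matrix-logarithm domain, which keeps the result symmetric positive definite. A minimal NumPy sketch (assumed details; this is not the authors' improved spectral quaternion algorithm):

```python
import numpy as np

def logm_spd(a):
    """Matrix logarithm of a symmetric positive-definite tensor."""
    w, v = np.linalg.eigh(a)
    return (v * np.log(w)) @ v.T

def expm_sym(a):
    """Matrix exponential of a symmetric matrix."""
    w, v = np.linalg.eigh(a)
    return (v * np.exp(w)) @ v.T

def log_euclidean_interp(t1, t2, w):
    """Weighted Log-Euclidean mean of two SPD diffusion tensors."""
    return expm_sym((1.0 - w) * logm_spd(t1) + w * logm_spd(t2))

# Interpolating between an isotropic and an anisotropic tensor: the
# leading eigenvalue moves along a geometric (not arithmetic) path.
a = np.diag([1.0, 1.0, 1.0])
b = np.diag([4.0, 1.0, 1.0])
mid = log_euclidean_interp(a, b, 0.5)
```

    The midpoint eigenvalue is the geometric mean 2.0 rather than the arithmetic 2.5, the determinant-monotonicity behavior the record compares against.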

  6. Improved Interpolation Kernels for Super-resolution Algorithms

    DEFF Research Database (Denmark)

    Rasti, Pejman; Orlova, Olga; Tamberg, Gert

    2016-01-01

    Super resolution (SR) algorithms are widely used in forensics investigations to enhance the resolution of images captured by surveillance cameras. Such algorithms usually use a common interpolation algorithm to generate an initial guess for the desired high resolution (HR) image. This initial guess...... when their original interpolation kernel is replaced by the ones introduced in this work....

  7. On Multiple Interpolation Functions of the q-Genocchi Polynomials

    Directory of Open Access Journals (Sweden)

    Jin Jeong-Hee

    2010-01-01

    Full Text Available Abstract Recently, many mathematicians have studied various kinds of the q-analogue of Genocchi numbers and polynomials. In the work (New approach to q-Euler, Genocchi numbers and their interpolation functions, "Advanced Studies in Contemporary Mathematics, vol. 18, no. 2, pp. 105–112, 2009.", Kim defined new generating functions of q-Genocchi, q-Euler polynomials, and their interpolation functions. In this paper, we give another definition of the multiple Hurwitz type q-zeta function. This function interpolates q-Genocchi polynomials at negative integers. Finally, we also give some identities related to these polynomials.

  8. Steady State Stokes Flow Interpolation for Fluid Control

    DEFF Research Database (Denmark)

    Bhatacharya, Haimasree; Nielsen, Michael Bang; Bridson, Robert

    2012-01-01

    Fluid control methods often require surface velocities interpolated throughout the interior of a shape to use the velocity as a feedback force or as a boundary condition. Prior methods for interpolation in computer graphics — velocity extrapolation in the normal direction and potential flow — suffer from a common problem. They fail to capture the rotational components of the velocity field, although extrapolation in the normal direction does consider the tangential component. We address this problem by casting the interpolation as a steady state Stokes flow. This type of flow captures......

  9. Evaluation of the spline reconstruction technique for PET

    Energy Technology Data Exchange (ETDEWEB)

    Kastis, George A., E-mail: gkastis@academyofathens.gr; Kyriakopoulou, Dimitra [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Fernández, Yolanda [Centre d’Imatge Molecular Experimental (CIME), CETIR-ERESA, Barcelona 08950 (Spain); Hutton, Brian F. [Institute of Nuclear Medicine, University College London, London NW1 2BU (United Kingdom); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA (United Kingdom)

    2014-04-15

    Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of “custom made” cubic splines. By restricting reconstruction only within object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point-source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an {sup 18}F-FDG injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an {sup 18}F-FDG injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real

  10. Quadratic Interpolation and Linear Lifting Design

    Directory of Open Access Journals (Sweden)

    Joel Solé

    2007-03-01

    Full Text Available A quadratic image interpolation method is stated. The formulation is connected to the optimization of lifting steps. This relation triggers the exploration of several interpolation possibilities within the same context, which uses the theory of convex optimization to minimize quadratic functions with linear constraints. The methods consider possible knowledge available from a given application. A set of linear equality constraints that relate wavelet bases and coefficients with the underlying signal is introduced in the formulation. As a consequence, the formulation turns out to be adequate for the design of lifting steps. The resulting steps are related to the prediction minimizing the detail signal energy and to the update minimizing the l2-norm of the approximation signal gradient. Results are reported for the interpolation methods in terms of PSNR and also, coding results are given for the new update lifting steps.

  11. A theoretical introduction to numerical analysis

    CERN Document Server

    Ryaben'kii, Victor S

    2006-01-01

    PREFACE ACKNOWLEDGMENTS INTRODUCTION Discretization Conditioning Error On Methods of Computation INTERPOLATION OF FUNCTIONS. QUADRATURES ALGEBRAIC INTERPOLATION Existence and Uniqueness of Interpolating Polynomial Classical Piecewise Polynomial Interpolation Smooth Piecewise Polynomial Interpolation (Splines) Interpolation of Functions of Two Variables TRIGONOMETRIC INTERPOLATION Interpolation of Periodic Functions Interpolation of Functions on an Interval. Relation between Algebraic and Trigonometric Interpolation COMPUTATION OF DEFINITE INTEGRALS. QUADRATURES Trapezoidal Rule, Simpson's Formula, and the Like Quadrature Formulae with No Saturation. Gaussian Quadratures Improper Integrals. Combination of Numerical and Analytical Methods Multiple Integrals SYSTEMS OF SCALAR EQUATIONS SYSTEMS OF LINEAR ALGEBRAIC EQUATIONS: DIRECT METHODS Different Forms of Consistent Linear Systems Linear Spaces, Norms, and Operators Conditioning of Linear Systems Gaussian Elimination and Its Tri-Diag...

  12. The basis spline method and associated techniques

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.

    1989-01-01

    We outline the Basis Spline and Collocation methods for the solution of Partial Differential Equations. Particular attention is paid to the theory of errors, and the handling of non-self-adjoint problems which are generated by the collocation method. We discuss applications to Poisson's equation, the Dirac equation, and the calculation of bound and continuum states of atomic and nuclear systems. 12 refs., 6 figs

  13. Comparative Performance of Complex-Valued B-Spline and Polynomial Models Applied to Iterative Frequency-Domain Decision Feedback Equalization of Hammerstein Channels.

    Science.gov (United States)

    Chen, Sheng; Hong, Xia; Khalaf, Emad F; Alsaadi, Fuad E; Harris, Chris J

    2017-12-01

    The complex-valued (CV) B-spline neural network approach offers a highly effective means for identifying and inverting practical Hammerstein systems. Compared with its conventional CV polynomial-based counterpart, a CV B-spline neural network has superior performance in identifying and inverting CV Hammerstein systems, while imposing a similar complexity. This paper reviews the optimality of the CV B-spline neural network approach. Advantages of the B-spline neural network approach as compared with the polynomial-based modeling approach are extensively discussed, and the effectiveness of the CV neural network-based approach is demonstrated in a real-world application. More specifically, we evaluate the comparative performance of the CV B-spline and polynomial-based approaches for the nonlinear iterative frequency-domain decision feedback equalization (NIFDDFE) of single-carrier Hammerstein channels. Our results confirm the superior performance of the CV B-spline-based NIFDDFE over its CV polynomial-based counterpart.

  14. Spline-Interpolation Solution of One Elasticity Theory Problem

    CERN Document Server

    Shirakova, Elena A

    2011-01-01

    The book presents methods of approximate solution of the basic problem of elasticity for special types of solids. Engineers can apply the approximate methods (Finite Element Method, Boundary Element Method) to solve the problems but the application of these methods may not be correct for solids with the certain singularities or asymmetrical boundary conditions. The book is recommended for researchers and professionals working on elasticity modeling. It explains methods of solving elasticity problems for special solids. Approximate methods (Finite Element Method, Boundary Element Method) have b

  15. Differential constraints for bounded recursive identification with multivariate splines

    NARCIS (Netherlands)

    De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2011-01-01

    The ability to perform online model identification for nonlinear systems with unknown dynamics is essential to any adaptive model-based control system. In this paper, a new differential equality constrained recursive least squares estimator for multivariate simplex splines is presented that is able

  16. Constructing C1 Continuous Surface on Irregular Quad Meshes

    Institute of Scientific and Technical Information of China (English)

    HE Jun; GUO Qiang

    2013-01-01

    A new method is proposed for surface construction on irregular quad meshes as an extension of uniform B-spline surfaces. Given a number of control points, which form a regular or irregular quad mesh, a weight function is constructed for each control point. The weight function is defined on a local domain and is C1 continuous. The whole surface is then constructed by a weighted combination of all the control points. The property of the new method is that the surface is defined by piecewise C1 bi-cubic rational parametric polynomials within each quad face. It is an extension of uniform B-spline surfaces in the sense that its definition is an analogy of the B-spline surface, and it produces a uniform bi-cubic B-spline surface if the control mesh is a regular quad mesh. Examples produced by the new method are also included.

  17. A Bayesian-optimized spline representation of the electrocardiogram

    International Nuclear Information System (INIS)

    Guilak, F G; McNames, J

    2013-01-01

    We introduce an implementation of a novel spline framework for parametrically representing electrocardiogram (ECG) waveforms. This implementation enables a flexible means to study ECG structure in large databases. Our algorithm allows researchers to identify key points in the waveform and optimally locate them in long-term recordings with minimal manual effort, thereby permitting analysis of trends in the points themselves or in metrics derived from their locations. In the work described here we estimate the location of a number of commonly-used characteristic points of the ECG signal, defined as the onsets, peaks, and offsets of the P, QRS, T, and R′ waves. The algorithm applies Bayesian optimization to a linear spline representation of the ECG waveform. The location of the knots—which are the endpoints of the piecewise linear segments used in the spline representation of the signal—serve as the estimate of the waveform’s characteristic points. We obtained prior information of knot times, amplitudes, and curvature from a large manually-annotated training dataset and used the priors to optimize a Bayesian figure of merit based on estimated knot locations. In cases where morphologies vary or are subject to noise, the algorithm relies more heavily on the estimated priors for its estimate of knot locations. We compared optimized knot locations from our algorithm to two sets of manual annotations on a prospective test data set comprising 200 beats from 20 subjects not in the training set. Mean errors of characteristic point locations were less than four milliseconds, and standard deviations of errors compared favorably against reference values. This framework can easily be adapted to include additional points of interest in the ECG signal or for other biomedical detection problems on quasi-periodic signals. (paper)

  18. Trace interpolation by slant-stack migration

    International Nuclear Information System (INIS)

    Novotny, M.

    1990-01-01

    The slant-stack migration formula based on the Radon transform is studied with respect to the depth step Δz of wavefield extrapolation. It can be viewed as a generalized trace-interpolation procedure including wave extrapolation with an arbitrary step Δz. For Δz = 0 the formula yields the familiar plane-wave decomposition, while for Δz > 0 it provides a robust tool for migration transformation of spatially undersampled wavefields. Using the stationary phase method, it is shown that the slant-stack migration formula degenerates into the Rayleigh-Sommerfeld integral in the far-field approximation. Consequently, even a narrow slant-stack gather applied before the diffraction stack can significantly improve the representation of noisy data in the wavefield extrapolation process. The theory is applied to synthetic and field data to perform trace interpolation and dip reject filtration. The data examples presented prove that the Radon interpolator works well in the dip range, including waves with mutual stepouts smaller than half the dominant period

  19. B-spline tight frame based force matching method

    Science.gov (United States)

    Yang, Jianbin; Zhu, Guanhua; Tong, Dudu; Lu, Lanyuan; Shen, Zuowei

    2018-06-01

    In molecular dynamics simulations, compared with popular all-atom force field approaches, coarse-grained (CG) methods are frequently used for the rapid investigations of long time- and length-scale processes in many important biological and soft matter studies. The typical task in coarse-graining is to derive interaction force functions between different CG site types in terms of their distance, bond angle or dihedral angle. In this paper, an ℓ1-regularized least squares model is applied to form the force functions, which makes additional use of the B-spline wavelet frame transform in order to preserve the important features of force functions. The B-spline tight frames system has a simple explicit expression which is useful for representing our force functions. Moreover, the redundancy of the system offers more resilience to the effects of noise and is useful in the case of lossy data. Numerical results for molecular systems involving pairwise non-bonded, three and four-body bonded interactions are obtained to demonstrate the effectiveness of our approach.

  20. A MATLAB®-based program for 3D visualization of stratigraphic setting and subsidence evolution of sedimentary basins: example application to the Vienna Basin

    Science.gov (United States)

    Lee, Eun Young; Novotny, Johannes; Wagreich, Michael

    2015-04-01

    In recent years, 3D visualization of sedimentary basins has become increasingly popular. Stratigraphic and structural mapping is highly important to understand the internal setting of sedimentary basins, and subsequent subsidence analysis provides significant insights into basin evolution. This study focused on developing a simple and user-friendly program which allows geologists to analyze and model sedimentary basin data. The developed program is aimed at stratigraphic and subsidence modelling of sedimentary basins from wells or stratigraphic profile data. This program is mainly based on two numerical methods: surface interpolation and subsidence analysis. For surface visualization four different interpolation techniques (Linear, Natural, Cubic Spline, and Thin-Plate Spline) are provided in this program. The subsidence analysis consists of decompaction and backstripping techniques. The numerical methods are computed in MATLAB®, a multi-paradigm numerical computing environment used extensively in academic, research, and industrial fields. This program consists of five main processing steps: 1) setup (study area and stratigraphic units), 2) loading of well data, 3) stratigraphic modelling (depth distribution and isopach plots), 4) subsidence parameter input, and 5) subsidence modelling (subsided depth and subsidence rate plots). The graphical user interface intuitively guides users through all process stages and provides tools to analyse and export the results. Interpolation and subsidence results are cached to minimize redundant computations and improve the interactivity of the program. All 2D and 3D visualizations are created by using MATLAB plotting functions, which enables users to fine-tune the visualization results using the full range of available plot options in MATLAB. All functions of this program are illustrated with a case study of Miocene sediments in the Vienna Basin.
The basin is an ideal place to test this program, because sufficient data is
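    The program's surface interpolation step is MATLAB-based; as a rough stand-in, a thin-plate spline surface through scattered well depths can be fitted by solving the standard TPS linear system. All data below are synthetic assumptions, not Vienna Basin values:

```python
import numpy as np

def tps_fit(points, values):
    """Fit a 2-D thin-plate spline surface through scattered data."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0.0, d**2 * np.log(d), 0.0)  # r^2 log r kernel
    P = np.column_stack([np.ones(n), points])         # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    rhs = np.concatenate([values, np.zeros(3)])
    return np.linalg.solve(A, rhs)

def tps_eval(coef, points, q):
    """Evaluate the fitted surface at a single query point q."""
    n = len(points)
    d = np.linalg.norm(q[None, :] - points, axis=-1)
    k = np.where(d > 0.0, d**2 * np.log(d), 0.0)
    return k @ coef[:n] + coef[n] + q @ coef[n + 1:]

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(30, 2))             # well locations
depth = np.sin(3 * pts[:, 0]) + pts[:, 1]**2          # synthetic horizon depths
coef = tps_fit(pts, depth)
```

    The thin-plate spline interpolates the well data exactly while minimizing bending energy, which is why it is a common choice among the four techniques the program offers.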

  1. Radial basis function interpolation of unstructured, three-dimensional, volumetric particle tracking velocimetry data

    International Nuclear Information System (INIS)

    Casa, L D C; Krueger, P S

    2013-01-01

    Unstructured three-dimensional fluid velocity data were interpolated using Gaussian radial basis function (RBF) interpolation. Data were generated to imitate the spatial resolution and experimental uncertainty of a typical implementation of defocusing digital particle image velocimetry. The velocity field associated with a steadily rotating infinite plate was simulated to provide a bounded, fully three-dimensional analytical solution of the Navier–Stokes equations, allowing for robust analysis of the interpolation accuracy. The spatial resolution of the data (i.e. particle density) and the number of RBFs were varied in order to assess the requirements for accurate interpolation. Interpolation constraints, including boundary conditions and continuity, were included in the error metric used for the least-squares minimization that determines the interpolation parameters to explore methods for improving RBF interpolation results. Even spacing and logarithmic spacing of RBF locations were also investigated. Interpolation accuracy was assessed using the velocity field, divergence of the velocity field, and viscous torque on the rotating boundary. The results suggest that for the present implementation, RBF spacing of 0.28 times the boundary layer thickness is sufficient for accurate interpolation, though theoretical error analysis suggests that improved RBF positioning may yield more accurate results. All RBF interpolation results were compared to standard Gaussian weighting and Taylor expansion interpolation methods. Results showed that RBF interpolation improves interpolation results compared to the Taylor expansion method by 60% to 90% based on the average squared velocity error and provides comparable velocity results to Gaussian weighted interpolation in terms of velocity error. RMS accuracy of the flow field divergence was one to two orders of magnitude better for the RBF interpolation compared to the other two methods. RBF interpolation that was applied to
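    The core of Gaussian RBF interpolation as described here is a dense linear solve for the basis weights. A minimal sketch under assumed parameters (the record's constrained least-squares variant, boundary conditions, and RBF placement study are omitted):

```python
import numpy as np

def gaussian_rbf_fit(centers, values, eps):
    """Solve for Gaussian RBF weights that reproduce the data exactly."""
    d2 = np.sum((centers[:, None, :] - centers[None, :, :])**2, axis=-1)
    return np.linalg.solve(np.exp(-(eps * eps) * d2), values)

def gaussian_rbf_eval(x, centers, weights, eps):
    """Evaluate the RBF expansion at a single point x."""
    d2 = np.sum((x[None, :] - centers)**2, axis=-1)
    return np.exp(-(eps * eps) * d2) @ weights

# Scattered samples of one velocity component of a rotating-plate-like
# flow (u = -omega * y), standing in for PTV data; numbers illustrative.
rng = np.random.default_rng(2)
pts = rng.uniform(-1.0, 1.0, size=(40, 3))
u = -2.0 * pts[:, 1]                      # omega = 2, assumed
w = gaussian_rbf_fit(pts, u, eps=2.0)
```

    The shape parameter eps trades accuracy against conditioning of the dense system, which is one reason the record investigates RBF spacing relative to the boundary layer thickness.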

  2. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    Science.gov (United States)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of terrestrial laser scanner, the analysis strategies in engineering geodesy change from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error-procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of the statistical learning theory. Unlike the Akaike and the Bayesian Information Criteria this method doesn't use the number of parameters as complexity measure of the approximating functions but their Vapnik-Chervonenkis-dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.

  3. Stable isogeometric analysis of trimmed geometries

    Science.gov (United States)

    Marussig, Benjamin; Zechner, Jürgen; Beer, Gernot; Fries, Thomas-Peter

    2017-04-01

    We explore extended B-splines as a stable basis for isogeometric analysis with trimmed parameter spaces. The stabilization is accomplished by an appropriate substitution of B-splines that may lead to ill-conditioned system matrices. The construction for non-uniform knot vectors is presented. The properties of extended B-splines are examined in the context of interpolation, potential, and linear elasticity problems and excellent results are attained. The analysis is performed by an isogeometric boundary element formulation using collocation. It is argued that extended B-splines provide a flexible and simple stabilization scheme which ideally suits the isogeometric paradigm.

  4. Interpolation algorithm for asynchronous ADC-data

    Directory of Open Access Journals (Sweden)

    S. Bramburger

    2017-09-01

    Full Text Available This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT algorithm is extended to operate with a continuous data stream. An additional preprocessing of data with constant and linear sections and a weighted overlap of step-by-step into spectral domain transformed signals improve the reconstruction of the asycnhronous ADC signal. The interpolation method can be used if asynchronous ADC data is fed into synchronous digital signal processing.

  5. Input variable selection for interpolating high-resolution climate ...

    African Journals Online (AJOL)

    Although the primary input data of climate interpolations are usually meteorological data, other related (independent) variables are frequently incorporated in the interpolation process. One such variable is elevation, which is known to have a strong influence on climate. This research investigates the potential of 4 additional ...

  6. Extension Of Lagrange Interpolation

    Directory of Open Access Journals (Sweden)

    Mousa Makey Krady

    2015-01-01

    Full Text Available Abstract In this paper we present a generalization of Lagrange interpolation polynomials in higher dimensions by using Cramer's formula. The aim of this paper is to construct polynomials in space with error tending to zero.
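    For reference, classical one-dimensional Lagrange interpolation, which the record generalizes to higher dimensions, can be written in a few lines (a textbook sketch, not the paper's construction):

```python
import numpy as np

def lagrange_eval(x_nodes, y_nodes, x):
    """Evaluate the Lagrange interpolating polynomial at x.

    Builds the sum of y_i * L_i(x), where each cardinal basis polynomial
    L_i is 1 at node i and 0 at every other node.
    """
    total = 0.0
    for i, (xi, yi) in enumerate(zip(x_nodes, y_nodes)):
        others = np.delete(x_nodes, i)
        total += yi * np.prod((x - others) / (xi - others))
    return total

nodes = np.array([0.0, 1.0, 2.0])
vals = nodes**2                      # samples of f(x) = x^2
mid = lagrange_eval(nodes, vals, 1.5)
```

    Three nodes determine a quadratic exactly, so interpolating samples of x^2 reproduces the function everywhere.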

  7. Multivariate Epi-splines and Evolving Function Identification Problems

    Science.gov (United States)

    2015-04-15

    such extrinsic information as well as observed function and subgradient values often evolve in applications, we establish conditions under which the...previous study [30] dealt with compact intervals of IR. Splines are intimately tied to optimization problems through their variational theory pioneered...approximation. Motivated by applications in curve fitting, regression, probability density estimation, variogram computation, financial curve construction

  8. Splines under tension for gridding three-dimensional data

    International Nuclear Information System (INIS)

    Brand, H.R.; Frazer, J.W.

    1982-01-01

    By use of the splines-under-tension concept, a simple algorithm has been developed for the three-dimensional representation of nonuniformly spaced data. The representations provide useful information to the experimentalist when he is attempting to understand the results obtained in a self-adaptive experiment. The shortcomings of the algorithm are discussed as well as the advantages

  9. Interpolation and sampling in spaces of analytic functions

    CERN Document Server

    Seip, Kristian

    2004-01-01

    The book is about understanding the geometry of interpolating and sampling sequences in classical spaces of analytic functions. The subject can be viewed as arising from three classical topics: Nevanlinna-Pick interpolation, Carleson's interpolation theorem for H^\\infty, and the sampling theorem, also known as the Whittaker-Kotelnikov-Shannon theorem. The book aims at clarifying how certain basic properties of the space at hand are reflected in the geometry of interpolating and sampling sequences. Key words for the geometric descriptions are Carleson measures, Beurling densities, the Nyquist rate, and the Helson-Szegő condition. The book is based on six lectures given by the author at the University of Michigan. This is reflected in the exposition, which is a blend of informal explanations with technical details. The book is essentially self-contained. There is an underlying assumption that the reader has a basic knowledge of complex and functional analysis. Beyond that, the reader should have some familiari...

  10. Effects of early activator treatment in patients with class II malocclusion evaluated by thin-plate spline analysis.

    Science.gov (United States)

    Lux, C J; Rübel, J; Starke, J; Conradt, C; Stellzig, P A; Komposch, P G

    2001-04-01

    The aim of the present longitudinal cephalometric study was to evaluate the dentofacial shape changes induced by activator treatment between 9.5 and 11.5 years in male Class II patients. For a rigorous morphometric analysis, a thin-plate spline analysis was performed to assess and visualize dental and skeletal craniofacial changes. Twenty male patients with a skeletal Class II malrelationship and increased overjet who had been treated at the University of Heidelberg with a modified Andresen-Häupl-type activator were compared with a control group of 15 untreated male subjects of the Belfast Growth Study. The shape changes for each group were visualized on thin-plate splines, with one spline comprising all 13 landmarks to show all the craniofacial shape changes, including skeletal and dento-alveolar reactions, and a second spline based on 7 landmarks to visualize only the skeletal changes. In the activator group, the grid deformation of the total spline pointed to a strong activator-induced reduction of the overjet that was caused both by a tipping of the incisors and by a moderation of sagittal discrepancies, particularly a slight advancement of the mandible. In contrast, in the control group, only slight localized shape changes could be detected. In both the 7- and 13-landmark configurations, the shape changes between the groups differed significantly. The thin-plate spline analysis turned out to be a useful morphometric supplement to conventional cephalometrics because the complex patterns of shape change could be suggestively visualized.

  11. Study on Scattered Data Points Interpolation Method Based on Multi-line Structured Light

    International Nuclear Information System (INIS)

    Fan, J Y; Wang, F G; W, Y; Zhang, Y L

    2006-01-01

    Aiming at the range image obtained through multi-line structured light, a regional interpolation method is put forward in this paper. This method divides interpolation into two parts according to the memory format of the scattered data, one is interpolation of the data on the stripes, and the other is interpolation of data between the stripes. Trend interpolation method is applied to the data on the stripes, and Gauss wavelet interpolation method is applied to the data between the stripes. Experiments prove regional interpolation method feasible and practical, and it also promotes the speed and precision

  12. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    Science.gov (United States)

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods which use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both of these methods yield some artifacts on the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function; it is because of the number of pixels which are incorporated in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
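    The joint-probability estimate underlying mutual information can be sketched with a joint histogram (a common simple estimator; the PV-style kernel weighting discussed in the record is not reproduced here, and image data are synthetic):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information of two same-size images from a joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)      # marginal of image B
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(3)
a = rng.random((64, 64))
b = rng.random((64, 64))                     # independent of a
mi_aa = mutual_information(a, a)
mi_ab = mutual_information(a, b)
```

    An image shares maximal information with itself (here, the entropy of the binned intensities), while two independent images give a value near zero; registration drives a transform toward the high-dependence regime.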

  13. Quintic hyperbolic nonpolynomial spline and finite difference method for nonlinear second order differential equations and its application

    Directory of Open Access Journals (Sweden)

    Navnit Jha

    2014-04-01

    Full Text Available An efficient numerical method based on quintic nonpolynomial spline basis and high order finite difference approximations has been presented. The scheme deals with the space containing hyperbolic and polynomial functions as spline basis. With the help of spline functions we derive consistency conditions and high order discretizations of the differential equation with the significant first order derivative. The error analysis of the new method is discussed briefly. The new method is analyzed for its efficiency using the physical problems. The order and accuracy of the proposed method have been analyzed in terms of maximum errors and root mean square errors.

  14. Spectral interpolation - Zero fill or convolution. [image processing

    Science.gov (United States)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
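
    The zero-fill baseline that the paper's convolution method is compared against is simple to demonstrate with NumPy: appending zeros to the record before the FFT samples the same underlying spectrum on a denser grid. The signal parameters below are invented for illustration.

```python
import numpy as np

def zero_fill_spectrum(x, factor):
    """Zero fill: append zeros so the FFT samples the spectrum on a grid
    `factor` times denser. Resolution (main-lobe width) is unchanged."""
    padded = np.concatenate([x, np.zeros((factor - 1) * len(x))])
    return np.fft.rfft(padded)

fs, n = 100.0, 64
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 12.7 * t)        # tone lying between the coarse bins
coarse = np.abs(np.fft.rfft(x))
fine = np.abs(zero_fill_spectrum(x, 8))
f_coarse = np.fft.rfftfreq(n, 1 / fs)       # 1.5625 Hz bin spacing
f_fine = np.fft.rfftfreq(8 * n, 1 / fs)     # 0.195 Hz bin spacing
```

    The densely sampled spectrum locates the 12.7 Hz tone far more precisely than the original 64-point transform, whose nearest bin sits at 12.5 Hz.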

  15. An efficient coupled polynomial interpolation scheme to eliminate material-locking in the Euler-Bernoulli piezoelectric beam finite element

    Directory of Open Access Journals (Sweden)

    Litesh N. Sulbhewar

    Full Text Available The convergence characteristic of the conventional two-noded Euler-Bernoulli piezoelectric beam finite element depends on the configuration of the beam cross-section. The element shows slower convergence for the asymmetric material distribution in the beam cross-section due to 'material-locking' caused by extension-bending coupling. Hence, the use of conventional Euler-Bernoulli beam finite element to analyze piezoelectric beams which are generally made of the host layer with asymmetrically surface bonded piezoelectric layers/patches, leads to increased computational effort to yield converged results. Here, an efficient coupled polynomial interpolation scheme is proposed to improve the convergence of the Euler-Bernoulli piezoelectric beam finite elements, by eliminating ill-effects of material-locking. The equilibrium equations, derived using a variational formulation, are used to establish relationships between field variables. These relations are used to find a coupled quadratic polynomial for axial displacement, having contributions from an assumed cubic polynomial for transverse displacement and assumed linear polynomials for layerwise electric potentials. A set of coupled shape functions derived using these polynomials efficiently handles extension-bending and electromechanical couplings at the field interpolation level itself in a variationally consistent manner, without increasing the number of nodal degrees of freedom. The comparison of results obtained from numerical simulation of test problems shows that the convergence characteristic of the proposed element is insensitive to the material configuration of the beam cross-section.

  16. A parameterization of observer-based controllers: Bumpless transfer by covariance interpolation

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Komareji, Mohammad

    2009-01-01

    This paper presents an algorithm to interpolate between two observer-based controllers for a linear multivariable system such that the closed loop system remains stable throughout the interpolation. The method interpolates between the inverse Lyapunov functions for the two original state feedback...

  17. T-Spline Based Unifying Registration Procedure for Free-Form Surface Workpieces in Intelligent CMM

    Directory of Open Access Journals (Sweden)

    Zhenhua Han

    2017-10-01

    Full Text Available With the development of the modern manufacturing industry, the free-form surface is widely used in various fields, and the automatic detection of a free-form surface is an important function of future intelligent three-coordinate measuring machines (CMMs. To improve the intelligence of CMMs, a new visual system is designed based on the characteristics of CMMs. A unified model of the free-form surface is proposed based on T-splines. A discretization method of the T-spline surface formula model is proposed. Under this discretization, the position and orientation of the workpiece would be recognized by point cloud registration. A high accuracy evaluation method is proposed between the measured point cloud and the T-spline surface formula. The experimental results demonstrate that the proposed method has the potential to realize the automatic detection of different free-form surfaces and improve the intelligence of CMMs.

  18. Smoothing two-dimensional Malaysian mortality data using P-splines indexed by age and year

    Science.gov (United States)

    Kamaruddin, Halim Shukri; Ismail, Noriszura

    2014-06-01

    Nonparametric regression uses the data to derive the best coefficients of a model from a large class of flexible functions. Eilers and Marx (1996) introduced P-splines as a smoothing method for generalized linear models (GLMs), in which ordinary B-splines with a difference roughness penalty on the coefficients are applied to one-dimensional mortality data. Modeling and forecasting mortality rates is a problem of fundamental importance in insurance calculations, where the accuracy of models and forecasts is the industry's main concern. Here the original idea of P-splines is extended to two-dimensional mortality data, indexed by age of death and year of death, with the large data set supplied by the Department of Statistics Malaysia. This extension constructs the best fitted surface and provides sensible predictions of the underlying mortality rate in Malaysia.
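
    A minimal one-dimensional P-spline in the Eilers and Marx spirit can be sketched as follows: a rich B-spline basis combined with a difference penalty on the coefficients. This is a generic illustration, not the two-dimensional mortality model of the paper; all parameter values are assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_basis=20, degree=3, lam=1.0, diff_order=2):
    """One-dimensional P-spline in the spirit of Eilers and Marx (1996):
    a rich B-spline basis with a difference penalty on the coefficients.
    All parameter values here are illustrative defaults."""
    xmin, xmax = x.min(), x.max()
    inner = np.linspace(xmin, xmax, n_basis - degree + 1)
    step = inner[1] - inner[0]
    knots = np.concatenate([xmin - step * np.arange(degree, 0, -1),
                            inner,
                            xmax + step * np.arange(1, degree + 1)])
    # Design matrix: each column is one B-spline basis function.
    B = np.column_stack([BSpline(knots, np.eye(n_basis)[j], degree)(x)
                         for j in range(n_basis)])
    D = np.diff(np.eye(n_basis), diff_order, axis=0)  # difference operator
    coef = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
    return B @ coef
```

    The penalty weight `lam` trades fidelity against smoothness; the penalized fit recovers a noisy smooth trend far better than the raw observations.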

  19. Marginal longitudinal semiparametric regression via penalized splines

    KAUST Repository

    Al Kadiri, M.

    2010-08-01

    We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been proposed, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.

  20. Marginal longitudinal semiparametric regression via penalized splines

    KAUST Repository

    Al Kadiri, M.; Carroll, R.J.; Wand, M.P.

    2010-01-01

    We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been proposed, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.

  1. Application of multivariate splines to discrete mathematics

    OpenAIRE

    Xu, Zhiqiang

    2005-01-01

    Using methods developed in multivariate splines, we present an explicit formula for discrete truncated powers, which are defined as the number of non-negative integer solutions of linear Diophantine equations. We further use the formula to study some classical problems in discrete mathematics as follows. First, we extend the partition function of integers in number theory. Second, we exploit the relation between the relative volume of convex polytopes and multivariate truncated powers and giv...

  2. On using smoothing spline and residual correction to fuse rain gauge observations and remote sensing data

    Science.gov (United States)

    Huang, Chengcheng; Zheng, Xiaogu; Tait, Andrew; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Li, Tao; Wang, Zhonglei

    2014-01-01

    A partial thin-plate smoothing spline model is used to construct the trend surface. Correction of the spline-estimated trend surface is often necessary in practice. The Cressman weight is modified and applied in residual correction. The modified Cressman weight performs better than the original Cressman weight. A method for estimating the error covariance matrix of the gridded field is provided.

  3. The smoothing and fast Fourier transformation of experimental X-ray and neutron data from amorphous materials

    International Nuclear Information System (INIS)

    Dixon, M.; Wright, A.C.; Hutchinson, P.

    1977-01-01

    The application of fast Fourier transformation techniques to the analysis of experimental X-ray and neutron diffraction patterns from amorphous materials is discussed and compared with conventional techniques using Filon's quadrature. The fast Fourier transform package described also includes cubic spline smoothing and has been extensively tested, using model data to which statistical errors have been added by means of a pseudo-random number generator with Gaussian shaping. Neither cubic spline nor hand smoothing has much effect on the resulting transform since the noise removed is of too high a frequency. (Auth.)
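
    The effect described, that spline smoothing removes mostly frequencies above those carried by the transform, can be illustrated with SciPy's smoothing spline (`splrep` with a positive smoothing factor `s`). The signal shape and noise level here are invented for illustration.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Synthetic "diffraction-like" curve with additive Gaussian noise
# (signal shape and noise level are invented for illustration).
rng = np.random.default_rng(1)
q = np.linspace(0.0, 10.0, 512)
clean = np.sinc(q - 5.0)
noisy = clean + rng.normal(0.0, 0.02, q.size)

# Smoothing cubic spline: s sets the permitted residual sum of squares,
# chosen here to match the expected total noise energy.
tck = splrep(q, noisy, k=3, s=q.size * 0.02 ** 2)
smoothed = splev(q, tck)

spec_noisy = np.abs(np.fft.rfft(noisy))
spec_smooth = np.abs(np.fft.rfft(smoothed))
# Low-frequency bins (the structural information) are nearly untouched,
# while the high-frequency noise power is strongly reduced.
```

    Comparing the two magnitude spectra shows the spline acting as a low-pass filter: the transform of the smoothed curve agrees with the noisy one at low frequencies, which is why smoothing has little effect on the physically meaningful part of the transform.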

  4. Model-independent partial wave analysis using a massively-parallel fitting framework

    Science.gov (United States)

    Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.

    2017-10-01

    The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D⁺ → h⁺h⁺h⁻. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h⁺h⁻) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
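
    The spline-anchored S-wave parameterization can be sketched outside GooFit: magnitude and phase are fixed at a few m²(h⁺h⁻) control points, and cubic splines interpolate between them. The control-point values below are invented, purely to illustrate the construction.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical control points in m^2(h+h-); the magnitudes and phases
# below are invented purely to illustrate the construction.
m2_knots = np.array([0.1, 0.4, 0.9, 1.6, 2.5])
mag_knots = np.array([1.0, 1.8, 1.2, 0.7, 0.3])
phase_knots = np.array([0.0, 0.9, 1.7, 2.2, 2.6])  # radians

mag_spline = CubicSpline(m2_knots, mag_knots)
phase_spline = CubicSpline(m2_knots, phase_knots)

def s_wave(m2):
    """Model-independent S-wave amplitude between the anchor points."""
    return mag_spline(m2) * np.exp(1j * phase_spline(m2))
```

    At the anchor points the interpolated amplitude reproduces the fitted magnitude and phase exactly; between them the spline supplies a smooth, model-independent shape.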

  5. Work Project Report by C. B. Jepsen

    CERN Document Server

    Jepsen, C B

    2014-01-01

    A series of measurements and fine-tunings has been carried out, and a new fit routine developed, for the hydrogen beam of ASACUSA in preparation for the antihydrogen experiment. The B-field amplitude of the microwave cavity of the hydrogen beamline has been maximized, and the cavity characterized with careful measurements at various frequencies and amplitudes. The new fit routine performs cubic spline interpolation of simulated data in order to generate fit functions. Since the simulated data are numerical solutions of the theoretical equations that describe the system, such fit functions are better suited than the previously used ones. Using the settings found to be optimal, a set of measurements of the Rabi oscillations of hydrogen atoms has been performed which, when fitted with the new routine, yields a value of (1 420 405 756.9 ± 9.1) Hz for the hyperfine transition frequency.

  6. New families of interpolating type IIB backgrounds

    Science.gov (United States)

    Minasian, Ruben; Petrini, Michela; Zaffaroni, Alberto

    2010-04-01

    We construct new families of interpolating two-parameter solutions of type IIB supergravity. These correspond to D3-D5 systems on non-compact six-dimensional manifolds which are T² fibrations over Eguchi-Hanson and multi-center Taub-NUT spaces, respectively. One end of the interpolation corresponds to a solution with only D5 branes and vanishing NS three-form flux. A topology changing transition occurs at the other end, where the internal space becomes a direct product of the four-dimensional surface and the two-torus and the complexified NS-RR three-form flux becomes imaginary self-dual. Depending on the choice of the connections on the torus fibre, the interpolating family has either N = 2 or N = 1 supersymmetry. In the N = 2 case it can be shown that the solutions are regular.

  7. Quadratic polynomial interpolation on triangular domain

    Science.gov (United States)

    Li, Ying; Zhang, Congcong; Yu, Qian

    2018-04-01

    In the simulation of natural terrain, sample points are not always mutually consistent in continuity, and traditional interpolation methods often fail to reflect faithfully the shape information carried by the data points. A new method for constructing a polynomial interpolation surface on a triangular domain is therefore proposed. First, the spatial scattered data points are projected onto a plane and triangulated. Second, a C1 continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to be as close as possible to the linear interpolant. Finally, the unknown quantities are obtained by minimizing the objective functions, with the boundary points treated specially. The resulting surfaces preserve as many properties of the data points as possible while satisfying given accuracy and continuity requirements, without becoming overly convex. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells, and the like. The resulting surface is shown in experiments.
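
    For contrast with the paper's C1 construction, the standard C0 quadratic (six-node Lagrange) patch on a single triangle looks like this in barycentric coordinates; it reproduces any quadratic polynomial exactly. Function and node names are illustrative, not from the paper.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    T = np.column_stack([b - a, c - a])
    l2, l3 = np.linalg.solve(T, p - a)
    return 1.0 - l2 - l3, l2, l3

def quadratic_triangle_interp(p, verts, values):
    """Quadratic (P2 Lagrange) interpolation on a triangle: `values` holds
    function values at the 3 vertices followed by the 3 edge midpoints
    (mid ab, mid bc, mid ca). This is the standard C0 quadratic patch;
    the paper's C1 scheme adds smoothness constraints on top."""
    l1, l2, l3 = barycentric(p, *verts)
    basis = np.array([
        l1 * (2 * l1 - 1), l2 * (2 * l2 - 1), l3 * (2 * l3 - 1),
        4 * l1 * l2, 4 * l2 * l3, 4 * l3 * l1,
    ])
    return basis @ values

# The patch reproduces a quadratic exactly, e.g. f(x, y) = x^2 + y.
f = lambda q: q[0] ** 2 + q[1]
a, b, c = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
nodes = [a, b, c, (a + b) / 2, (b + c) / 2, (c + a) / 2]
vals = np.array([f(n) for n in nodes])
p = np.array([0.3, 0.2])
```

    Evaluating the patch at any interior point returns the exact value of the quadratic, which is the reproduction property the paper's construction builds upon.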

  8. Image Interpolation with Geometric Contour Stencils

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    Full Text Available We consider the image interpolation problem where, given uniformly-sampled pixels v_{m,n} and a point spread function h, the goal is to find a function u(x,y) satisfying v_{m,n} = (h*u)(m,n) for all m,n in Z. This article improves upon the IPOL article Image Interpolation with Contour Stencils. In the previous work, contour stencils are used to estimate the image contours locally as short line segments. This article begins with a continuous formulation of total variation integrated over a collection of curves and defines contour stencils as a consistent discretization. This discretization is more reliable than the previous approach and can effectively distinguish contours that are locally shaped like lines, curves, corners, and circles. These improved contour stencils sense more of the geometry in the image. Interpolation is performed using an extension of the method described in the previous article. Using the improved contour stencils, there is an increase in image quality while maintaining similar computational efficiency.

  9. Single-Image Super-Resolution Based on Rational Fractal Interpolation.

    Science.gov (United States)

    Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming

    2018-08-01

    This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.

  10. Study on signal processing in Eddy current testing for defects in spline gear

    International Nuclear Information System (INIS)

    Lee, Jae Ho; Park, Tae Sug; Park, Ik Keun

    2016-01-01

    Eddy current testing (ECT) is commonly applied for the inspection of automated production lines of metallic products, because it has a high inspection speed and a reasonable price. When ECT is applied to the inspection of a metallic object with an uneven target surface, such as the spline gear of a spline shaft, it is difficult to distinguish the signal generated by a defect from the original signal obtained from the sensor, because of the relatively large surface signals having similar frequency distributions. To facilitate the detection of defect signals from the spline gear, the implementation of high-order filters is essential, so that the fault signals can be distinguished from the surrounding noise and, simultaneously, the pass-band of the filter can be adjusted according to the status of each production line and the object to be inspected. We examine the infinite impulse response (IIR) filters available for implementing an advanced filter for ECT, and attempt to detect the flaw signals through optimization of the system design parameters at the system level.

  11. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    Directory of Open Access Journals (Sweden)

    Eduardo Garza-Gisholt

    Full Text Available Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of stereological approach to count neuronal distribution, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. 
The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps but the smoothing parameters used may affect the outcome.

  12. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    Science.gov (United States)

    Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P

    2014-01-01

    Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of stereological approach to count neuronal distribution, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps but the smoothing parameters used may affect the outcome.

  13. A thin-plate spline analysis of the face and tongue in obstructive sleep apnea patients.

    Science.gov (United States)

    Pae, E K; Lowe, A A; Fleetham, J A

    1997-12-01

    The shape characteristics of the face and tongue in obstructive sleep apnea (OSA) patients were investigated using thin-plate (TP) splines. A relatively new analytic tool, the TP spline method, provides a means of size normalization and image analysis. When shape is one's main concern, various sizes of a biologic structure may be a source of statistical noise. More seriously, the strong size effect could mask underlying, actual attributes of the disease. A set of size-normalized data in the form of coordinates was generated from cephalograms of 80 male subjects. The TP spline method visualized the differences in the shape of the face and tongue between OSA patients and nonapneic subjects and those between the upright and supine body positions. In accordance with OSA severity, the hyoid bone and the submental region positioned inferiorly and the fourth vertebra relocated posteriorly with respect to the mandible. This caused a fanlike configuration of the lower part of the face and neck in the sagittal plane in both upright and supine body positions. TP splines revealed tongue deformations caused by a body position change. Overall, the new morphometric tool adopted here was found to be viable in the analysis of morphologic changes.

  14. Cubical sets as a classifying topos

    DEFF Research Database (Denmark)

    Spitters, Bas

    Coquand’s cubical set model for homotopy type theory provides the basis for a computational interpretation of the univalence axiom and some higher inductive types, as implemented in the cubical proof assistant. We show that the underlying cube category is the opposite of the Lawvere theory of De Morgan algebras. The topos of cubical sets itself classifies the theory of ‘free De Morgan algebras’. This provides us with a topos with an internal ‘interval’. Using this interval we construct a model of type theory following van den Berg and Garner. We are currently investigating the precise relation...

  15. Generalized Vaidya spacetime for cubic gravity

    Science.gov (United States)

    Ruan, Shan-Ming

    2016-03-01

    We present a kind of generalized Vaidya solution of a new cubic gravity in five dimensions whose field equations in spherically symmetric spacetime are always second order like the Lovelock gravity. We also study the thermodynamics of its spherically symmetric apparent horizon and get its entropy expression and generalized Misner-Sharp energy. Finally, we present the first law and second law hold in this gravity. Although all the results are analogous to those in Lovelock gravity, we in fact introduce the contribution of a new cubic term in five dimensions where the cubic Lovelock term is just zero.

  16. Application of ordinary kriging for interpolation of micro-structured technical surfaces

    International Nuclear Information System (INIS)

    Raid, Indek; Kusnezowa, Tatjana; Seewig, Jörg

    2013-01-01

    Kriging is an interpolation technique used in geostatistics. In this paper we present kriging applied in the field of three-dimensional optical surface metrology. Technical surfaces are not always optically cooperative, meaning that measurements of technical surfaces contain invalid data points because of different effects. These data points need to be interpolated to obtain a complete areal data set for further processing. We present an elementary type of kriging, known as ordinary kriging, and apply it to interpolate measurements of different technical surfaces containing different kinds of realistic defects. The result of the interpolation with kriging is compared to six common interpolation techniques: nearest neighbour, natural neighbour, inverse distance to a power, triangulation with linear interpolation, modified Shepard's method and radial basis function. In order to quantify the results of different interpolations, the topographies are compared to defect-free reference topographies. Kriging is derived from a stochastic model that provides an unbiased, linear estimation with minimized error variance. The estimation with kriging is based on a preceding statistical analysis of the spatial structure of the surface. This comprises the choice and adaptation of specific models of spatial continuity. In contrast to common methods, kriging furthermore considers specific anisotropy in the data and adapts the interpolation accordingly. The gained benefit requires some additional effort in preparation and makes the overall estimation more time-consuming than common methods. However, the adaptation to the data makes this method very flexible and accurate. (paper)
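
    The core of ordinary kriging, a covariance-weighted linear estimator with an unbiasedness constraint enforced through a Lagrange multiplier, can be sketched in a few lines. This toy version fixes an isotropic Gaussian covariance and omits the variogram analysis and anisotropy handling that the paper emphasises; all parameter values are assumptions.

```python
import numpy as np

def ordinary_kriging(xy, z, grid, sill=1.0, rng_=2.0, nugget=1e-10):
    """Minimal ordinary kriging with an isotropic Gaussian covariance
    C(h) = sill * exp(-(h / range)^2). A tiny nugget stabilizes the solve."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Ordinary kriging system with a Lagrange-multiplier row and column
    # enforcing that the weights sum to one (unbiasedness).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = sill * np.exp(-(d / rng_) ** 2) + nugget * np.eye(n)
    A[n, n] = 0.0
    preds = np.empty(len(grid))
    for i, g in enumerate(np.asarray(grid)):
        b = np.ones(n + 1)
        b[:n] = sill * np.exp(-(np.linalg.norm(xy - g, axis=1) / rng_) ** 2)
        w = np.linalg.solve(A, b)
        preds[i] = w[:n] @ z
    return preds
```

    With a negligible nugget, ordinary kriging is an exact interpolator: predicting back at the measurement sites returns the measured values.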

  17. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

    2015-01-01

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non...... to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super...... interpolation increases the estimation precision....

  18. Wideband DOA Estimation through Projection Matrix Interpolation

    OpenAIRE

    Selva, J.

    2017-01-01

    This paper presents a method to reduce the complexity of the deterministic maximum likelihood (DML) estimator in the wideband direction-of-arrival (WDOA) problem, which is based on interpolating the array projection matrix in the temporal frequency variable. It is shown that an accurate interpolator like Chebyshev's is able to produce DML cost functions comprising just a few narrowband-like summands. Actually, the number of such summands is far smaller (roughly by a factor of ten in the numerical ...

  19. Efectivity of Additive Spline for Partial Least Square Method in Regression Model Estimation

    Directory of Open Access Journals (Sweden)

    Ahmad Bilfarsah

    2005-04-01

    Full Text Available The Additive Spline Partial Least Squares (ASPLS) method is a generalization of the Partial Least Squares (PLS) method. ASPLS can accommodate nonlinearity and multicollinearity among the predictor variables. In principle, the ASPLS approach is characterized by two ideas: the first is to use parametric transformations of the predictors by spline functions; the second is to make the ASPLS components mutually uncorrelated, preserving the properties of the linear PLS components. The performance of ASPLS compared with other PLS methods is illustrated with a fisheries economics application, specifically tuna fish production.

  20. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    Science.gov (United States)

    Chen, Hao; Yu, Haizhong

    2014-04-01

    Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application in the field of image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. Following the single instruction multiple threads (SIMT) execution model of CUDA, various optimization measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions, together with a data-space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved substantial acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU calculation. The present method is of considerable reference value in the application field of image interpolation.
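
    A plain CPU reference for Gaussian RBF interpolation, the computation that the paper moves onto CUDA, is compact: solve a dense linear system at the sample points, then evaluate the weighted kernel sum at the query points. This is a generic sketch, not the paper's code; the shape parameter `eps` is an assumed illustration value.

```python
import numpy as np

def grbf_interpolate(centers, values, queries, eps=3.0):
    """Gaussian radial basis function interpolation (CPU reference for
    the scheme the paper accelerates with CUDA). Solves Phi w = values,
    then evaluates sum_j w_j * exp(-(eps * r_j)^2) at each query."""
    def phi(r):
        return np.exp(-(eps * r) ** 2)
    # Dense interpolation matrix over the sample points.
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    w = np.linalg.solve(phi(d), values)
    # Kernel sums at the query points.
    dq = np.linalg.norm(queries[:, None, :] - centers[None, :, :], axis=-1)
    return phi(dq) @ w
```

    As with any RBF interpolant, evaluating back at the sample points reproduces the sample values; the CUDA version in the paper parallelizes exactly these matrix and kernel-sum steps.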

  1. Data assimilation using Bayesian filters and B-spline geological models

    KAUST Repository

    Duan, Lian

    2011-04-01

    This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporation of level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation is based on some assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type to the estimation of continuous B-spline control points. As filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.

  2. Data assimilation using Bayesian filters and B-spline geological models

    International Nuclear Information System (INIS)

    Duan Lian; Farmer, Chris; Hoteit, Ibrahim; Luo Xiaodong; Moroz, Irene

    2011-01-01

    This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient-based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding with real-world reservoir problems and insufficient for reservoir management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporation of level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation is based on some assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type to the estimation of continuous B-spline control points. As the filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.

  3. Interpolation in Spaces of Functions

    Directory of Open Access Journals (Sweden)

    K. Mosaleheh

    2006-03-01

    Full Text Available In this paper we consider interpolation by certain functions, such as trigonometric and rational functions, for a finite-dimensional linear space X. We then extend this to infinite-dimensional linear spaces.

  4. PEMODELAN REGRESI SPLINE (Studi Kasus: Herpindo Jaya Cabang Ngaliyan

    Directory of Open Access Journals (Sweden)

    I MADE BUDIANTARA PUTRA

    2015-06-01

    Full Text Available Regression analysis is a method of data analysis for describing the relationship between response variables and predictor variables. There are two approaches to estimating the regression function: parametric and nonparametric. The parametric approach is used when the relationship between the predictor and response variables, or the shape of the regression curve, is known. The nonparametric approach is used when the form of the relationship between the response and predictor variables is unknown, or when there is no information about the form of the regression function. The aims of this study are to determine the best nonparametric spline regression model, with optimal knot points, for data on product quality, price, and advertising in relation to purchasing decisions for Yamaha motorcycles, and to compare it with multiple linear regression based on the coefficient of determination (R2) and mean square error (MSE). The optimal knots are defined by two knot points. The analysis shows that, for these data, multiple linear regression is better than spline regression.
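    Spline regression of this kind can be sketched with a truncated-power basis and ordinary least squares. A minimal single-knot linear-spline example (the knot location and data are illustrative, not the study's motorcycle data):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def fit_linear_spline(xs, ys, knot):
    """Least-squares fit of y = b0 + b1*x + b2*max(x - knot, 0)."""
    X = [[1.0, x, max(x - knot, 0.0)] for x in xs]
    # Normal equations: X^T X beta = X^T y
    XtX = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(3)] for i in range(3)]
    Xty = [sum(X[r][i] * ys[r] for r in range(len(X))) for i in range(3)]
    return solve(XtX, Xty)

# Data that is exactly piecewise linear with a slope change at x = 2.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 2.0, 4.0, 6.0]  # slope 1 before the knot, slope 2 after
beta = fit_linear_spline(xs, ys, knot=2.0)
```

    Knot selection in practice repeats such a fit over candidate knot positions and keeps the one minimizing MSE (or a generalized cross-validation score).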

  5. Abstract interpolation in vector-valued de Branges-Rovnyak spaces

    NARCIS (Netherlands)

    Ball, J.A.; Bolotnikov, V.; ter Horst, S.

    2011-01-01

    Following ideas from the Abstract Interpolation Problem of Katsnelson et al. (Operators in spaces of functions and problems in function theory, vol 146, pp 83–96, Naukova Dumka, Kiev, 1987) for Schur class functions, we study a general metric constrained interpolation problem for functions from a

  6. Validation of DWI pre-processing procedures for reliable differentiation between human brain gliomas.

    Science.gov (United States)

    Vellmer, Sebastian; Tonoyan, Aram S; Suter, Dieter; Pronin, Igor N; Maximov, Ivan I

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) is a powerful tool in clinical applications, in particular, in oncology screening. dMRI demonstrated its benefit and efficiency in the localisation and detection of different types of human brain tumours. Clinical dMRI data suffer from multiple artefacts such as motion and eddy-current distortions, contamination by noise, outliers etc. In order to increase the image quality of the derived diffusion scalar metrics and the accuracy of the subsequent data analysis, various pre-processing approaches are actively developed and used. In the present work we assess the effect of different pre-processing procedures such as a noise correction, different smoothing algorithms and spatial interpolation of raw diffusion data, with respect to the accuracy of brain glioma differentiation. As a set of sensitive biomarkers of the glioma malignancy grades we chose the derived scalar metrics from diffusion and kurtosis tensor imaging as well as the neurite orientation dispersion and density imaging (NODDI) biophysical model. Our results show that the application of noise correction, anisotropic diffusion filtering, and cubic-order spline interpolation resulted in the highest sensitivity and specificity for glioma malignancy grading. Thus, these pre-processing steps are recommended for the statistical analysis in brain tumour studies. Copyright © 2017. Published by Elsevier GmbH.
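    The cubic-order spline interpolation recommended above is, in 1D, the classic natural cubic spline obtained from a tridiagonal system for the interior second derivatives. A minimal sketch (the sample data are illustrative, not diffusion data):

```python
def natural_cubic_spline(xs, ys):
    """Natural cubic spline: second derivatives at both ends set to zero.

    Solves the tridiagonal system for the interior second derivatives M_i
    (Thomas algorithm) and returns a piecewise-cubic evaluator.
    """
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    a = [0.0] * n; b = [0.0] * n; c = [0.0] * n; d = [0.0] * n
    for i in range(1, n - 1):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    # Forward elimination over the interior rows, then back substitution.
    for i in range(2, n - 1):
        f = a[i] / b[i - 1]
        b[i] -= f * c[i - 1]
        d[i] -= f * d[i - 1]
    M = [0.0] * n  # M[0] = M[n-1] = 0 (natural boundary conditions)
    for i in range(n - 2, 0, -1):
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def evaluate(x):
        i = n - 2
        for j in range(n - 1):
            if x <= xs[j + 1]:
                i = j
                break
        hi = h[i]
        t = x - xs[i]
        return (M[i] * (xs[i + 1] - x) ** 3 / (6 * hi)
                + M[i + 1] * t ** 3 / (6 * hi)
                + (ys[i] / hi - M[i] * hi / 6) * (xs[i + 1] - x)
                + (ys[i + 1] / hi - M[i + 1] * hi / 6) * t)
    return evaluate

spline = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, 1.0])
```

    Spatial interpolation of a 3D volume applies the same 1D machinery separably along each axis.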

  7. Fast image interpolation for motion estimation using graphics hardware

    Science.gov (United States)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
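    Sub-pixel access to image data of the kind needed here is commonly done with bilinear interpolation; a minimal CPU sketch (not the paper's GPU implementation):

```python
def bilinear(img, x, y):
    """Sample image `img` (list of rows) at fractional position (x, y).

    The four neighbouring pixels are blended by their fractional distances;
    this is what a sub-pixel block-matching search evaluates at each offset.
    """
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

img = [[0, 10], [20, 30]]
```

    On graphics hardware this exact operation maps onto the texture unit's built-in bilinear filtering, which is why GPUs accelerate it so effectively.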

  8. Data interpolation for vibration diagnostics using two-variable correlations

    International Nuclear Information System (INIS)

    Branagan, L.

    1991-01-01

    This paper reports that effective machinery vibration diagnostics require a clear differentiation between normal vibration changes caused by plant process conditions and those caused by degradation. The normal relationship between vibration and a process parameter can be quantified by developing the appropriate correlation. The differences in data acquisition requirements between dynamic signals (vibration spectra) and static signals (pressure, temperature, etc.) result in asynchronous data acquisition; the development of any correlation must then be based on some form of interpolated data. This interpolation can reproduce or distort the original measured quantity, depending on the characteristics of the data and the interpolation technique. Relevant data characteristics, such as acquisition times, collection cycle times, compression method, storage rate, and the slew rate of the measured variable, depend both on the data handling and on the measured variable. Linear and staircase interpolation, along with the use of clustering and filtering, provide the necessary options for developing accurate correlations. The examples illustrate the appropriate application of these options.
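    The linear and staircase interpolation options discussed above can be sketched as follows (the timestamps and values are illustrative):

```python
import bisect

def staircase(ts, vs, t):
    """Zero-order hold: return the latest sample at or before time t."""
    i = bisect.bisect_right(ts, t) - 1
    return vs[max(i, 0)]

def linear(ts, vs, t):
    """Linear interpolation between the two samples bracketing time t."""
    i = bisect.bisect_right(ts, t) - 1
    if i < 0:
        return vs[0]
    if i >= len(ts) - 1:
        return vs[-1]
    f = (t - ts[i]) / (ts[i + 1] - ts[i])
    return vs[i] + f * (vs[i + 1] - vs[i])

# Asynchronously sampled static signal (e.g. a temperature channel).
ts = [0.0, 10.0, 20.0]
vs = [100.0, 110.0, 90.0]
```

    Staircase interpolation is appropriate for step-like variables (valve states, setpoints), while linear interpolation suits slowly varying continuous signals; using the wrong one distorts the correlation, as the paper notes.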

  9. Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.

    Science.gov (United States)

    Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong

    2017-11-01

    Medical image three-dimensional (3D) interpolation is an important means of improving image quality in 3D reconstruction, and the time-frequency domain transform is an efficient tool in image processing. In this article, several time-frequency domain transform methods are applied and compared for 3D interpolation, and a Sobel edge detection and 3D matching interpolation method based on the wavelet transform is proposed. Our algorithm combines the wavelet transform, traditional matching interpolation methods, and Sobel edge detection, exploiting the characteristics of the wavelet transform and the Sobel operator: the sub-images of the wavelet decomposition are processed separately, and Sobel edge detection 3D matching interpolation is applied to the low-frequency sub-images while the high-frequency content is kept undistorted. The target interpolated image is then obtained by wavelet reconstruction. We perform 3D interpolation on real computed tomography (CT) images; compared with other interpolation methods, the proposed method is verified to be effective and superior.
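    The Sobel edge detection step used in the proposed method can be sketched as follows (the test image is an illustrative vertical step edge):

```python
def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel kernels (interior pixels only)."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Vertical step edge: strong horizontal gradient along the boundary columns.
img = [[0, 0, 1, 1] for _ in range(4)]
edges = sobel_magnitude(img)
```

    In the proposed scheme, such edge maps from adjacent low-frequency slices guide the matching so that interpolated pixels are taken along corresponding edges rather than straight through them.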

  10. Spinning solitons in cubic-quintic nonlinear media

    Indian Academy of Sciences (India)

    Spinning solitons in cubic-quintic nonlinear media ... features of families of bright vortex solitons (doughnuts, or 'spinning' solitons) in both conservative and dissipative cubic-quintic nonlinear media. ... Pramana – Journal of Physics.

  11. Counterexamples to the B-spline Conjecture for Gabor Frames

    DEFF Research Database (Denmark)

    Lemvig, Jakob; Nielsen, Kamilla Haahr

    2016-01-01

    The frame set conjecture for B-splines Bn, n≥2, states that the frame set is the maximal set that avoids the known obstructions. We show that any hyperbola of the form ab=r, where r is a rational number smaller than one and a and b denote the sampling and modulation rates, respectively, has infin...

  12. Fingerprint Matching by Thin-plate Spline Modelling of Elastic Deformations

    NARCIS (Netherlands)

    Bazen, A.M.; Gerez, Sabih H.

    2003-01-01

    This paper presents a novel minutiae matching method that describes elastic distortions in fingerprints by means of a thin-plate spline model, which is estimated using a local and a global matching stage. After registration of the fingerprints according to the estimated model, the number of matching

  13. Motion characteristic between die and workpiece in spline rolling process with round dies

    Directory of Open Access Journals (Sweden)

    Da-Wei Zhang

    2016-06-01

    Full Text Available In the spline rolling process with round dies, additional kinematic compensation is an essential mechanism for improving the division of teeth and pitch accuracy as well as surface quality. The motion characteristic between the die and workpiece under varied center distance in the spline rolling process was investigated. Mathematical models of the instantaneous center of rotation, transmission ratio, and centrodes in the rolling process were established. The models were used to analyze the rolling process of an involute spline with circular dedendum, and the results indicated that (1) with the reduction in the center distance, the instantaneous center moves toward the workpiece, and the transmission ratio first increases and then decreases; (2) the variations in the instantaneous center and transmission ratio are discontinuous, showing a jump when the involute flank begins to form; (3) the change in transmission ratio during the forming stage of a workpiece with an involute flank is negligible; and (4) the centrode of the workpiece is an Archimedean spiral whose polar radius decreases, while the centrode of the rolling die is close to an Archimedean spiral once the workpiece has an involute flank.
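    The instantaneous-center relation underlying these models follows from rolling contact: the instantaneous center lies on the line of centers and divides it in inverse proportion to the angular velocities. A minimal sketch (the constant velocity ratio and center distance are illustrative assumptions, not the paper's die geometry, where the ratio varies with the center distance):

```python
def instantaneous_center(a, ratio):
    """Split the center distance `a` at the instantaneous center of rotation.

    For rolling contact without slip, r_work / r_die = omega_die / omega_work
    = ratio, with r_work + r_die = a; returns (r_work, r_die).
    """
    r_work = a * ratio / (1.0 + ratio)
    r_die = a - r_work
    return r_work, r_die

rw, rd = instantaneous_center(a=100.0, ratio=0.25)
```

    In the actual process the transmission ratio `ratio` is itself a function of the shrinking center distance, which is what makes the centrodes spiral rather than circular.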

  14. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

    Science.gov (United States)

    Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

    2018-04-01

    In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.
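    A discrete linear Radon (tau-p) transform of the kind underlying Radon-domain processing can be sketched by slant stacking with nearest-sample lookup (the synthetic gather is an illustrative assumption, not the Sigsbee2B or Baltic Sea data):

```python
def slant_stack(data, dt, dx, slopes):
    """Discrete linear Radon (tau-p) transform by slant stacking.

    data[trace][sample] is summed along lines t = tau + p * x for each
    slope p, using nearest-sample lookup; energy along a linear event
    focuses at its (tau, p) point.
    """
    ntr = len(data)
    ns = len(data[0])
    out = [[0.0] * ns for _ in slopes]
    for ip, p in enumerate(slopes):
        for itau in range(ns):
            s = 0.0
            for ix in range(ntr):
                it = itau + int(round(p * ix * dx / dt))
                if 0 <= it < ns:
                    s += data[ix][it]
            out[ip][itau] = s
    return out

# Synthetic gather: one linear event t = 2 + 1.0 * x (dt = dx = 1).
data = [[0.0] * 8 for _ in range(3)]
for ix in range(3):
    data[ix][2 + ix] = 1.0
radon = slant_stack(data, dt=1.0, dx=1.0, slopes=[0.0, 1.0])
```

    Because linear events focus to points in the tau-p domain, interpolation and extrapolation of missing near-offset traces there is less sensitive to irregular spatial sampling than in the time-space domain.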

  15. Study on the algorithm for Newton-Rapson iteration interpolation of NURBS curve and simulation

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    The Newton-Raphson iteration interpolation method for NURBS curves suffers from long interpolation times, complicated calculation, and step errors that are not easily controlled. To address these problems, this paper studies the Newton-Raphson iteration interpolation algorithm for NURBS curves together with its simulation. Newton-Raphson iteration is used to calculate the interpolation points (xi, yi, zi). Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems, and that the algorithm is correct and consistent with the requirements of NURBS curve interpolation.
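    The Newton-Raphson parameter update at the core of such an interpolator solves for the parameter value at which the curve has advanced by the commanded step length. A minimal sketch on a cubic Bézier stand-in (the curve and step length are illustrative; a NURBS evaluator would replace `bezier`):

```python
import math

def bezier(p, u):
    """Point on a cubic Bezier with control points p[0..3] (2D tuples)."""
    b = [(1 - u) ** 3, 3 * u * (1 - u) ** 2, 3 * u ** 2 * (1 - u), u ** 3]
    return tuple(sum(b[i] * p[i][d] for i in range(4)) for d in range(2))

def bezier_deriv(p, u):
    """First derivative of the cubic Bezier at parameter u."""
    b = [-3 * (1 - u) ** 2,
         3 * (1 - u) ** 2 - 6 * u * (1 - u),
         6 * u * (1 - u) - 3 * u ** 2,
         3 * u ** 2]
    return tuple(sum(b[i] * p[i][d] for i in range(4)) for d in range(2))

def next_parameter(p, u0, step, iters=20):
    """Newton-Raphson: find u with |C(u) - C(u0)| == step (chord length)."""
    x0, y0 = bezier(p, u0)
    u = u0 + 1e-3  # small positive start so the distance is nonzero
    for _ in range(iters):
        x, y = bezier(p, u)
        dx, dy = x - x0, y - y0
        dist = math.hypot(dx, dy)
        tx, ty = bezier_deriv(p, u)
        g = dist - step                    # residual of the step equation
        dg = (dx * tx + dy * ty) / dist    # d(dist)/du
        u -= g / dg
    return u

# Degenerate straight-line "curve": C(u) = (3u, 0), so a 0.3 step gives u = 0.1.
p = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
u = next_parameter(p, 0.0, step=0.3)
```

    A CNC interpolator repeats this update every servo cycle, with the step derived from the commanded feed rate and the local curve speed.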

  16. Potential of deterministic and geostatistical rainfall interpolation under high rainfall variability and dry spells: case of Kenya's Central Highlands

    Science.gov (United States)

    Kisaka, M. Oscar; Mucheru-Muna, M.; Ngetich, F. K.; Mugwe, J.; Mugendi, D.; Mairura, F.; Shisanya, C.; Makokha, G. L.

    2016-04-01

    Drier parts of Kenya's Central Highlands endure persistent crop failure and declining agricultural productivity. These have been attributed, in part, to high temperatures, prolonged dry spells and erratic rainfall. Understanding the spatial-temporal variability of climatic indices such as rainfall at the seasonal level is critical for optimal rain-fed agricultural productivity and natural resource management in the study area. However, the predominant setbacks in analysing hydro-meteorological events are occasioned by lacking, inadequate, or inconsistent meteorological data. As in most other places, the sources of climatic data in the study region are scarce and limited to single stations, with persistent missing/unrecorded data making their utilization a challenge. This study examined seasonal anomalies and variability in rainfall, drought occurrence and the efficacy of interpolation techniques in the drier regions of eastern Kenya. Rainfall data from five stations (Machang'a, Kiritiri, Kiambere, Kindaruma and Embu) were sourced from both the Kenya Meteorology Department and on-site primary recording. Owing to ongoing experimental work, automated recording of primary dailies at Machang'a has been in place since the year 2000; thus, Machang'a was treated as the reference station (for period of record) in selecting the other stations in the region. The other stations had data sets of over 15 years with less than 10 % missing data, as required by the World Meteorological Organization, whose quality check is subject to the Centre for Climate Systems Modeling (C2SM) through the MeteoSwiss and EMPA bodies. The dailies were also subjected to homogeneity testing to evaluate whether they came from the same population. The rainfall anomaly index, coefficients of variance and probability were utilized in the analyses of rainfall variability. Spline, kriging and inverse distance weighting interpolation techniques were assessed using daily rainfall data and
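    Of the three techniques assessed, inverse distance weighting is the simplest to sketch (the station coordinates and rainfall values are illustrative):

```python
def idw(points, values, x, y, power=2.0):
    """Inverse distance weighting: station weights fall off as 1/d**power."""
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return v  # exactly at a station: return its observed value
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

# Three hypothetical rain gauges and their daily totals (mm).
stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
rain = [10.0, 30.0, 50.0]
```

    Spline and kriging differ mainly in how the weights are chosen: splines minimize a smoothness penalty, while kriging derives weights from a fitted variogram; IDW fixes them by distance alone, which is why it can underperform under high spatial variability.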

  17. Interpolation from Grid Lines: Linear, Transfinite and Weighted Method

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2017-01-01

    When two sets of line scans are acquired orthogonal to each other, intensity values are known along the lines of a grid. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid l...
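    Transfinite interpolation, one of the three methods evaluated, blends known boundary values into the interior of a unit square via the Coons formula. A minimal sketch (the boundary functions are illustrative):

```python
def transfinite(bottom, top, left, right, x, y):
    """Coons/transfinite blend of four boundary curves on the unit square.

    bottom(x) = f(x,0), top(x) = f(x,1), left(y) = f(0,y), right(y) = f(1,y);
    the curves must agree at the four corners. The corner terms are
    subtracted because they are counted twice by the two linear blends.
    """
    blend = ((1 - y) * bottom(x) + y * top(x)
             + (1 - x) * left(y) + x * right(y))
    corners = ((1 - x) * (1 - y) * bottom(0.0) + x * (1 - y) * bottom(1.0)
               + (1 - x) * y * top(0.0) + x * y * top(1.0))
    return blend - corners

# Boundaries of f(x, y) = x + y: the blend reproduces the function exactly.
f = lambda x, y: transfinite(lambda s: s, lambda s: s + 1.0,
                             lambda s: s, lambda s: s + 1.0, x, y)
```

    For grid-line data the same formula is applied cell by cell, with each cell's four bounding scan-line segments playing the role of the boundary curves.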

  18. Sparse representation based image interpolation with nonlocal autoregressive modeling.

    Science.gov (United States)

    Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

    2013-04-01

    Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

  19. Some observations on interpolating gauges and non-covariant gauges

    Indian Academy of Sciences (India)

    We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of the boundary-condition-defining term. We show that the boundary condition needed to maintain gauge-invariance as the interpolating parameter ...

  20. Mandibular transformations in prepubertal patients following treatment for craniofacial microsomia: thin-plate spline analysis.

    Science.gov (United States)

    Hay, A D; Singh, G D

    2000-01-01

    To analyze correction of mandibular deformity using an inverted L osteotomy and autogenous bone graft in patients exhibiting unilateral craniofacial microsomia (CFM), thin-plate spline analysis was undertaken. Preoperative, early postoperative, and approximately 3.5-year postoperative posteroanterior cephalographs of 15 children (age 10+/-3 years) with CFM were scanned, and eight homologous mandibular landmarks digitized. Average mandibular geometries, scaled to an equivalent size, were generated using Procrustes superimposition. Results indicated that the mean pre- and postoperative mandibular configurations differed statistically. Thin-plate spline analysis indicated that the total spline (Cartesian transformation grid) of the pre- to early postoperative configuration showed mandibular body elongation on the treated side and inferior symphyseal displacement. The affine component of the total spline revealed a clockwise rotation of the preoperative configuration, whereas the nonaffine component was responsible for ramus, body, and symphyseal displacements. The transformation grid for the early and late postoperative comparison showed bilateral ramus elongation. A superior symphyseal displacement contrasted with its earlier inferior displacement; the affine component translocated the symphyseal landmarks towards the midline. The nonaffine component demonstrated bilateral ramus lengthening, and partial warps suggested that these elongations were slightly greater on the nontreated side. The affine component of the pre- and late postoperative comparison also demonstrated a clockwise rotation. The nonaffine component produced the bilateral ramus elongations, with the nontreated-side ramus lengthening slightly more than the treated side. It is concluded that an inverted L osteotomy improves mandibular morphology significantly in CFM patients and permits continued bilateral ramus growth. Copyright 2000 Wiley-Liss, Inc.
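    The Procrustes superimposition used to generate the average mandibular geometries (translate to a common centroid, scale to unit centroid size, rotate to best fit) has a closed-form rotation in 2D. A minimal sketch (the landmark sets are illustrative, not the cephalometric data):

```python
import math

def procrustes_align(ref, shape):
    """Align `shape` to `ref`: center, scale to unit centroid size, rotate.

    2D ordinary Procrustes; the optimal rotation angle is
    atan2(sum(y_r*x_s - x_r*y_s), sum(x_r*x_s + y_r*y_s)).
    """
    def center_and_scale(pts):
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        c = [(x - cx, y - cy) for x, y in pts]
        size = math.sqrt(sum(x * x + y * y for x, y in c))  # centroid size
        return [(x / size, y / size) for x, y in c]

    r = center_and_scale(ref)
    s = center_and_scale(shape)
    num = sum(ry * sx - rx * sy for (rx, ry), (sx, sy) in zip(r, s))
    den = sum(rx * sx + ry * sy for (rx, ry), (sx, sy) in zip(r, s))
    theta = math.atan2(num, den)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return [(x * cos_t - y * sin_t, x * sin_t + y * cos_t) for x, y in s]

# A unit square and the same square rotated by 90 degrees.
aligned = procrustes_align([(0, 0), (1, 0), (1, 1), (0, 1)],
                           [(0, 0), (0, 1), (-1, 1), (-1, 0)])
```

    Averaging the aligned landmark sets then yields the mean configuration on which the thin-plate spline transformation grids are computed.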